[MUSIC PLAYING] PETER-MARK VERWOERD:
Good afternoon, everyone. Welcome to the last session
of the first day of Next. Hope you guys have
had a good time, and I hope you
enjoy this session for our last one of the day. I am Peter-Mark Verwoerd. I'm the Head of
Migration Architecture here at Google, based
out of New York. I'm going to be working with-- or I'm going to be speaking
with, beg your pardon, Rae Wang, Jeff Allen
and Morgante Pell. And what we're going to do
is go over cloud foundation and how to automate that
foundation in the cloud. A cloud foundation is what you need to create
before you move or create a single compute
resource in the cloud. It's what you need to make sure
everything's working properly. I like to use the
analogy of a house because I think it
really matches up with what you're doing in the cloud. If you think about it,
the cloud foundation is what you need before you
start putting in windows, putting in doors, painting-- all these things. You need a solid foundation,
you need plumbing, you need wiring before you
build a house that's solid. And what we want
to do is really go over what will make your
house in the cloud solid. So as you can probably
guess from my title, I work with a lot of customers
who are moving to GCP. And when I work with
these customers, we follow a relatively
prescriptive process. And the reason we do
this is because it works. It probably doesn't seem
particularly different from things you've seen
at other companies, but consistency around investing
in our customers in this plan allows us to do what
we do over and over and over again, and
repeat that success. So when we first engage
with customers, we come in and we figure out
what you've got. We assess the challenges,
limitations of the systems that you have
today, and find out what your goals, hopes,
plans, and requirements are for the future. And this is how and when we plan what you're going to migrate. This is before anything
is done in GCP. This is just a conversation
between us and our customers. After that, we build
the foundation. We design how we
are going to take the solution and technologies
available in Google Cloud and connect them to the
problems, environments, and applications
that you have today. And then we establish
a place to build all of those applications in
a secure and agile manner. And then we help
you deploy that. We're never going to go away. So we are working together
on a deployment plan to make sure it's successful,
to make sure we follow through with the things that
we plan together, that we discuss together, that
we created in the foundation together, and make sure
that you have access to technical
resources on our side so that those systems work
how they're supposed to. And we work through performance. We work through management. We work through
operations of all of that. And then we're still not done. Then we start talking
about optimization. We start looking at, what
are the long-term plans? What could you do
with what you have? Where would you like to improve? What is not working as
well as you would like to? What are the new
things that you'd like to try out that you
haven't had a chance to try out? And this is a process
that continues as our relationship
with you continues. This is not something that
happens and then we're done. We keep on working with you. And so the optimization
is the last step, but it is the last
step that is ongoing throughout our relationship
with our customers. So today, though,
we are just going to focus on that second
step, on the foundation. When I engage with
customers, I tend to harp on this quite a bit. Because I feel like
a solid foundation really gives you the
flexibility and possibility to have a successful migration. And that requires doing a
lot of that work upfront. And so while we're
just going to focus on the foundation
in this session, there are a lot of other
sessions happening at Next that you can attend
that are going to cover different aspects of
the overall migration journey. And we've got a few suggestions
at the end of this session that you might be interested in. So let's get to it. What we're going to cover
today is the minute detail of how we're going to
build out a foundation. We're going to cover identity
and organization in the cloud. And then once you have that
part of the foundation, then you start adding, onto
that, network and security. And so these are kind of the
very core, important parts of your foundation. This is what you need to have-- again, what I said-- the
secure and agile place for applications to reside. And then what we're
going to do is, once we've covered all that,
we're going to show you how you could automate
that entire process, and have it spun up in a
repeatable way with code that you can check in,
that you can version, that you can build on, and
have a foundational cloud that you can automate. So with that, I'd like to
hand it off to my colleague, Rae Wang, who will discuss
identity and organization. Thank you. RAE WANG: Hi, my name is Rae. I lead the product team at
GCP working on [INAUDIBLE] and Policy Management. As Peter-Mark just said,
setting up identity, organization, and
access policies are the first steps
to a GCP migration. So let's walk through them. Now, you want to make sure
that the data and resources you have on GCP are very secure. So for anybody to
access them, first they have to be authenticated
and authorized. Authentication and authorization
work hand-in-hand, pretty much like your passport and visa. Authentication
proves who you are, and authorization proves
what you have access to. They are also often known
as the identity and access management, IAM. So for you to authenticate
to talk to GCP, you have to have an identity
that's understood by GCP. The same identity that you also
use to access Gmail and Docs can be used to access
GCP APIs, as well. And if you are an
Enterprise customer, you need to do more to
actually manage and track the identities. And if you're
looking for an IDP, one great option here is
to use Cloud Identity. Cloud Identity gives
you a central place to create, curate, and manage
the credentials of a company. If you're a GCP-only
customer, you could take advantage
of Cloud Identity without having to sign up for G Suite. Now if you use Cloud Identity,
because Google manages your login, you get the benefit
of secure login, password reset, session device tracking,
and can know where and when people have logged in. And because Google
tracks the logins, Google can detect
suspicious activities, and it can be configured to
alert you when they happen. So Cloud Identity
is a great option if you're [INAUDIBLE] a cloud
company, don't have an existing IDP, and you're looking
for a simple solution. But what if you already
have invested in an IDP, and you have a process that works for you that you want to continue to use? You can continue to
use your existing IDP with GCP via Federation
and single sign-on. One single sign-on,
SSO, is configured. Whenever one of your
identities comes to GCP, they will be redirected to
your IDP for authentication. And because it's pretty common
that your user has already been authenticated
in your own system, that means they will get
instant access to GCP APIs. So for SSO to work, the other
thing that needs to exist is that the users you redirect to GCP have an account that's known to Google and already exists in Google. To help with that, we have a
tool called GCDS, Google Cloud Directory Sync, that will
help you to automatically sync all of your users
and groups into Google. Using Federation with SSO is
a very common configuration used by most of our
largest customers, with some syncing up to a few
hundred thousand accounts. So how do you set it up? Well, setting up
SSO is very simple. There are three links to your
existing identity system. And SSO is built
on top of SAML 2. As some of you may know, SAML 2
is the secure industry protocol for exchanging user credentials. And the user attribute we care about here is the user name. Now, SAML 2 also
offers the option to digitally sign information
to prevent tampering. To do that, you can
provide the information on the digital certificate
used to verify your signature. When you do that,
you can make sure that all the information
coming through SSO is indeed coming from you and
has not been tampered with. And what if your system
does not support SAML 2? Well, you can also use one
of the third-party options, such as [INAUDIBLE], to bridge the gap here. So once you use Cloud Identity,
whether as your own IDP, or whether as a
Federation tool, you also get access to other
advanced features that are offered on top of GCP. One such example is BeyondCorp. BeyondCorp is an
enterprise-grade security model that's based on six years of learning from building zero-trust networks at Google, combined with best-of-breed ideas and practices from industry. And now, BeyondCorp is available to you as a GCP service called IAP, Identity-Aware Proxy. Using IAP, your administrators
can define which users or groups-- and under which context, such as network location or device security-- can access the applications you have that are running on top
of GCP, as well. So with all these benefits,
how do you get Cloud Identity? It's very simple to
use as a GCP customer. You go to the GCP Cloud
console, you click on Identity. Here, you see a
page which has links to all kinds of documentation. You can click through to the Google Admin Console to manage your users and groups. And if you click on
the Sign Up button, it will take you
through a guided flow to sign up for Cloud Identity
for your company's domain. At the end of this
flow, a GCP organization will automatically
be provisioned for your Cloud Identity domain. We'll talk more about
organizations a little later. And here, you will
also have the option to go through guided
flows to set up the rest of your
organization-related stuff, such as IAM,
organization policies, and billing for a company. So once you sign up
for Cloud Identity and have Federation set
up, your users and groups will be synced into Google. And we talked about
users are important, but why do you
need to use Groups? So we all know that,
in a large enterprise, you might have thousands,
or even tens of thousands of users. It would be really
cumbersome if you had to manage access for each individual one of them. So instead, we want to
manage at the group level. Groups provide a level of
abstraction between your users and the resources
they need to access. So let's take an example. If you have a Secop team, they
need the security admin role in your organization so they can
configure the security settings for the organization. And that team has 10 people. So what you want to do
is to create a group, have these 10 members join the group, and then assign the group the security admin role in your organization. Now if, after that, you have
new members joining the group, all you have to do is add the user to the group. You don't have to go and individually assign the user the permissions they need on different resources. They automatically inherit what the group has. What about if the team's
job function changes? Now they also need
the log viewer role so that they can watch for
critical admin setting changes. All you need to do is assign the log viewer role to the SecOps team group in your organization. Again, you won't
have to go and change the setting for the
individual users. You can manage groups either
through Cloud Identity, or in your own IDP
via Federation. But in either case,
you want to make sure that you safeguard your
group membership by setting additional policies,
such as the policy to prevent outside identities
from joining your company's domain groups, or the
permissions on who is able to create new
groups of their own.
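To make that concrete: if you were to codify the earlier SecOps example in Terraform (a tool we'll come back to at the end of this session), the group-level grants might look like the following sketch. The org ID and group address are hypothetical.

    # Grant the SecOps group the security admin role at the org level.
    resource "google_organization_iam_member" "secops_security_admin" {
      org_id = "123456789"                 # hypothetical org ID
      role   = "roles/iam.securityAdmin"
      member = "group:secops@example.com"  # hypothetical group
    }

    # When the team's job function grows, grant the same group
    # the log viewer role instead of touching individual users.
    resource "google_organization_iam_member" "secops_log_viewer" {
      org_id = "123456789"
      role   = "roles/logging.viewer"
      member = "group:secops@example.com"
    }

So we talked about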
users and groups. There is a third
type of identity that's often used with GCP,
which is service accounts. So unlike users and groups,
which are identities for human beings,
your service accounts are the identities for your
automations and programs that are running on top of GCP. Your users can type a username and password to log in. A service account is not going to sit in front of a screen and type. So instead, they use
keys to authenticate. One or more service account keys
can be created for each service account. And since these keys grant access to your sensitive data and resources, you
want to make sure that you safeguard the keys. If you're running
applications on top of Google with GAE, GCE, or GKE, we
will keep your keys secure, and then automatically
rotate them every few hours. So that way, you don't
have to download keys, and you don't have to risk
losing or leaking your keys. What if you run
applications elsewhere in a hybrid cloud environment? That happens quite often. Well, you can download
the keys to be used from your applications
so your application can access GCP. However, here, you really
need to be careful about where you store those keys. They give access
to sensitive data, so you want to store them in a
secure service, such as a vault. And what
you don't want to do is just to store them
in plain text on GitHub. The other interesting thing about service accounts is that a service account is not only an identity, it's also a resource. So when you have a VM that's
running as a service account, here, the service
account is acting as the identity-- the VM
assumes the service account identity to get access
to resources and APIs. However, at the
same time, you also want to be able to control
which users, which VMs, are able to assume
this identity. So here, the service
account acts as a resource, and you can grant access
to the service account by granting the service
account "act as" role. Now, a very common way of using
service accounts is with GCE VMs. When you use a service
account to run GCE VMs, you can subpartition all
the VMs in your project into different components
with different identities. For example, if you
have one component that needs to access a
[INAUDIBLE] bucket, and a different
component that needs to access a big
query data set, you can assign each one
a different identity, and then grant each identity
the least privilege required.
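As a hedged sketch of that pattern in Terraform (the component name and bucket are made up for illustration), you would create one service account per component and scope its grant to a single resource:

    # One identity per workload component.
    resource "google_service_account" "frontend" {
      account_id   = "frontend-sa"
      display_name = "Frontend component"
    }

    # Read-only access to a single bucket, and nothing more.
    resource "google_storage_bucket_iam_member" "frontend_assets_reader" {
      bucket = "example-assets-bucket"  # hypothetical bucket
      role   = "roles/storage.objectViewer"
      member = "serviceAccount:${google_service_account.frontend.email}"
    }

You can change the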
service account on a VM without having to restart the VM. And later, in the
network section, we're going to talk
about how to use these attributes of
the service accounts to create more secure
firewall rules. So once you have your identity's
domain set up with GCP, an organization is
automatically created for you. The GCP organization is the
root node for all the resources in your company. So all of your VMs,
buckets in the projects, and so on live under
the organization. There's a 1-to-1 relationship
between the identity domain and the GCP organization. And using this
relationship, we're able to enforce rules such
that all the projects, and the resources that are
created by your company's employees, will belong to
your organization node. And this ensures
that there's no more [INAUDIBLE] or hidden projects. The central admin will
have control and visibility to everything. So more about the
resource hierarchy. Well, we talked
about you don't want to manage individual users. You also don't want to manage
individual VMs and storage objects, right? You're going to end up
with thousands of them and, in the case of
objects, millions of them. They're cattle, not pets. So to scale your operations, you want to be able to run operations and set policies at a group level. And the most common way of
grouping resources in GCP is through the
research hierarchy. At the bottom level, you have
your VMs, storage objects, and buckets. Above that, you have projects. Projects are the most basic
unit for ownership and policies in GCP. They are often used to represent a team or environment or service. And above projects,
you have folders. They're hierarchical
and they can be used to model your
departments or organization units. And then, above the folders,
you have the organization node, which is, again, your root node. So once you have set up
your resource hierarchy, it makes a lot of the
organization-wide management tasks much simpler. For example, you can
control who in your company is or is not allowed
to start new projects and create additional resources. You can do so by assigning the
project creator role at each of the levels in the hierarchy.
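For example, here is a minimal Terraform sketch of granting the project creator role on one folder, so that only a named team can create projects under it (the org ID and group are hypothetical):

    resource "google_folder" "dev" {
      display_name = "dev"
      parent       = "organizations/123456789"  # hypothetical org
    }

    resource "google_folder_iam_member" "dev_project_creators" {
      folder = google_folder.dev.name
      role   = "roles/resourcemanager.projectCreator"
      member = "group:platform-team@example.com"
    }

You can also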
automate the building of an inventory of your
resources and policies by programmatically traversing
through the org, the folders, projects, their
resources, and so on. Another common
practice is to assign a service account permissions at the org level, so the service account
can go and patrol the configurations
in your organization to enforce compliance. And in fact, this is what
Google's own security team does to manage our
Google.com GCP resources, as well. So resource hierarchy is great. It defines a very strict
ownership hierarchy, and it gives you
policy inheritance. But there are many times you also need
relationships that are outside of the hierarchy. For example, you can
have two departments. You have a storefront team
and an inventory service, each of which can
have a production environment versus a
dev testing environment. How do you define
each environment? One way of doing that is with labels. Labels are key-value pairs
that are user-defined and can be attached to resources. So in this case, you
can assign environment equal to prod versus dev to the
different projects in the two different departments. Once you have
assigned labels, you can search for resources
based on your labels. And labels are also
included in billing export, so you can use that to
do a cost aggregation for your different
efforts and initiatives.
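In Terraform, labels are just a map on the resource; a hedged sketch for one of those projects might look like this (the IDs and folder reference are illustrative):

    resource "google_project" "storefront_prod" {
      name       = "storefront-prod"
      project_id = "storefront-prod-12345"      # must be globally unique
      folder_id  = google_folder.storefront.id  # assumes a storefront folder

      labels = {
        environment = "prod"        # versus "dev" for the testing project
        department  = "storefront"  # usable for cost aggregation later
      }
    }

So once you have the resource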
hierarchy and labels, we can also define
hierarchical policies. Some examples here-- IAM, Identity and Access Management-- organization policies,
which is about constraining the configurations in a
company, and hierarchical quota. So IAM is the policy that
allows you to tell who can do what on which resources. The who part refers
to identities. And the resource
part can be defined by either individual
resources, like VMs or buckets, or
hierarchical nodes, such as projects and folders. The "can do what"
part, in the middle, is defined by an IAM role. An IAM role often maps to a job function-- for example, the instance admin for a team-- and it is constructed as a group of permissions. So here, if you need to manage all the instances for a department, you need to be able to get and list them, start and stop them, and so on. So we group these permissions together and make them an IAM role. In this example
here, you can see that we're assigning
the instance admin role to a number of
users on project A, which means these users
can then use the role to manage, to run operations
on instances in this project. Now, GCP ships, out of the box, over 100 curated roles that have been built for you. Hopefully, they will map to a
lot of your job functions. But if you need more, you
can also build custom roles to specifically tailor roles
that match your business needs. So another new capability in IAM that we're going to be launching in the next few days-- this is a sneak preview-- is IAM Conditions. IAM Conditions allows you to further restrict IAM grants by additional client or resource attributes. So for example, I can assign
Peter-Mark storage admin on a bucket, but
with the condition that he can only
access the objects that match a particular prefix. So you can grant
sub-bucket-level IAM that way. You can also use conditions
to do context aware access. For example, you can
assign permission to access particularly
sensitive resources, but only when the client
traffic is coming from a trusted network, maybe a company's VPN. And later on in the
network section, we're also going
to talk about how to use context-aware access with your VPC to set the boundary at your
organization level, as well. Another common use of conditions
is to set time-bound access policies. So you can grant
temporary access and have it auto-expire
when time is up. This is often used,
for example, if you're an admin, for break-glass situations, or to align with your team's on-call rotations.
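As a sketch of what such a time-bound grant could look like in Terraform once IAM Conditions is available (the user, project, and expiry timestamp are all hypothetical):

    # Temporary instance admin access that expires automatically.
    resource "google_project_iam_member" "oncall_temp_admin" {
      project = "storefront-prod-12345"
      role    = "roles/compute.instanceAdmin.v1"
      member  = "user:jane@example.com"

      condition {
        title      = "on-call-rotation"
        expression = "request.time < timestamp(\"2019-01-01T00:00:00Z\")"
      }
    }

So IAM policy focuses on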
the I, on the identity. It differentiates access
based on who you are. Its sibling,
organization policy, allows the admin to set secure
configuration constraints for the organization. It differentiates
based on resources and applies to
all the identities that access a
particular resource. Organization constraints
can be set on the org level, and it will be inherited
into all of the projects and resources underneath. And it can also be either
augmented or overridden at other levels of the organization,
such as folders and projects. So here's some examples
of the organization policy that are available today. You can, for example,
limit the usage of VM nested virtualization, external IPs, or the serial port. You can also use it to
control your IAM policies. For example, preventing
external identities from being added to the access
of your company's resources, or prevent the creation of
service accounts or keys in a particular project. So let's look at one of them
in more detail: the trusted image policy. So if you use GCE,
you might already be aware of the GCE image
sharing feature, which allows you to curate your
images under one project and share them with VMs in other projects. Now, once you've set up the
corresponding org policies, you can whitelist a set of projects hosting the images that are allowed to be used as safe for your organization. For example, you can limit it to only the images that are published by Google, or only the projects containing images that have been fully tested and blessed by your security team. And once you have configured
this policy, the next time your developers come and
try to create a VM, they will only be able to
choose from these safe images that you have
defined by policy.
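This particular policy is also easy to codify. A hedged Terraform sketch, with a hypothetical org ID and image project:

    # Only allow boot images curated in a vetted image project.
    resource "google_organization_policy" "trusted_images" {
      org_id     = "123456789"
      constraint = "compute.trustedImageProjects"

      list_policy {
        allow {
          values = ["projects/golden-images-prod"]  # hypothetical project
        }
      }
    }

OK. So that was a lot. Let's put it all together and look at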
an end-to-end example of how to configure identities,
organization, and access policies for
an enterprise company. So if you run an
enterprise company, you probably have
multiple levels of organizations that are
built up over the years. In this day and
age, you probably care about security and
privacy and separation of duty. And you likely also already
have existing on-prem assets and processes that
your cloud deployments need to be able to work with. So this is a typical setup. You might have your
on-prem [INAUDIBLE] system hosting identities,
and then Federate them into Google via Cloud Identity. And an organization,
though, would be set up to match your Cloud
Identity account, and would give your admins
central visibility and control of all your resources. Organization policies
can be set up to prevent unsafe configurations
for your organization, and protect compliance
and security requirements. And it is often that
procurement decisions are made at the org
level centrally, instead of every
developer deciding what to use on their own. And then, resource provisioning,
as well as major configuration changes, are driven
through existing processes, such as a ticketing
system or a CICD pipeline. Now, the benefit of doing
that is that your developers won't need to have direct
access to the actual resources, especially in a
production environment. Instead, they can request
that all the changes be driven through existing
automated workflows, and that everything will leave
a trace through your git-repo, your CICD pipeline. And then you can set up
your folders and projects to model your teams and
departments so that they can have very separate policies,
as well as different cost allocation. So if you have your HR
department with their apps, and your storefront
department with their apps, you can make sure that they can
each get very different access levels and compliance policies. And when it comes
to billing time, you can also figure out how
much each department has spent. So that was a quick
tour through setting up identities, organization, access
policies for your organization. Another important
step in your migration will be figuring out how to
configure network settings. So for that, let's welcome
on stage Jeff Allen. [APPLAUSE] JEFF ALLEN: Hi, I'm Jeff Allen. I'm a Solutions Architect
here at Google Cloud. And what we're going
to talk about is-- building on this conversation
of identity and organization-- some of the foundations and some
of the foundational decisions we need to make in the area
of network and security, focusing really on
three primary topics-- the topology, the
structure, and the layout of our network, as well
as network connectivity. As we design this network to
interact with other systems, how are we going to
create that connectivity? And in regards to
security, how are we going to do access
control and auditing and have visibility into what's
going on in those environments? So let's start this
topology discussion by looking at the Google
Cloud global network. Now, you may have seen
this depiction before, where we've got
all of our regions up here, along with
the points of presence, where you can ingress onto
the Google private network. And all the lines
connecting the dots are the miles of our
private fiber sub-sea cables that really create what
differentiates Google Cloud Platform, this global network
with very low latency, single-hop routing
around the globe. And it's this network
which provides the backbone for your global
VPCs on Google Cloud Platform. Now, if you're interested
in this map, by the way, I will call out that you can
get a copy of it, a poster, if you're interested. So if you go to this link,
g.co/poster, in the US, we'll mail you a free copy
of that wall-sized poster. And if you're not, you can
still download a high res image that you could use with your
own large format printer to print your own poster. So if you want to have one of
those hanging on your wall, check out this link. So it's within
that global network where we define our VPCs,
our Virtual Private Clouds. Those are the network
environments where we'll launch our resources. But the resources
themselves tend to exist in what
we call subnets. And the subnets aren't global,
they are regional resources. So we define subnets
in each region where we intend to maintain
some kind of a compute presence. The subnets span the zones
within those regions. But here's where we
have some decisions to make about the
topology of our network. How big do our
subnets need to be? How many subnets do we need? In what regions do we want
those subnets to exist? So we've got to consider our
own organization-specific requirements to make
those decisions. But there are some best
practices to keep in mind. Certainly, we want
to have subnets available in each of the regions
where we intend our teams to maintain compute presence. They'll need to be
sized appropriately, considering growth. But they're easily
extended, if you need to add additional
address space to a subnet. As we consider how to connect
these networks with our other networks-- a topic we'll
look at momentarily-- that relies, in general, on
having non-overlapping IP address ranges. So we want to work within
our own addressing scheme. Allocate some subnets
specific to GCP. So when you start connecting
this back on-prem, you have non-overlapping
address spaces to use.
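As a sketch in Terraform (we'll talk about automation tooling later in this session; the names and CIDR range here are hypothetical), a custom-mode VPC with one regional subnet might look like:

    resource "google_compute_network" "vpc" {
      name                    = "corp-vpc"
      auto_create_subnetworks = false  # define subnets explicitly per region
    }

    resource "google_compute_subnetwork" "us_east1" {
      name          = "us-east1-apps"
      network       = google_compute_network.vpc.self_link
      region        = "us-east1"
      ip_cidr_range = "10.10.0.0/20"  # carved from your on-prem scheme
    }

Also, consider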
drawing perimeters around the network
segments where you may want to control
which segments have access to the internet. Which segments
are internal only? Which segments maybe have
some special protection because of some
compliance framework that we're audited against? And divide up our environment
into these segments. And we'll look, throughout
this presentation, at how we can use network
routes and firewalls to create those segments within
our environment. Another consideration
regarding network topology is the relationship
of our networks to our project structure. Rae just showed us
how these projects exist within a hierarchy of
folders in our organization. Now, the VPC is a resource
that's owned by a project, and you certainly can have
your VM instances and your VPC network in the same
project, if you choose. But in many cases, because
of this desire for separation of duties, that's
not the way we tend to see enterprise customers
deploying their networks. There's this feature
called Shared VPC, which allows you to define
what we call a host project. And that host project is the
one that owns the network. This may be owned by a
network security team. They define the network, the
subnets, the firewall rules, and then they share access to
what we call service projects. And these service
projects may be owned by the individual
application workload teams. They're allowed to launch
resources into the shared VPC, but they themselves
don't have access to configure the network itself.
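A hedged Terraform sketch of that Shared VPC relationship (the project IDs are hypothetical):

    # The network team's project owns the VPC.
    resource "google_compute_shared_vpc_host_project" "host" {
      project = "network-host-project"
    }

    # An application team's project attaches to it as a service project.
    resource "google_compute_shared_vpc_service_project" "app_team" {
      host_project    = google_compute_shared_vpc_host_project.host.project
      service_project = "app-team-project"
    }

Now that we've got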
an idea of the way we're going to lay
out our subnets, we want to start thinking
about how the resources located inside those subnets
are going to interact with other resources
in other networks. Now, in this scenario,
we're interested in having our resources in our VPC subnets
interacting with resources that may exist in a
different VPC, maybe within our same project
or a different project within our org, or
maybe even a VPC that's owned by one of our partners. It may not even be ours. But if both parties
agree to establish this peering
connection, then you can have seamless
routing over the RFC 1918 addresses between
these two VPCs, allowing resources in
one VPC to communicate with resources in the other. It's one way to establish
that kind of connectivity.
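In Terraform, a sketch of one side of that peering might be (the partner network path is hypothetical, the local network assumes the earlier VPC sketch, and the peer must create the mirror-image resource on their side):

    resource "google_compute_network_peering" "to_partner" {
      name         = "peer-to-partner"
      network      = google_compute_network.vpc.self_link
      peer_network = "projects/partner-project/global/networks/partner-vpc"
    }

But what if we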
want the resources in our VPC to be able to
communicate with Google APIs? For instance, Google
Cloud Storage or BigQuery. We've got instances
and we want them to be able to interact
with those APIs. Well, if we provide an
external, public, routable IP address, and a route through
our internet gateway, certainly, those
instances could egress to the internet and interact
with the Google APIs. But in many cases, we don't want
instances egressing directly to the internet. So how can we provide
access to Google APIs without internet connectivity? Private Google access
provides us a way to do that. This is a subnet-level
setting, and when it's enabled, the instances in that subnet-- even without an
external IP address-- can egress through the
internet gateway, specifically the Google APIs, not
the internet at large. So this is a way of
providing connectivity between your instances
and Google APIs without the instances
actually having full internet connectivity.
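Since it's a subnet-level setting, enabling it in a Terraform sketch is one flag on the subnet resource (building on the earlier hypothetical network):

    resource "google_compute_subnetwork" "us_east1_private" {
      name                     = "us-east1-private"
      network                  = google_compute_network.vpc.self_link
      region                   = "us-east1"
      ip_cidr_range            = "10.10.16.0/20"
      private_ip_google_access = true  # reach Google APIs without external IPs
    }

But in many cases, we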
do want those instances to be able to communicate
with the internet. And one way of
accomplishing that without providing each
individual instance a publicly routable IP address is to route
their traffic through what we call a NAT gateway. Now, a NAT gateway
is just an instance that itself is configured with
an internet-routable address, bootstrapped to forward traffic. And then you can configure that instance as the next hop address for the routing of internet-bound traffic
from within your VPC. You could do a single-NAT
gateway, but in many cases, concerns for resiliency
and greater bandwidth would have us deploy multiple
NAT gateways, like we see here. Where we might deploy three
NAT gateways, one in each zone within the region,
where our subnet exists. Each of these NAT gateways is
deployed in a managed instance group with health checks. So they'll be health checked,
and if those health checks start to fail, that
instance will be replaced. And additionally, when
we configure the routing through these NAT
gateways, we're going to configure them
all with an equal priority. And that's going to allow us
to leverage ECMP, or Equal-Cost Multi-Path routing, which
is going to distribute our outbound traffic across
the aggregate bandwidth of all of these NAT instances. So it becomes
horizontally scalable. If you need more bandwidth
to egress the internet, you just add
additional NAT gateways with the same priority.
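The routing side of that, sketched in Terraform (the instance reference, network, and tag are hypothetical; you would repeat one route per NAT instance, all with the same priority, so ECMP spreads traffic across them):

    resource "google_compute_route" "nat_route_a" {
      name              = "nat-route-a"
      network           = google_compute_network.vpc.name
      dest_range        = "0.0.0.0/0"
      next_hop_instance = google_compute_instance.nat_a.self_link
      priority          = 800
      tags              = ["use-nat"]  # only tagged instances use this route
    }

Now, what if we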
want connectivity not to the internet, but to
resources that exist on-prem? So data center resources. We want to create connectivity
between our VPC and the data center. There's a number
of ways to do it. We've got a VPN solution. Layer 3 peering solutions. Note, there is no SLA for
those, so in many cases, we recommend layer 2
interconnect solutions. And there's this
decision tree, which can be useful to help us
evaluate which of these five options are appropriate for us. On the left-hand
side of the decision tree is where we're exploring
the peering options. In many cases, these
are used in cases where we want to establish
low-latency interactions for the G Suite APIs, for our
end users of G Suite, Google Docs, and Google Sheets. So for the purposes of
today's presentation, we're really going to focus
more on the right-hand side of this decision tree, which is
going to help us decide, well, is Cloud VPN appropriate? Or maybe a full 10-gigabit,
dedicated interconnect? Or if we don't need 10
gigabits of bandwidth, maybe just the fractional
bandwidth of a partner interconnect. But we'll also see momentarily
how you might use some of these together. You might start with
a cloud VPN and then move to an
interconnect-related solution. And I'll show you a common
pathway along that journey from Cloud VPN to interconnect. So what is cloud VPN? Cloud VPN is our IPSec
connectivity option, where you create a Cloud router
in GCP and a VPN gateway. And then you
configure any number of tunnels between
that VPN gateway and your on-prem routers. This allows you to have
RFC 1918 connectivity. It's encrypted
because of the IPSec. And it also allows you
to provide resilience in your environment because you
can define multiple tunnels. If you have multiple
routers, you can define a tunnel to each. You can also use ECMP again
if you need a higher bandwidth solution. You can expect between 1.5
and 3 gigabits per tunnel. But if you need
additional bandwidth, you can configure multiple
tunnels with equal priorities in their routing configuration,
which will, again, cause a routing to
distribute the traffic across the aggregate
bandwidth of those tunnels. So the other option that
we're going to focus on here is dedicated interconnect. Now, this is a physical link
in a colo facility, where you have a router,
Google has a router, we cross-connect those routers. And you can get a
full 10-gigabit link between your environments. We've got dedicated interconnect
locations around the globe. And you can check the website. We're frequently adding
additional locations. But remember we said that
interconnect is, in many cases, preferred because there are
supported configurations with interconnect where
there's an availability SLA. So let's take a look at a couple
of those configuration options. So this would be the dedicated
interconnect configuration if you're looking for three
nines of availability. Notice, in green, we've got your
router in the colo facility. Blue, we've got
the Google router. Notice there are two
links, not a single link, because there's no SLA for
a single-link configuration. But we've got two
links configured there. And those two links have to
fit specific requirements. There are two links that go
into separate physical zones within the same metro. If you notice, those dedicated
interconnect locations were defined in a metro area. And we use those metro
areas for purposes of availability planning, with
maintenance cycles and things like that. So you would have two
links into the same metro, but into different physical
zones within that metro. And that configuration will give
us three nines of availability. If we want to extend
that beyond three nines, and we're interested in
four nines of availability, what we'd essentially do is
duplicate that configuration. Notice now what we've done is
introduce a secondary region. So we've got a
similar configuration, where we've got
multiple links into two different physical
zones in the same metro. But then we've doubled it in
another region, two more links, and to two different physical
zones within a different metro in a different region. And by configuring
things that way because of the additional
geo redundancy, this configuration is supported
with four nines of availability in the SLA. We also discussed that path
from VPN to interconnect, which is a common journey
many of our customers take. Because sometimes, when
you're just starting out with a proof of concept,
or early in the project, it's very easy to
stand up a VPN tunnel, get that private
connectivity initially going. But as that proceeds, as the
initiative gains momentum, as you're formalizing your
platform, in many cases, you want to follow
that up with something like a dedicated interconnect. Now, this is going to show us
a way where you can do that, and transition
seamlessly from a cloud VPN to a dedicated interconnect. What we've basically set
up here is a cloud VPN. It's pretty standard. We've got a couple
of tunnels set up. Notice we're using ECMP
routing, as we discussed. The only thing
that's really unusual here is that we set our
BGP MED, or the priority, as we call it, to 100 instead
of its default value of 1,000. You'll see why that
matters when we bring the interconnect online. When we bring the
interconnect online, we bring it online with its
default priority of 1,000. So even as those routes start
to be advertised by GCP, the priority 100 routes
are going to be preferred. So your traffic is still going
to be flowing over the VPN. And then when you're ready
for the interconnect solution to go live, you just
reverse the priority values. Set the VPN priority to 1,000. Set the interconnect
priority to 100. And by doing that now,
the interconnect routes will be preferred. At that point, you
could potentially turn the VPN solution
off or, in many cases, we'll see customers leave it
there as a backup, a fail-safe. It still works. If something happened to
the interconnect route, the cloud VPN routes would then
be able to service the traffic. So we've explored topology. We've explored connectivity. We also want to explore
some of the security-related foundations in GCP. One of the common resources
that we're going to use are firewalls. Now, this slide is showing
us, on the left-hand side, a traditional enterprise model
where you might have a firewall appliance and you route
traffic through that appliance. In the cloud, we prefer
distributed systems. We like to avoid those
kinds of chokepoints and single points of failure. So it's interesting, in GCP,
that firewalls are implemented as a distributed system. There's not a
firewall appliance. You could run a virtual
appliance if you wanted that. But GCP's firewalls
are distributed enforced on the host, based
on rules that you define. And those rules
allow you to specify the source of the traffic, the
destination of the traffic, protocols and ports. Allow or deny that traffic. And then target that rule
to a specific instance, or set of instances. There's a couple
of ways to do that. You could do that,
all VMs in a network. You could do that
with target tags-- tags with little text values
that you tag onto the instance. You could say web server,
and use that web server to target a port 80 or 443 rule. You can also target
service accounts. Remember, Rae introduced
service accounts as the identity that we specify when we
launch that virtual machine. So that virtual machine runs
in the context of that service account identity. And we can actually
target firewall rules to that service
account identity, or to instances running in
that service account identity. And we'll see, on the
next slide, that's actually our preferred way
of targeting firewall rules, for the reason that
tags are mutable. You can change the
tags on an instance. And by changing the
tags, you're potentially changing the firewall
rules that are being enforced on that instance. And if we want to be really
sure that the firewall rules are defined appropriately and
being used appropriately in our environments,
if we target service accounts instead,
service accounts have control. In IAM, we can
specify which users are allowed to launch instances
in which specific service accounts. And service accounts can't be
changed on a running instance. In order to change
a service account, you have to stop the instance
first and then start it again. And for that reason, if we
target our firewall rules to service accounts,
we've got tighter controls over the configuration
of those rules, and who is allowed to,
say, launch a web server. Who is allowed to
launch a server according to other rules.
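Here is a hedged Terraform sketch of a firewall rule targeted at a service account rather than a tag (the network and the "web" service account are hypothetical names carried over from earlier sketches):

    resource "google_compute_firewall" "allow_web_ingress" {
      name    = "allow-web-ingress"
      network = google_compute_network.vpc.name

      allow {
        protocol = "tcp"
        ports    = ["80", "443"]
      }

      source_ranges           = ["0.0.0.0/0"]
      target_service_accounts = [google_service_account.web.email]
    }

This slide is introducing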
an alpha feature. You may or may not
have heard of it. There'll be some
discussion of it in some of the other
Breakout sessions here next. But this is Firewall Logging. This is a feature that you
can enable per firewall rule. And when it's enabled,
Firewall Logging will create a record
of every decision when a connection is
either allowed or denied because of that rule. Firewall Logging is going
to log that decision. And then you can
define filters that would select all or some of
those firewall logging records, and direct them to
Stackdriver logging, to Pub/Sub, GCS, BigQuery,
wherever you want those logs to be accumulated. So this is a great
way to have visibility into the security
decisions that are being made within
this environment that you've defined. Another logging feature
that provides visibility into our VPC network
environments is VPC Flow Logs. Now VPC Flow Logs
provide us a five-second, aggregated report for each-- what we call a 5-tuple flow. Now, 5-tuple is a
way that we uniquely identify a flow of traffic. The 5-tuple is the
source address and port, the destination address
and port, and the protocol. And that 5-tuple uniquely
identifies the flow of traffic. So for each 5-tuple
flow, every five seconds, you'll get a log
record that shows you the bytes transferred, the
packets, additional annotations for the VPCs and the geographic
information of that traffic. And again, you can configure
that using Stackdriver to filter some or
all of those records, and direct them to your
appropriate repository to keep those records available
for your analytics queries. One final feature that
we'll discuss-- again, this is a feature that's
in private beta now. And this is an interesting
way-- we talked before about the idea of perimeters. And as we begin to sort of
segment our environments and think about
security perimeters, how do we enforce
those perimeters? Let's imagine a scenario where
we have created a VPC network. We've got VM instances
running in that network, and we've allowed
them to egress, either via private Google
access or through NAT, or even with a public IP address. We've allowed them to egress and
interact with our Google APIs, right? So we've got VMs
running in the VPC, and they can interact
with Google Cloud Storage. They can write
data into buckets, read data from buckets. How do we know that the buckets
that those VMs are writing data to are actually buckets that
are owned by our company? What's preventing
data exfiltration? What's preventing a rogue
script from uploading data into some bucket that's actually
not owned by our company? That's what VPC Service
Controls are for. It allows us to define
a service perimeter. And that service
perimeter is essentially a grouping of
resources, including the resources running in our
VPC, the buckets, and other GCP resources. And it's only within
that service perimeter that resources are
allowed to interact. So this would, for
instance, make sure that instances in our
VPC can interact only with company-owned buckets. And in fact, instances
in other VPCs, or traffic from
the internet at large, would not be able to
interact with those buckets because that traffic comes from
outside the service perimeter. But you can extend
that service perimeter. If you do want to extend
it to allow traffic in from the internet, for
web serving scenarios, that's supported. Or if you want to extend it even
to your on-premises network, so your data center network
can become part of that service perimeter, that's
supported, as well. But this tool allows you
to then draw that boundary, create that collection
of resources, and harden that perimeter. And so this will be an
important new feature for our network security. So hopefully, that
intro has given us a feel for some of the design
decisions that we have to make, and some of the features
and tools that we have in GCP to help you create
your network foundations, create your security
foundations. Now I want to turn the floor
over to my colleague, Morgante, who's going to show
us how to automate some of these deployments. Thank you. [APPLAUSE] MORGANTE PELL: Awesome. Thanks, Jeff. So we spent a lot
of time talking today about how you can set
up an effective organizational hierarchy and set
up your network. But maintaining this over
time can be a lot of work if you don't have some
automation in place that allows you to enforce
the best practices across your organization. So every time you're
onboarding new users, onboarding new
projects into GCP, you can have that
done automatically, and have your codified
best practices. So putting your automation in
place gives a lot of benefits. If it's in source
control, you can actually collaborate on it. You can have pull requests. You can have suggestions of
changes to policies or changes to your network topology,
all collaborative. You can make changes through a pull request instead of only through conversations or manual actions. You can also ensure consistency.
every single project you're creating in GCP has the exact
same consistent settings-- the right subnets, the
right IAM permissions, the right rules, all defined exactly in code. You can also reduce
the manual effort. And so instead of having a single person go in and provision a new service account or a new service project, you can just have
the code automatically do that in a very consistent
and automatable way. And then you can enforce
policies proactively. So if you're saying
any of our dev projects can only get access
to dev VPC, if that's coded into your modules and
into your infrastructure's code, you can be confident
that every time you're creating those new projects,
they're not accidentally getting access to
networks or resources they shouldn't have access to. So this is what one of the
pipelines might look like. You'd have a set of common
modules, or templates, for what a project looks
like in your organization, or what a network
or subnet looks like in your organization. And you can combine
them with configs specific to a particular
application use case. So you could have
a, for example, subnetwork that, every
time you were creating a subnetwork for
your dev testing, that has to have
certain names, and then you would have a
service account that's creating the firewall rules. And that can [INAUDIBLE]
with the config specific to the application. So you take the configuration
and the modules together, combine them into a plan,
and then take that plan, and deploy it-- assuming it looks good-- to a non-prod environment,
to a dev environment, where you can actually
see it deployed. See, oh, great, we've
got the right resources, we've got the right project. And then, once
it's looking good, you can actually deploy
that same exact code, that same exact
infrastructure's code, and promote it into your
production environment. So taking that code and put it-- the same way you have an
application release cycle, you can have an
infrastructure release cycle. So projects and policies
are some of the things that benefit most from this, right? So if you stamp up projects in
a very standardized, consistent way, you can minimize
the proliferation of sensitive owner roles. So instead of having to have
individuals going and creating projects, in which
case, they, by default, get owner on that
new project, you can have a service
account, which is creating the
projects from code, and makes sure that only
people who actually need access to the project get access to it. And by default, nothing
besides the service account has access to it. You can also ensure that
you have access to shared resources, like the shared VPC. Or if you have a common
standardized image bucket, where you have
blessed companies base images, you can make sure all new
projects, and the service accounts in those projects,
get access to that. And this can be automated. We have Deployment Manager,
Terraform, or our public APIs, and your own tooling
using this public APIs. So an example flow
for a new project might be create the project,
create a Google App Engine app in that project, if your
company is using App Engine. Enable billing on the
project, and associate with the appropriate
billing accounts. Enable any allowed APIs
and only the allowed APIs on that project. And then create specific service
accounts for that project.
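A hedged Terraform sketch of a couple of those steps (the project ID and API are hypothetical; billing attachment appears in a later example):

    # Enable only the allowed APIs on the new project.
    resource "google_project_service" "compute" {
      project = "team-project-1234"
      service = "compute.googleapis.com"
    }

    # Create a dedicated service account for the project's workloads.
    resource "google_service_account" "team_app" {
      project      = "team-project-1234"
      account_id   = "team-app-sa"
      display_name = "Team application service account"
    }

And then finally,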
if you actually want to have developers, say,
be able to only view the logs in that
project, you could have the same product factory
process actually set the IM policies on the new project,
and give the right people access to it. So this is talking about
that methodology of how you have projects as code, right? Instead of projects only
going through the UI, you can have an
actual process where somebody wants a new project. You have a new team that they
want to get access to GCP. They're really excited to
start developing their code. What they would do is they
could go to a central projects repo, where all the projects
that you want in GCP are listed. They'd make up their
own branch, make some changes to add their
own project listed in there. And then make a pull
request that says, I want to add this project. And then you can
have some automated tooling that first looks at
that pull request and says, OK, good. It's on the right subnets, it
has the right configuration. It looks good from the
automated perspective. And then have a central
team actually go in and manually review
that pull request. Make sure that everything
looks good there, so that you have both the
automated and the manual sign off on it. And then once that
all looks good, you actually merge that
pull request, right? And then, once it's actually
merged into the master branch is when it actually gets created
from the service account. So no individuals are
creating projects. It's always going through
a pipeline through code. If you want to learn
more about that, we actually have a
session on Thursday that's actually
talking about how we apply those same
principles of policy as code to Kubernetes, and taking
those same principles of a pull request workflow for
changing your policies. So this works really well
for proactive defense, right? If you have policies that
are defined through code, and you're proactively
finding problems as you're creating
new infrastructure, that's really great, right? But there can be
drift over time. There might be some legacy
organization admins. There may be some
different services running. GCP sometimes even
changes certain things. For example, if you activate Kubernetes Engine, that will create some service accounts in your project. So you also want to
have reactive visibility and detection of any
policy violations. That's where an open source tool called Forseti comes in, which can do that violation detection retrospectively. It works by, first,
building inventory. So it has an inventory of all
the resources, all the way down to the individual
VM or firewall rule level within your
entire organization. And it takes that inventory
and it can actually run policy analysis over it. So you can say, all right. Here's all the firewall
rules in my organization. I should have no firewall
rules that target IP ranges, because we decided the best
practice that firewall rules only target service accounts. So you could have that
as one of your policies, and Forseti could flag any
rule that violates that policy. And then in some cases-- in all cases where there's
a policy violation, it would notify you-- notifying administrators
with an email. But also, it could actually
even automatically enforce that. So bring it back into
a compliant state, into compliance with
your code policies, in a really effective way. So that works with GCE
firewalls, and we're adding more over time. And there's also a Forseti
session on Thursday if you guys are interested
in checking that out, and diving really
deep into what Forseti can do to help you have
automated policy analysis. So we're talking a lot about
how you can automate all these things together, right? Obviously, Google supports this. We want you to be able to have
your infrastructure defined in code, which is why we have
Google Deployment Manager. It's our hosted native tool
for creating our infrastructure as code. It's a hosted service. So there is a UI for it, same
as any of our other products. You can go into
the cloud console and find information about
the Deployment Manager, see all of your
existing deployments, and even see what the status
of those deployments is. So what resources
did it create, what configuration values were
passed into those deployments? And one thing that's really
cool about Deployment Manager is it's totally API-driven. It's not custom logic
for each individual API that it's integrated with. It's using the same public APIs
that you see in our API docs to actually call those services. The really big advantage
of that is when-- you're here at Next, right? You're hearing about people
coming out with new features for all of their different
products all the time. When new capabilities are added
to an existing API, the Deployment Manager
can immediately get support for
those new features because it's just calling
the underlying API. So if there's a new property
added to a particular API, it will immediately be able to
call that API and get access. So you're not blocked, and you're not having to wait
for the automation to catch up with
where you're at, which is a huge benefit to
being able to adopt automation across the board for all
of your infrastructure. And templates, of course-- back to when we first
started talking, we want to have these
all be templatized. You want to have things
in a very consistent way. You're not custom writing config
for every single deployment you want to do. You want to have reusable
templates across the company. Those can be written in Python. So really expressive,
full language. You've got all the
power of Python to create your templates
through Deployment Manager. But if you want something
a little bit simpler, there's also Jinja2
templates, which I can show some examples of here. Cool. So this is an example of
some of the deployment types that we have. So on the left here is if
you wanted to do that project creation hierarchy. We were talking about how you
want to stamp out projects in a very consistent way. This is how you do it with a
Python template in Deployment Manager. So notice here, we've taken
a couple of parameters. We've taken a project ID, a
project name, and a folder ID. So we say we want to put
this into our dev folder, for example, and we
want to create that. We can take those
parameters in, and actually have Deployment Manager,
through this common template, do the correct association,
create that project in the correct folder. And actually, even
here-- it's at the end, it's a little bit cut off-- you can even do the billing
accounts association. So create that project
and then attach it to the correct billing account
as part of that template. Now, if you want to
go a little bit more advanced-- you're going really
deep into service accounts, and doing really
advanced custom rules, and you're pushing
the box and you want to have a very clear
definition of what a service account, what
permissions it is-- you can start doing IM custom roles. So if you want to
create a custom role, that can be done through the UI. But again, we want to think
about things as policy. We want these policies
to be in code, right? So if it's in code,
it's much easier to know what that
role is and change the permissions associated
with that role over time. So that's why, on
the right here, we have an example
of a Jinja template for creating a custom
role, where we've taken some properties
within the curly braces there, like a role ID, a
title for the role, some of the permissions that
are included in that role. All through the curly
braces there on the side.
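The Jinja source itself isn't reproduced in this transcript; as a hedged sketch of the same policy-as-code idea, here is an equivalent custom role written in Terraform rather than as a Deployment Manager template. The project ID, role ID, and permission list are illustrative assumptions:

```hcl
# Hedged sketch: a custom role as policy-in-code, expressed in Terraform
# rather than the Deployment Manager Jinja template shown on the slide.
# The project, role_id, title, and permissions are illustrative.
resource "google_project_iam_custom_role" "log_reader" {
  project     = "my-demo-project"
  role_id     = "logReader"
  title       = "Log Reader"
  description = "Read-only access to logs"
  permissions = [
    "logging.logEntries.list",
    "logging.logs.list",
  ]
}
```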
Now, what's really cool about Deployment Manager templates is we want them
to be reusable, right? You want to use them across multiple contexts. That's why, as you can see
here, there is the ENV part. It can actually pick
up the current project from the current context by
referencing the ENV part there. That allows you to take
that same template, run it against
different projects, and actually have those roles be created in the correct project. So it allows a
lot of flexibility to actually have these
templates adapt over time. So we've got a lot of examples of these. If you go on GitHub, we've published dozens of samples-- and we're publishing more every day-- of how you use
Deployment Manager to automate these things. Now Google, of course, wants to
make sure that we have a hosted and supported solution
for this, but we also embrace the open
source community. We know there are other
tools out there that can do all of this, which is why we even have
which is why we even have teams at Google that
work on maintaining really strong support for
open source tooling in GCP. So Terraform is an
example of this, where we want to
make sure there's a really strong integration
between Terraform and GCP, so that over time, you can very
easily use Terraform to define all of your
infrastructure and have it be configured automatically. So this is an example of how
you could use a service account credentials file-- that's a JSON credentials file
that you download from GCP-- to authenticate Terraform
to access our APIs. And you configure the project
that it's accessing there. And then down below,
we see an example of how you create a project
in Terraform, right? So you would say this
is the project ID, this is the billing
account ID that I want to attach that project
to, an org ID, and then a name for that project. And that can be done very easily in Terraform.
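As a rough sketch of what that slide shows-- the credentials file name and all IDs here are hypothetical stand-ins-- the provider setup and project resource look something like this:

```hcl
# Authenticate the Google provider with a downloaded JSON key (hypothetical
# file name), and set the project the provider operates in by default.
provider "google" {
  credentials = file("service-account.json")
  project     = "my-seed-project"
}

# Create a project with an ID, a display name, an org, and a billing account.
resource "google_project" "demo" {
  project_id      = "demo-app-18"
  name            = "Demo App"
  org_id          = "123456789"
  billing_account = "AAAAAA-BBBBBB-CCCCCC"
}
```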
And one of my personal favorite things is to have IAM bindings
declared in code. So you don't have people
going manually and saying, oh, I want project
admin, and somebody just does that randomly,
and nobody even knows if they granted that role. You can actually do it
all through Terraform code here, where you have a Google
project IAM binding saying, the people defined
in this role-- in this case, Jane
at example.com, get the project editor
role on that project. And it can be done
totally in code. And one of the things that
this actually will do, with the project IAM binding, is wipe out any other grants of that role. So if somebody went in and
manually gave themselves editor, but they're not
defined in the code, the next time the code is run,
that binding will be removed. It makes sure that
only the people who are supposed to have
the role have the role, and that you always do it
through the code instead of doing it manually.
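A minimal sketch of that binding-- the only assumption beyond what's described is the resource naming:

```hcl
# Authoritative binding: only the members listed here hold roles/editor on
# the project. Anyone granted the role by hand is removed on the next apply.
resource "google_project_iam_binding" "editors" {
  project = google_project.demo.project_id
  role    = "roles/editor"
  members = ["user:jane@example.com"]
}
```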
So one of the things Jeff talked about was that really, our best practice for secure firewall rules is to do them through service accounts. You don't want to have people
going in and editing network tags, and allowing their VM
to egress wherever it wants, ingress wherever it wants. You want to have some
security controls over who can create certain firewall rules. So this is an end-to-end example
of how you could do that. Up here in the
top-left, you'd actually just create a service account. You'd give it an
account ID, and you'd give it a display name
for that service account. And then down below,
you'd actually do a binding of who can
access that service account. Who can take the service
account user role, and is able to launch VMs
with that service account? So this is saying,
only our web app devs are allowed to launch VMs using the web app service account. So another team that is
not authorized to use that can't go and create
VMs because they don't have the
service account user role on that service account. So this gives us access control
of who can actually do it. And then finally, you'd have the firewall rule defined to allow ingress on port 80 for that service account, which gives you a lot of control-- you don't want just anybody to be able to open their VM up to the internet. You only want people
who have that role, and have that binding
defined, to be able to actually allow internet
ingress into their service.
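Here is a hedged end-to-end sketch of that pattern: create the service account, bind the service account user role to the web app devs, and scope the firewall rule to the account. The group address and network name are assumptions:

```hcl
# The web app's service account, with an account ID and a display name.
resource "google_service_account" "webapp" {
  account_id   = "webapp"
  display_name = "Web app service account"
}

# Only the web app devs group can launch VMs as this service account.
resource "google_service_account_iam_binding" "webapp_users" {
  service_account_id = google_service_account.webapp.name
  role               = "roles/iam.serviceAccountUser"
  members            = ["group:webapp-devs@example.com"]
}

# Allow ingress on port 80 only to VMs running as that service account.
resource "google_compute_firewall" "allow_http_webapp" {
  name    = "allow-http-webapp"
  network = "default"

  allow {
    protocol = "tcp"
    ports    = ["80"]
  }

  source_ranges           = ["0.0.0.0/0"]
  target_service_accounts = [google_service_account.webapp.email]
}
```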
So now, I just want to skip into a demo, where we can show you how some of this works. So here's my Terraform config. We're going to be using
Terraform today to show off some of the features. I just want to
show how I actually have my Terraform back-end
state stored in GCS.
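The back-end block is only a few lines; the bucket name here is a hypothetical stand-in:

```hcl
# Store Terraform state in a GCS bucket so the whole team shares one state.
terraform {
  backend "gcs" {
    bucket = "my-org-terraform-state"
    prefix = "foundation"
  }
}
```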
And then, here's how we can define our whole resource hierarchy within Terraform. So we want to have
multiple folders. In this case, we're doing a
split between prod and dev. So we define our
prod folder up here. And we define a dev
folder down here. And then, we have
an existing folder. Maybe we have a web team. They've got all their dev
resources within that folder. They can apply policies
to that folder. Now, let's say we want to
add an additional folder just for our demo today. So we're doing some
demo at Next '18. So we could do demo. Demo dev 18. And we want to keep it
in the dev folder, right? So it's still a nested folder
underneath the dev folder. So this is all I need to
do to add a new folder. It's all defined in code. And theoretically, if this
was a production environment, I'd be doing pull
requests for these changes.
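A sketch of what that folder hierarchy looks like in code, with a hypothetical org ID:

```hcl
# Top-level prod and dev folders under the organization.
resource "google_folder" "prod" {
  display_name = "prod"
  parent       = "organizations/123456789"
}

resource "google_folder" "dev" {
  display_name = "dev"
  parent       = "organizations/123456789"
}

# The new demo folder, nested under the dev folder.
resource "google_folder" "demo_dev_18" {
  display_name = "demo-dev-18"
  parent       = google_folder.dev.name
}
```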
So I have that folder created. And then I actually want to create a project within that folder. So this is where I take
this standardized config that I have. So this is a module that
was built for Terraform that allows you to
create a project and attach it to a
bunch of resources. So I'm going to do a new file,
copy this over, and customize it for my new project. So in this case, I want to
call it, let's say, demo. And I want to call it our
demo app, just for today. And then, we want to put
it in that new folder that we created, right? So that folder was Dev Demo. And then, you often
want to have groups. So one of our best
practices is to use groups for access management. And as part of
the standard tool, it can actually even
create the groups for you. If you didn't already
have the groups available, you still want to get
the initial editors for the project in a group. This can actually even
create that group for you. So we could say
demo devs, and we're going to give them the
editor role in the project. We've got some sort
of configuration here of what billing
account to use, what credentials file to use
for creating everything. And then even an API
service account group. Now, my first project
only used Compute. Let's say I also want to
use Pub/Sub on this one. So I just add Pub/Sub in here
as an additional declaration. And I could have
exactly what APIs are going to be activated
on the new project defined right here in my code. And then, finally, shared VPC. We talked a lot
about how you want to make sure that service
accounts get access to the right shared VPC. So here, I specify which
project is my shared VPC host. This is a service project we're creating, and we want to specify what the host project is. But then, we can even get granular down
to the subnet level. So we don't share all the
subnets from that host project. We just want to
share, let's say, our US central non-prod
network, our non-prod subnet. So this defines that exactly
that subnet is shared, and no others. Rae mentioned labels as one of
our really effective mechanisms for getting additional
granularity and info on how your projects work. So you can actually specify
some labels for this. So let's say it's demo app. And then, this is
all the config I need to do to actually
create that new project. And then I might
want to actually add an output, just to get back
the project ID when I'm done. So I could just do
demo, and demo here. Cool. So we've got our new config. Everything is looking good.
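Putting that whole project config together, here is a hedged sketch of the module call and the output. It is modeled on the open-source terraform-google-project-factory module, and every ID, group name, and subnet here is a placeholder:

```hcl
# Stamp out the demo project via the project-factory module: folder placement,
# editor group, activated APIs, shared VPC subnet, and labels, all in code.
module "demo_app" {
  source = "github.com/terraform-google-modules/terraform-google-project-factory"

  name            = "demo-app"
  org_id          = "123456789"
  folder_id       = google_folder.demo_dev_18.id
  billing_account = "AAAAAA-BBBBBB-CCCCCC"

  # Group created (or reused) to hold the project's initial editors.
  group_name = "demo-devs"
  group_role = "roles/editor"

  # Exactly these APIs get activated on the new project.
  activate_apis = [
    "compute.googleapis.com",
    "pubsub.googleapis.com",
  ]

  # Attach to the shared VPC host and share only this one subnet.
  shared_vpc = "my-shared-vpc-host"
  shared_vpc_subnets = [
    "projects/my-shared-vpc-host/regions/us-central1/subnetworks/nonprod-subnet",
  ]

  labels = {
    app = "demo-app"
  }
}

# Report the generated project ID once the apply finishes.
output "demo_project_id" {
  value = module.demo_app.project_id
}
```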
Presumably, this would go through some sort of merge request process, so I'd do a pull request. And then, as part of
that pull request, somebody would run
a Terraform plan. So I do Terraform plan. So actually, first,
I do a terraform init, which will actually
take care of downloading the modules from GitHub. So you saw that this one was
actually hosted on GitHub. So it will go out, fetch the
modules, fetch the providers, and actually pull everything in. And then I can actually plan it. So this acquires a state
lock, checks everything that currently exists. Doing a bunch of checking. And then you can see, here's all
the changes that can be made. I'm going to create
a new project. Here is the new
project being created. And do a bunch of other
things in the process. I'm going to create
the new folder. So we have to create
the folder before we create the new project. We have to attach that new
project to our shared VPC. We have to grant the
API service account access. Google always has an
API service account associated with
every project, which is used any time Google is
doing something on your behalf. So for example, a
managed instance group-- whenever the managed instance group is putting up new VMs, we go
through the API service account that Google created. And we want to associate
that to the VPC subnets. We want to make sure the group
that has access to this project also has access to
use those subnets that you're sharing
to that project. So we do a role there. And then we do a service
account role to VPC subnets, so we make sure that the
service account that's being created gets access to the subnets
that are being shared. Because there's a lot
of these different-- if you want to have those
very granular permissions, you have to make
sure that everything that needs to have
access to subnets does have that level of access.
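The subnet-level grant being described can be sketched as a single resource; the host project, region, subnet, and the service account address are assumptions:

```hcl
# Grant the project's Google APIs service account network-user access to one
# specific shared subnet in the host project, and nothing broader.
resource "google_compute_subnetwork_iam_member" "api_sa_network_user" {
  project    = "my-shared-vpc-host"
  region     = "us-central1"
  subnetwork = "nonprod-subnet"
  role       = "roles/compute.networkUser"
  member     = "serviceAccount:${google_project.demo.number}@cloudservices.gserviceaccount.com"
}
```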
And then, as I mentioned, this automation can even create
the G Suite group for you. So it'll create
the G Suite group and give it the editor
role in that project. And then, it's going to create
a default service account, so you have one service
account to get started with. You can add more service
accounts into the configuration if you wanted to. And then do some
membership there. So I'm just going to
go ahead and run this. I just do Terraform apply. That's just running. Say yes. And it's going to go ahead
and create everything. So it figures out how to sequence things in the correct order. So it knows to actually
create the folder before it creates the new
project because, obviously, the folder has to exist
for the project to work. And it's going ahead
and spinning everything up over time. So it's creating the project,
doing the subnet association, doing everything there. One thing I do want to call
out while it's doing that, is something that's pretty
special about how this works. It actually will take
that API service account and put it into a group for you. Because by default, you can have
a proliferation of many service accounts within
your organization. Every project you create
has some service accounts, and has multiple service
accounts for different APIs. So there's a lot of
benefit to actually having groups, that group
those different service accounts together. So if you wanted to
say all my service accounts in the
organization should be able to pull GCE images
from a common project, you can just give them a
role at the group level. So the same way we
talked about putting users into groups, service accounts
can also be placed into groups. So that's one thing
this does, is actually takes that newly created service
account and all the service accounts on that
project, and puts them into a group for
easy administration. So it takes a while for this
to actually process everything. So I'm just going to
speed it up a little bit. My magic little speed-up. And at the end here, you can
see we actually have the output. So for our new GCP
project, you can actually see GCP Next '18,
simple demo 18. That's our new project
that was created. If I wanted to go over here
into the project hierarchy, I can actually go ahead
and find my folder. So Next '18 Dev, Demo 18,
and my app was created. And we see it actually even has
part of the shared networks. So it did that shared VPC attachment. Everything's ready to go. And this team is now
onboarded and ready to start using GCP in
this standard policy config, standard project. They have all the resources they
need to start moving forward. Cool. So that's the demo
we have today. I'll welcome Peter-Mark
up to close out. [APPLAUSE] PETER-MARK VERWOERD: So just
to reiterate where we began this talk, I showed you this migration
use with our customers when we're trying to migrate
large-scale environments in data centers and
so on, from assess, to foundation,
deploy, and optimize. And we've really zeroed
in on the foundation side. And that's because I think
it's so important for customers to actually spend the
time, spend the energy focusing on this part. Because really, the amount of
time that you invest upfront in those first two stages in
particular-- when you plan out, you find out everything
that you have, you find out what
you want to move-- and then you create
this foundation of what you need to move to
the cloud is so important. It really is such a predictor of
success for the later stages of migration. We really feel it's
worthwhile spending time to focus in on this, and
find out what you need to do. And then for us to come
out here and do sessions like this, where we talk about what we think are the best ways to do it. There are a couple of sessions
that might be interesting if you'd like to see
other sessions where people talk about migrations
they've gone through. These four sessions are coming up
in the next couple of days, and I highly recommend them. Unfortunately, a couple of
sessions have already passed, but all of our
sessions are going to be on YouTube in
a couple of weeks. These ones would
be worth looking up if you'd like to watch,
again, other people's stories about the
things they've done. And in particular, I think
the Velostrata session, and the Cloud Sprint session. Those are both
Google-led sessions. Velostrata, which we recently acquired, does VM migrations. And Cloud Sprint is our program
by our professional services organization that
comes into customers who are beginning a
migration journey, and helps them kick that off by
creating a well-secured, agile, best-practices cloud foundation. And on that note,
thank you very much. All four of us will be
happy to take questions. [APPLAUSE] [MUSIC PLAYING]