STEPHANIE WONG: Hey, everyone. Welcome to CE Chat,
hosted by Cloud OnAir, which hosts live webinars every
Tuesday about Google Cloud. My name is Stephanie Wong,
cloud customer engineer. And today, I'm super
excited to introduce you all to Ryan Przybyl,
who is a networking specialist and a
customer engineer here at Google Cloud as well. So Ryan, did you want to
introduce yourself really quickly? RYAN PRZYBYL: Sure. As Stephanie said, I'm a
network SME for Google Cloud. So I spend most
of my time talking to customers about all
things cloud networking. STEPHANIE WONG: Awesome. So just to remind
all the live viewers, you can ask live Q&A throughout
the segment on the Cloud OnAir platform. And we will have Googlers
help answer them throughout, and we'll also have some time
at the end for Q&A as well. So without further
ado, let's get into it. I'm super excited today to
talk about networking, which is such an important
foundational concept to understand as people
begin to think about moving their workloads to the cloud,
especially when they're trying to map out their on-premises
network topology into something like a software-defined
network on Google Cloud. So Ryan, take it away. RYAN PRZYBYL: Sure. So networking is obviously
a very broad topic. It's not something I can
cover in the next 30 minutes. So I wanted to lay out
a quick roadmap of what you can expect today and then
the follow-on sessions that we have planned. So today, I'm going to talk
about the VPC construct or the virtual private cloud. I'm going to talk
about a concept that Google has done
called shared VPC. And I'm also going to talk
about once you build these VPCs, how do you actually connect to
them from your on-prem or data center locations. During the next section, we'll
cover routing and VPC peering. Next, we'll move into
firewalls and security. We'll also plan
to have a session on load balancers and all
the flavors that Google has and how to use them in
the cloud environment. And then we'll cover
other services. If there's something
network related that you want us to
cover, please let us know, and we'll figure
out how to integrate it into these topics. So with that,
let's jump into it. So if anybody has done
any work in the cloud, this probably looks
very similar to you. This is sort of what I
call a traditional VPC. So if you've used AWS, if you've
used other cloud providers, this is sort of how it's built. So what I'm showing
here is two VPCs that have been built with
two different subnets. So as you know, when you
build VPCs in this manner, the way that VMs in, say, US
West talk to VMs in US East is typically
through a VPN gateway. There's no way to actually
have direct communication between, say, a web
front end that's sitting in US West and
maybe a web back end that's sitting in US East, other
than going through that VPN connection. So Google thought about
this topic and said, OK, how can we potentially
do this differently? How can we simplify this? And we came up with
Google's version of the virtual private
cloud environment. So in Google's world,
the VPC, the container in which the subnets live, is
actually a global construct. So instead of having
to build a VPC in US West, a VPC in US
East, a VPC in, say, EMEA, you actually build
one VPC and then you actually put subnets
in the different regions within that VPC. Now, as you see
here, there's no VPN required to actually
have a VM in US West talk to a VM in US East. This is all handled by
Google, in terms of routing that we have under the hood. So it's nothing
that you have to do. Now, you can control this
through a lot of means through firewall rules and
various other things, which we'll talk about
in other sections. But really from a
conceptual standpoint, it really simplifies
the environment as you're deploying things. STEPHANIE WONG: So just to
highlight that again, you're saying that we don't need a VPN
for traffic between regions. How does Google do this? RYAN PRZYBYL: So we do this
through Google's underlying network. So this is the same
network that we use to run our search engine,
to run YouTube, to run Gmail. All that infrastructure
that Google has built for our
own use, we actually use that as part of cloud. So we use that same
network infrastructure to move traffic from, let's
say, VMs running on US West 1 to, say, a VM running in EMEA. STEPHANIE WONG: Wow,
pretty revolutionary. RYAN PRZYBYL: Yeah. So really why is this
concept important? So one, it simplifies
the network. So as I said, you have one
global, private networking space with a
regional segmentation to put your subnets in. I talked about the VPNs. So if you look at the
picture that I started with, it's very simple
when you're dealing with two VPCs in, say,
a couple projects. But imagine expanding that
model to hundreds of projects, all running hundreds of VPNs. Now, you've got mass amounts
of VPNs that you're building. You've got mass
amounts of VPN gateways that you're trying
to stitch together. When you do it with
Google's method, you don't need to use a
lot of that infrastructure. So it really simplifies
that network stuff, and your network engineers, your network engineering organization, will love that simplification. The same thing
applies to routers. So what I didn't show there
is in each of those VPCs, you actually have routers to
dynamically advertise routes. So in the traditional
environment where you're building
a VPC in the west, you're going to put
one, two, maybe even more routers in
that environment. You're going to put some
more routers in the VPC in the east, more
routers in EMEA. You're going to use those
across the VPN tunnels to provide dynamic
advertisements for your subnets. So you can quickly see
you've got a lot of sprawl, in terms of the routers and the
BGP sessions you're managing. It tends to grow and
grow and grow on you. With Google's construct,
you don't need to do that. You can actually put a couple
cloud routers in place. So if you're trying to stitch
multiple VPCs together, you actually can just use
a couple cloud routers to do it, so have these very
large, global routing domains and not have router sprawl where
you have hundreds of routers. So it really simplifies just
that network infrastructure component of things. Second, it simplifies
management. So you can think about, if you
have all these VPN gateways, if you have all these VPN
tunnels, what's happening is when something
breaks, you have to go and figure out what is
broken, where is it broken? You're looking at potentially
different cloud routers. You're looking at
different VPN tunnels. You're looking at
different VPN gateways or looking at the
A side, the Z side because you have all this
stuff stitched together in this very complex mesh. With the Google environment,
it's much simpler. So when something
isn't functioning, there's only a few
things to really look at and sort of dissect what is
working and what isn't working. So it really simplifies
that operational management. The other part is
the security policy. So if you ever spun
up VPCs, you know that the security
policy typically is constrained to that VPC. So again, if you have a VPC in
the west, a VPC in the east, a VPC in EMEA, you're managing
separate security policies for all those VPCs. So if you're a
security administrator and your goal is to sort
of unify your policies, have a very uniform policy
applied across many things with maybe slight
tweaks in each of them, it becomes more difficult
to manage that uniformity because you're having
to copy and manage 12, 15, 20 of these things. In the Google environment,
because that VPC container is global, you can really
manage one security policy. So it simplifies the
management of that as well. The other part is flexibility. So you can build exactly what
I showed in that first slide. You could build a regional
VPC construct and just drop a subnet in one region,
and build a second VPC and drop a subnet
in another region and actually connect
those with VPNs, just like you would in a
very traditional model. Or you can use the
Google model, where you have this global construct
and dropping different subnets in different regions
within that VPC. And you could
actually combine them. You could have some that
you're using that global VPC presence, where you
have a lot of subnets. And then you could also
be using specific VPCs for specific applications
that are really constrained to just one region. So Google is really giving
you that flexibility. Every business is different. Every business is unique. So your needs may be different from your competitors'
needs or another business down the street. So we've given you
that flexibility to really look at how you
want to use cloud networking and to be able to apply
that in Google's cloud in a lot of different fashions. STEPHANIE WONG: Cool. With all that said, you did
mention just the use case for having individual
subnets in each region. So is that sort of a
use case for not using that global VPC construct? RYAN PRZYBYL: So
here's a good use case for not using the global VPC. I was working with a
customer a few weeks ago with a very specific requirement: the applications they run have to live only in that particular region. It's not necessarily
driven by something like GDPR or other things like that. These are actual contractual obligations they make with their customers. So while we presented the idea
that they could, in theory, run a global VPC and use
firewall rules to constrain what can talk to what, they
really didn't like that idea, and they really wanted to
create these constrained network domains. So what we ended
up with is actually building a VPC in EMEA, only
putting subnets in there. And then for other
customers in APAC, we built a completely
different VPC and only put subnets in
APAC, so just gave them that very traditional
way of doing things. STEPHANIE WONG: Yeah. But you are sort of losing that
benefit of the simplification of management. RYAN PRZYBYL: Yeah. What you'll find in
networking in the cloud, in general, is you're always
making trade-offs. So that's an example of, OK,
for their particular business, they had requirements they had
to meet for their customers, in terms of contractual
obligations. So they wanted to really
manage that expectation and be able to put that in
front of their customers, but they did lose
some flexibility here. Now, one of the
things they did do is actually use the
global construct for a lot of common
infrastructure. So they could deploy
the application in a VPC that just lived in a
region, but there's a lot of core infrastructure for which they actually used a global VPC. And they tied each of
those regional constructs back to that common
core infrastructure VPC. So again, there's
that flexibility. It's not an and/or situation. You can use all of these
constructs together. STEPHANIE WONG: Yeah,
not mutually exclusive. RYAN PRZYBYL: So these things
are really simple to set up. I want to take you
through a demo real quick, just to show you
exactly how easy these things are to set up. OK. So for those of you who have
used the Google console before, this is the sort of home page.
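As an aside for anyone following along with the gcloud CLI rather than the console, the default-network cleanup described just below can be scripted. A sketch, assuming the demo's project ID and the firewall rule names GCP attaches to a default network:

```shell
# Point gcloud at the demo project (hypothetical project ID).
gcloud config set project cloud-onair-networking-101

# List the VPC networks currently in the project.
gcloud compute networks list

# Best practice from the talk: remove the auto-created default VPC.
# Its default firewall rules must be deleted before the network itself.
gcloud compute firewall-rules delete default-allow-icmp \
    default-allow-internal default-allow-rdp default-allow-ssh --quiet
gcloud compute networks delete default --quiet
```

These commands need an authenticated gcloud session against a real project, so treat them as a template rather than something to paste blindly.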
I've created a project called Cloud OnAir Networking 101. So this is the
project that I'm in. This is the home page. So what I'm going
to do is I'm going to go down to VPC network,
click on VPC networks. Right now, there's
nothing in here. I've actually deleted
the default network. What happens when you
create a project is you're actually going to get
a default VPC that gets created with a bunch
of default subnets. One of the best practices
is to get rid of that, just wipe it out. In most cases, you're trying
to integrate a VPC in the cloud with some on-prem environment. So you have a whole IP address
scheme that you've got set up. The defaults generally
are not designed to integrate with what
you probably already have. So there's probably some
overlapping space in there. So generally, I tell people
the best practice is just get rid of the default VPC. So I'm going to go
in and create a VPC. So I'm going to call
this demo network. You have custom and
automatic subnets. Automatic is what I
was describing before, where it provisions a whole
bunch of 10-dot space for you, which typically isn't used. Typically, we're going to come
in here and use custom subnet. So I'm going to
create network one. Here's where you
select the region. So this is where that global
construct comes into play. I didn't create this
VPC just in one region. Now, I'm going to
drop just the subnet into one of these regions. So let's put this
one in US Central 1. Let's pick an address block. OK, there's a couple of
other options in here. So private Google
access, this is what enables your VMs to
actually talk to Google APIs without having to have a
public IP address on them. So a lot of customers don't
want any public IP addresses on their VMs, but they still
want to access Google services. So this could be BigQuery. This could be Google
Cloud Storage buckets. This could be all the services
that Google has to offer. So if you don't
have this turned on, you actually have to
have a public IP address because most of those
services, as you're probably familiar with, rely
on public APIs. So a best practice
is to turn this on. STEPHANIE WONG: If you were
not to use that option, do you have to have a public
IP for the VMs that you deploy? RYAN PRZYBYL: Yes. If you were to not
use that option, you'd have to have a
public IP address on there to say, access your storage
bucket or access BigQuery, access other things. STEPHANIE WONG: OK. And for the /24 address range,
is it possible for you to use overlapping IP ranges if you
were to create two subnets in one VPC? RYAN PRZYBYL: So you can't have
overlapping IPs in one VPC, but let's say I created
two separate VPCs. I could actually create
exact duplicates of VPCs. I could have the same addresses
in VPC A as I did VPC B. Now, that could cause
problems for you as you go to
interconnect these VPCs. So if VPC A needs to talk to
VPC B and has overlapping space, now you've got a problem. You can't do that
because it's not going to know where you
want to route this stuff. So generally
speaking, you're not going to have
overlapping IP space. And the same thing applies
to your on-prem environments or your data centers. You're going to sort of
use separate IP space. Typically, 10-dot
space is what we see in the cloud environment. STEPHANIE WONG: Some of
those same best practices apply here too. RYAN PRZYBYL: Yeah. OK, so I'm going to
click done on that one. And I'm going to
add another subnet. That's about right. So I'm going to
create another one. This one, I'm going
to drop in US West. And again, I'm going to
turn on private access. So I'm going to
turn on flow logs. Flow logs are another
sort of best practice. Flow logs are
actually what enable logging of the actual
flows from your VMs. So again, if you want to
capture those flow logs, you have to enable flow
logging in this environment. But again, it's a best
practice to turn it on, to capture all those flows. They get pushed
into Stackdriver, and you can use them for troubleshooting. You can use them for
a lot of other things, if you're troubleshooting
security, things like that. I'll talk about this a
little bit next time, but I want to talk about
regional versus global routing. So in this environment, you can
set up your routers to advertise only the subnets in the region where the router lives. So for example, I created
one subnet in US Central 1. If I put a router in US Central
1 and I had regional checked, it would only advertise the
subnets in US Central 1. Generally speaking, what I see is people using global. What that really means is I could put a cloud router anywhere in this environment, and it's going to advertise all the subnets in this environment out to wherever you're advertising. So it could be your on-prem, your data center, things like that. I won't call it a best practice, because there are use cases for regional routing. But generally speaking, most customers are using global routing. So now, I click Create.
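The console walkthrough above maps directly onto gcloud. A sketch with hypothetical names and ranges; the flags mirror the demo's choices (custom subnet mode, private Google access, flow logs, global routing):

```shell
# Custom-mode VPC: one global network; you add subnets per region.
# Global BGP routing mode is what lets a single cloud router
# advertise every region's subnets.
gcloud compute networks create demo-network \
    --project=cloud-onair-networking-101 \
    --subnet-mode=custom \
    --bgp-routing-mode=global

# Subnet in us-central1 with Private Google Access turned on.
gcloud compute networks subnets create network-one \
    --network=demo-network \
    --region=us-central1 \
    --range=10.0.1.0/24 \
    --enable-private-ip-google-access

# Subnet in us-west1, with VPC Flow Logs enabled as well.
gcloud compute networks subnets create network-two \
    --network=demo-network \
    --region=us-west1 \
    --range=10.0.2.0/24 \
    --enable-private-ip-google-access \
    --enable-flow-logs
```

On sizing, GCP reserves four addresses in every primary subnet range, so a /24 like the ones above yields 252 usable VM addresses and a /20 would yield 4,092.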
it's going to go and create my subnets for me. STEPHANIE WONG: Are
there any limitations on the number of
VPCs that you can have in a project, or the number
of VMs in a VPC or in a subnet? RYAN PRZYBYL: So let's
take those one at a time. So a number of
VPCs in a project, so we have the concept
of quotas and we have the concept of limits. So let me touch on
those real quick. Quotas are generally something
that can be increased. Limits are something that
are generally hard, fixed. So we have a quota for VPCs. We can put five
VPCs per project. That's a quota. So you can go and request
an increase to that quota to be able to put more
VPCs in that project. You're asking about
number of VMs in a subnet. Every VM is going to have an
IP address in that subnet. So in this case, I
created /24 subnets. So you're limited to one IP per VM, which dictates how many VMs you can put in this particular subnet. If I were to make this a /20 subnet, I could obviously put
more VMs into that subnet. STEPHANIE WONG: OK, makes sense. RYAN PRZYBYL: All right? STEPHANIE WONG: Yeah. RYAN PRZYBYL: So there we go. While we were talking, it
created my two subnets for me. So literally, it's that simple. You just go in there
and populate whatever subnets you want in this VPC. You drop in to whatever
regions you want them in and it autocreates everything. And now, you can go in
and create Compute Engine environments, create whatever
you need using this IP space. So let me go back to
presentation here. OK. So I want to expand on
the concept of the VPC. So this is something else
that Google's pioneered. And we call this the shared VPC. So as I talked about before,
you build this global VPC construct to really
simplify your network, just simplify your
operational management. Shared VPC allows you to
take that simplification even further. So what I'm showing
here is this blue box is actually the VPC
that was created. So there's various subnets
within this blue box. I could have more blue
boxes in this gray box. This gray box is the project. So when you use shared VPC, projects become one of two types. You either have host
projects-- and this is where your networking
resources are actually going to live. So when I say networking
resources, I'm talking about your
cloud routers, your VPN gateways, your subnets. Things like that are
going to live in the VPCs in that host project. On the right over here, I have
got a few example projects. So I've got a
recommendation project. I've got a
personalization project. I've got an analytics project. These are what we
call service projects. So if you think of
the traditional model, I would spin these
up as a project. I would put VPCs in there. So I would have VPCs in
the recommendation project. I'd have VPCs in the
personalization project. I'd have VPCs in the
analytics project. I'd have VPN gateways. I'd have all the VPN tunnels. I'd have all the cloud routers. I'd have all this stuff. So this is sort of that
sprawl I was talking about. So the global construct allowed
you to eliminate some of that. The shared VPC is
going to allow you to eliminate even more of that
or simplify your network even further. So in this example, I've
got all of these subnets that I've created
again in one VPC. Now, what I'm doing is I'm
not actually creating any VPCs in these projects. So the recommendation project,
the personalization project, those actually
don't have any VPCs that are living within the
actual project themselves. What I'm doing is I'm
actually sharing the network infrastructure from the
blue box, that shared VPC, into those projects. So the users of those projects,
they're not creating a network. They're not using a network
that's central to that project. They're actually using a
network from the shared project. So the way you can think
about it is your users, so a development
team, the people who are spinning up VMs
and doing operations in these projects,
they're touching the VMs in the small gray
boxes, like the recommendation project. Maybe this is your
production environment. Typically, the people that
are operating in the blue box are your network
engineering staff, your network security team. They're managing all of
that network infrastructure and those are the only people
that are really operating within that environment. So it's a very clean separation. You don't have a bunch
of hands in there that potentially could break
things or mess things up for you. It's just network engineers,
just your security teams. And they're pushing
that infrastructure out so other people can
actually use it. STEPHANIE WONG: So
from an IAM perspective, this is giving you that segregation of duty. RYAN PRZYBYL: Correct. When you use IAM, you're going to have network administrators. You're going to have network users. You're going to
have all that stuff. So most of your network or
infrastructure administrators are going to be operating
in that blue box. Whereas your users, and the guys
that are consuming the network infrastructure by
building VMs, by building other sort of network
things, they're going to be in the
recommendation project. So they're not actually
going to have the ability to do anything but, say, drop a
VM into that particular subnet. They can't change the subnets. They can't create more subnets. They can't do any of that stuff. They can only create
VMs within the subnet that you've given to them. STEPHANIE WONG: Alongside
that idea of least privilege, best practice. RYAN PRZYBYL: Yep. And then the right side
of this drawing, I'm also showing-- so you've
got all those VMs that have been created of that
shared infrastructure. They're accessing various APIs. So I'm showing a
machine learning API and an analytics API like BigQuery. So those VMs are
still able to do that. Because remember, we set up
that private Google access in the blue box VPC
when we set up that VPC. So all that stuff carries down
to those service projects. Now, the other thing that's
really nice about this is we talked about your
security domain. And we talked about
your unified policy. So in this case, I'm writing
one policy in that blue box. And when I extend all of those
subnets down to those service projects, the security policy
actually goes with them. So now, you've got--
instead of having to build all these
individual VPCs across all these different
projects, you've been able to centralize your VPCs. You've been able to centralize
your security policy. And now, you're able to scale
to hundreds of projects, take that infrastructure, push
it down to those projects, take that security policy,
apply to those projects. And your security
policy, you may have specific rules that apply
to just the recommendation project or just the
personalization project or just the analytics project,
but you still only have one policy that's
written in that blue box. And you're applying
certain parts of that policy in maybe
the analytics project or the personalization project
or the recommendation project. But again, when you think
about unifying everything, only having to
look at one policy, it really simplifies that. STEPHANIE WONG: Yeah, especially
from a security perspective, management perspective, and
just adding to that scalability. RYAN PRZYBYL: Yep. So really, it sort of
continues on that theme. And if you take anything
away from today's talk, it's really about how do
I simplify my network? How do I simplify the management
in the operational day-to-day functions? And how can I sort of use
a lot of this flexibility that Google gives you? Whether you choose a
very traditional manner or you decide to use
some of the functionality that Google has really created
to really allow you to simplify this stuff, that
flexibility is sort of key and is something
that we're really happy that we can provide. So again, simplifying
the network, you're now able to deploy
that VPC across many projects. So you could have
a single project with a shared VPC, another
single project with a shared VPC, but this allows you to
have just one project and push that shared VPC to hundreds of them. As we talked about before, there are fewer networking elements, because I'm building the routers and those subnets
in the host project itself and in those VPCs. Again, I'm not having
sort of that continued sprawl of things. Again, I talked about this,
simplified management. You have one security
policy that you're managing and you can manage this
across hundreds of projects. STEPHANIE WONG: Is
there any upper limit on the number of
service projects that you can have under
one shared host project? RYAN PRZYBYL: So there's no
hard limit that we've seen. So we've tested up to
5,000 and it runs fine. The quota is 100. So you can do 100 with
nothing, no issues, not having to come to Google
and request anything special. But if you want to go
beyond 100 projects, you have to put in
a quota increase. But again, in terms of limits,
we've tested up to 5,000 and had no issues. So yeah, we don't
really necessarily know how big this thing
could potentially scale. STEPHANIE WONG: All right. Well, if you have
5,000 projects, then you're in good hands. RYAN PRZYBYL: So
like I did before, let me quickly show you how easy
it is to set up just the shared VPC aspect of things. OK. So I'm back into my project
where I built my two subnets. I'm going to go into,
under VPC network, I'm actually going to
go down to shared VPC. So this is the menu
when you're actually setting up a shared VPC and
what this piece looks like. So on this first
part, I'm going to get to specify because
I'm in this project, this Cloud OnAir
Networking 101 project. I'm going to say, yes, I want
to use this as a host project. Here, I can choose whether
I want to only show shared subnets or all subnets. And over here, I'm actually
going to attach projects to it. So I'm going to go ahead
and attach a project. So I've created a second
project, this Cloud OnAir 101 shared VPC. So this is the recipient. So this is the service
project that I'm creating now. I defined the host project
in that first half, now I'm defining
the service project. So I'll go ahead
and click on that. Here, you have the option
to either share all subnets. So I created two subnets in
there, one in the US Central, one in US West. I could, just by default,
share all subnets. Most customers that
we work with are going to share individual subnets. So maybe their service projects
are a production project, a development project,
a testing project, and they had different
subnets carved out. So they don't want to share
all subnets with production and all subnets with testing
and all subnets with dev. So what they're
going to do is click I want to share
individual subnets. So let's say I was using this
as a production environment and I created my two
subnets, but I only want to share one
of the subnets. So I'm just going
to click on that. Save it. STEPHANIE WONG: You really
get that delineation there, if you don't want to
apply all subnets. RYAN PRZYBYL: Yeah. You can sort of really get
that granular level you want. Again, if you want
simplification, you can say, I'm just going to
apply all subnets. If you want sort of to
granularly specify things, you can do that too. So again, it goes back to
that sort of flexibility. The way we've engineered
this is to really be flexible because
as I said earlier, every business is different. Every sort of
networking construct that we build with
different customers look slightly different because they
all have different use cases. OK. So now, I can go into
my service project. So if I go into
my service project and I click on VPC
networks, you can see there's actually no
VPC networks I've created within the service project. So I haven't done what I did
in that very first step, where I've actually built a VPC. I didn't do any of that. All I did was really go in
and share a subnet with this. So now, you're going to
see these two options. So now, you're going
to see networks that are shared with this. So here, you can
see the two networks that I've specified here. Both the networks
are showing up, but I've actually only
shared one of them. So if you go back to
the other environment and I go into the
shared VPC menu-- So here, it says network
one, three users. So three users are all
my user identities. Even though network
one and network two were both showing up
under that other VPC, I've actually only shared one, which is why I see three users there. So if I drill down into the service project itself, again, you'll see
both networks, but you see there are zero
users in there because I haven't actually
shared that network. So no users can actually
provision any resources in there. The only network or the
only subnet they can provision resources
in is this one. STEPHANIE WONG: Got it. RYAN PRZYBYL: Again,
I could've created 20 subnets, shared 15 of them. I could've created 20
and shared all of them. Yeah, it sort of depends on
what your specific needs are and how you're architecting the
VPC components of the network. STEPHANIE WONG: So
the number of users you see here is the number
that are in the service project that you share that subnet with? RYAN PRZYBYL: Correct. So that service project, I have
three users in that project. They happen to be all
be my user identities, but you could have a whole
ton of users in your project. Your whole development
team could be there. So when you share that
resource, it would say, maybe, you're sharing
this with 250 users that are authorized to create
resources in that project. STEPHANIE WONG: Got it. RYAN PRZYBYL: So again,
very simple to set up. Doesn't take a lot of time. It's really designed to
be very easy and simple. OK. So now we've talked about the VPC construct itself, and you've built this construct, whether in a bunch of different VPCs, with Google's global VPC construct, or with a shared VPC construct. This is all stuff that you've built in Google Cloud's environment. So you still need
to connect to it. So this is connecting from
your on-prem environment or your data center or
something like that. So this is usually the
next logical step of, well, how do I get connectivity to
all the stuff that I've built? So the easiest way
to think about this is sort of in this
cruciform pattern. So on the left
side, you have sort of layers of the OSI model. So I have sort of layer 2
options versus layer 3 options. And then across the top,
I have dedicated options, meaning I'm directly
connecting with Google's edge. Or I've got shared
options, which means I'm connecting to
typically somebody else's network, which is
then multiplexing a bunch of customers
together and then pushing all those customers
at once up to Google. So there's some other
provider in the middle in that sort of environment. And then you've
got VPN, which I've sort of put over those top two. So you can't use VPN with
layer 2 networking options, but you're typically
going to use VPN over the top of
those layer 3 options because most people,
as I said earlier, are building private
address space in there. So let's start with
those top options because that's typically the
way customers start interacting with Google Cloud. So there's two ways
that they can sort of do a layer 3 interconnect. When I say layer 3
interconnect, I'm basically meaning they're
connecting with our edge at layer 3. So you're getting, in
the case of dedicated, in the upper left-hand
corner, you're actually connecting to a peering router
on Google's infrastructure and you're getting all of
Google's net blocks advertised to you. If you go through
the shared route, this is basically
using your ISP. Google's connected to most
ISPs around the world. We're providing all of Google's
net blocks to those ISPs. Those ISPs are then
advertising them down to you. Now, the big thing
to note here is there is no SLA around
either of these products. So the layer 3
interconnects, no SLA. So if you're planning
on using this sort of connectivity option,
make sure you have some sort of redundancy built into it. Connecting into one peering
router on Google directly probably is not the
best architecture, if you're concerned
about high availability. So as I said,
typically customers are building 10-dot space
in their cloud environments. So that's private address space. You can't access that directly
from a public routed edge. So the way that
you do that is you build a VPN over the top of it. So whether you're using your
ISP to connect to Google or you're actually connecting
directly to our layer 3 edge, you're going to typically
build a VPN over the top. So this is where you're
going to go and build VPN gateways within the
VPCs that you set up. You're going to build tunnels
to VPN gateways that you have on-prem or in your data center. And that's how you're going to tunnel all those private routes across the public infrastructure. I'll talk a little bit more
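For reference, the VPN setup just described can be sketched with gcloud. This is a minimal classic-VPN sketch with dynamic routing; all names, the region, IP addresses, ASNs, and the shared secret are hypothetical placeholders, and the on-prem side has to be configured to match.

```shell
# All names, regions, IPs, and ASNs below are hypothetical placeholders.

# Cloud Router, so routes are exchanged dynamically over BGP.
gcloud compute routers create onprem-router \
    --network my-vpc --region us-central1 --asn 65001

# VPN gateway in the VPC, plus a public IP and the forwarding
# rules that IPsec traffic needs (ESP, IKE on UDP 500/4500).
gcloud compute target-vpn-gateways create my-vpn-gw \
    --network my-vpc --region us-central1
gcloud compute addresses create my-vpn-ip --region us-central1
gcloud compute forwarding-rules create fr-esp \
    --region us-central1 --ip-protocol ESP \
    --address my-vpn-ip --target-vpn-gateway my-vpn-gw
gcloud compute forwarding-rules create fr-udp500 \
    --region us-central1 --ip-protocol UDP --ports 500 \
    --address my-vpn-ip --target-vpn-gateway my-vpn-gw
gcloud compute forwarding-rules create fr-udp4500 \
    --region us-central1 --ip-protocol UDP --ports 4500 \
    --address my-vpn-ip --target-vpn-gateway my-vpn-gw

# Tunnel to the on-prem VPN gateway, attached to the Cloud Router
# so subnets are advertised dynamically instead of statically routed.
gcloud compute vpn-tunnels create tunnel-1 \
    --region us-central1 --target-vpn-gateway my-vpn-gw \
    --peer-address 203.0.113.10 --shared-secret "example-secret" \
    --router onprem-router --ike-version 2

# BGP session over the tunnel to the on-prem router.
gcloud compute routers add-interface onprem-router \
    --region us-central1 --interface-name if-tunnel-1 \
    --vpn-tunnel tunnel-1 --ip-address 169.254.0.1 --mask-length 30
gcloud compute routers add-bgp-peer onprem-router \
    --region us-central1 --interface if-tunnel-1 \
    --peer-name onprem-peer --peer-ip-address 169.254.0.2 --peer-asn 65002
```

For high availability you'd repeat the tunnel and BGP peer for a second path, since, as noted, there's no SLA on a single connection.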
next time on the routing and how this stuff can be
built. But suffice to say, you're going to want
to set up some sort of dynamic routing setup where
you're advertising subnets across multiple VPNs or
multiple interconnect methods to give you a high
availability connection. Because, again, there's no SLA
with these actual services. Now, the one benefit to the
layer 3 service is right now, it's free of charge. You can interconnect to Google's
layer 3 edge for no cost. I should note that if you are
doing any dedicated connections with us, whether it's
layer 3 or layer 2, everything is done
at the 10 gig level. So if you only
need, say, 500 megs, you need a gig or
something like that, typically we're going to
push over to the shared side. So it's typically going
to be through some partner interconnect or through your
ISP or something like that. So let's move down the stack
to the layer 2 options. These tend to be, I would say, preferred these days. This is really about building a layer 2 connection. So these are VLANs that are being built to connect your environment to Google's environment. So with the dedicated
interconnect, instead of connecting to a peering router, you're actually connecting
to a peering fabric. So this is our layer 2 device,
just like our peering router is our layer 3 device that sits
on the edge of our network. So you're going to build a
physical interconnect to that. And then from that, you're going to
provision VLANs from that edge device to cloud routers that
you build in the VPC construct. The same thing happens
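As a rough sketch of that provisioning step, assuming the physical dedicated interconnect has already been ordered, and with the names, region, ASN, and VLAN ID below as hypothetical placeholders:

```shell
# Hypothetical names throughout; the physical interconnect itself
# is provisioned through the ordering process beforehand.

# Cloud Router that the VLAN attachment will terminate on.
gcloud compute routers create ic-router \
    --network my-vpc --region us-central1 --asn 65001

# VLAN attachment from the peering fabric to that Cloud Router.
gcloud compute interconnects attachments dedicated create my-attachment \
    --region us-central1 --router ic-router \
    --interconnect my-interconnect --vlan 100
```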
on the partner side. In this case, you're going
to connect to a partner. Google has a whole list
of partners out there. A good example, a
very popular one that we work a
lot with customers is Equinix Cloud Exchange. Lots of people have
presence in Equinix. In this case, you're actually
connecting to Equinix's fabric. Equinix's fabric is
then connecting directly to our peering fabrics. But you're still really going
through the same process, where you still have to
get VLANs provisioned, still have to get that
layer 2 connectivity. But again, when you think about it, you
and a whole bunch of customers are connecting to
Equinix's peering fabric, Equinix is then doing
VLAN segregation and sending a whole ton of
VLANs to our peering fabric. So they're multiplexing
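The partner flow can be sketched the same way. Here the attachment is created first and its pairing key is handed to the partner (Equinix, in this example); the names below are hypothetical placeholders, and ic-router is assumed to be a Cloud Router in your VPC.

```shell
# Hypothetical names; ic-router is a Cloud Router in your VPC.
gcloud compute interconnects attachments partner create my-partner-attachment \
    --region us-central1 --router ic-router \
    --edge-availability-domain availability-domain-1

# The pairing key in the output is what you hand to the partner
# (for example, Equinix) so they can provision the VLAN.
gcloud compute interconnects attachments describe my-partner-attachment \
    --region us-central1 --format="value(pairingKey)"
```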
a bunch of customers on that connection. STEPHANIE WONG: When is
it necessary for somebody to select partner
interconnect versus dedicated? RYAN PRZYBYL: So there's a
couple of use cases here. Normally, there's
two real things. If you think you're going to
use full 10 gig of connectivity, it makes more sense to go with
a dedicated method versus using a shared methodology. Because with a shared methodology, you're competing with other
customers for bandwidth. You can't control who is
actually using up bandwidth at any given time. So if you and I are both
connected through Equinix and you're sending a massive
file the same time I'm trying to send a
massive file, we're basically competing
for bandwidth. Now, we try to engineer
with all of our partners to prevent that from happening. We do a lot of capacity
planning to say, we should never be
above 50% utilization on these interconnects. So the chances of that
happening are slim, but there's also a
cost component too. It tends to be more cost effective to go to the dedicated side, if you're going to use that full 10 gig port, than to go to the partner side. So there's a couple
of trade-offs there. But typically, if you want something less than a full 10 gigs, there's some breakpoint; it's going to depend, per partner, on where the cost break-even is. So where the layer 3 dedicated
interconnects were free, the layer 2 dedicated
interconnects, there's a charge for those. STEPHANIE WONG: Got it. RYAN PRZYBYL: So that's
$1,700 a month per port. So if you're looking
at a partner version, Equinix is going to
charge you something. Google is going to
charge you something. At some point, it
makes sense just to go to a dedicated
interconnect. STEPHANIE WONG: Yeah. Are there any industry best
practices that you've heard of? RYAN PRZYBYL: So what I
see with a lot of customers is that they start
by turning up VPNs and then they'll move to
dedicated interconnect or they'll move to
partner interconnect. Now, the great thing about this
is just like the VPC construct, they're not mutually exclusive. And a best practice that
I definitely recommend is actually using
the VPNs in addition to the dedicated interconnect. So you can start by using VPN. You can actually
seamlessly hop over to a dedicated interconnect
or a partner interconnect. And your traffic will
actually just flip over. STEPHANIE WONG: Oh, nice. Yeah. RYAN PRZYBYL: And then you
can leave the VPN as a backup. So if something
were to ever happen with your dedicated interconnect
or your partner interconnect, the traffic will
automatically failover to VPN. Since you've already
turned it up anyways and it's up and running, why
not just leave it up in place? So the latter two things actually do have an SLA. So for the partner interconnect, the SLA is actually going to be dependent on the partner that you choose. For a dedicated interconnect, the architecture you use is going to dictate the SLA that you get. So if you connect to two
peering fabrics in one metro, we'll give you 99.9% uptime SLA. If you connect to two separate
peering fabrics in two separate metros, so you have
four connections at that point, the SLA goes up to 99.99%. So again, depending
on the architecture you choose, in terms of
interconnect to Google, there's going to
be an SLA behind it for both the dedicated
interconnect and the partner interconnect service. STEPHANIE WONG:
Sounds like it could be appropriate for
hybrid customers as well. RYAN PRZYBYL: Yeah. There's a lot of ways
you can mix and match things in this environment. Where this becomes
more challenging is if you have a
hybrid environment, you're running
some stuff in, say, AWS and some stuff in Google. I can't necessarily connect
a dedicated interconnect from Google to an AWS
version of this product. You actually have to have
something in the middle. So a lot of customers
will deploy a cabinet with a router or something
like that in Equinix and then run the
connections into that router and sort of hairpin
the traffic themselves. STEPHANIE WONG: Got it. RYAN PRZYBYL: But that
actually gives them a cost advantage versus
if they were just trying to move traffic
from AWS to Google over the public interconnect
that Google has with AWS. That's going to actually
cost you a lot more money. So if you're moving a lot
of data back and forth, it's definitely worth looking at
the architecture, where you're bringing our dedicated
interconnect service to AWS's equivalent service and
landing on a router somewhere. So that's a quick overview
of sort of all the ways that you can interconnect and
how these things can be used in conjunction with each other. So with that, that's
sort of a wrap for today. We covered a bunch of sort
of the foundational stuff. STEPHANIE WONG: Thank
you so much, Ryan. That was super helpful. Everyone, we're going to be
back in less than a minute for live Q&A. All right, let's get started
with the first question. So the first one is, can
I have a shared VPC that spans across multiple regions? And if so, how is
that traffic billed? RYAN PRZYBYL: OK. So remember the VPC
container automatically is going to span across
multiple regions. So the first part
of that question is yes, you can definitely have
a shared VPC because a shared VPC is just taking
that VPC construct that exists across all
regions and applying it in the model where it's a host. It's in a host project and I'm
sharing it to other things. So absolutely, you can do that. If so, how is traffic billed? Traffic is billed at a per project level. So when I'm sharing those
resources to another project and that project is actually
using those resources, it's actually going to get
billed to that project that I'm sharing it to,
where the VM's running. Because the VMs aren't actually
running in the host project. They're running in all
those service projects and the VMs are what is
creating all of the traffic. So everything's going to get
billed within those service projects, not necessarily
the host project in itself. STEPHANIE WONG: OK, great. In one of your diagrams,
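For reference, the host/service project relationship Ryan describes can be sketched with gcloud; the project IDs below are hypothetical placeholders. Traffic charges follow the service projects, where the VMs actually run.

```shell
# Hypothetical project IDs. The host project owns the shared VPC;
# service projects attach to it and run the VMs (and get billed).
gcloud compute shared-vpc enable my-host-project

gcloud compute shared-vpc associated-projects add my-service-project-a \
    --host-project my-host-project
gcloud compute shared-vpc associated-projects add my-service-project-b \
    --host-project my-host-project
```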
you show two VMs and a VPC with different subnets
in two different regions being able to talk
to each other. Does this mean that
a single VPC is synonymous with a
single broadcast domain? RYAN PRZYBYL: So I
wouldn't necessarily think of it in terms
of a broadcast domain, because a broadcast domain is
really a layer 2 construct. So typically, when you think
about a layer 2 broadcast domain, what's separating
broadcast domains is layer 3 routers. In this case, it's not really
a layer 2 broadcast domain. You can almost think
of it as one big, flat, routed environment. I'll talk about routing
a little bit more in the next Cloud
OnAir that we do. But really, you can think
about every host is actually a [INAUDIBLE] router because we
program routes at the VM level. So it's almost like
this /32 routing domain. There is no layer 2 broadcast
domain in this environment. So hopefully that
answers that question. STEPHANIE WONG: Can I
enforce security policies or other
network-related policies on communication between
components in the same VPC? RYAN PRZYBYL: [INAUDIBLE]. So yes, you can. So within a VPC, you're going
to apply firewall rules. So for example, I could
have a whole bunch of subnets in there. By default, we're actually
going to lock everything down. Google generally takes
a trust no one policy. This is sort of
the way we operate our own internal network. So we apply this same
sort of security principle in the cloud. So for example, if you build a
cloud VPC and you build a VM, and you go try to SSH to it,
you actually can't SSH to it. You actually have to open
the firewall rules to enable certain ports, to enable TCP,
to enable ICMP, to enable SSH. All this stuff has
to be opened up. Everything is going to be
locked down by default. So when you set up
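As a sketch of opening those ports, with a hypothetical network name and source range:

```shell
# Hypothetical network and source range. Ingress is denied by
# default, so SSH and ICMP have to be allowed explicitly.
gcloud compute firewall-rules create allow-ssh \
    --network my-vpc --direction INGRESS \
    --allow tcp:22 --source-ranges 203.0.113.0/24

gcloud compute firewall-rules create allow-icmp \
    --network my-vpc --direction INGRESS \
    --allow icmp --source-ranges 203.0.113.0/24
```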
a VPC and you want to sort of control
what can talk to what, you may build five
different subnets. And you're going to specify,
OK, I want only this subnet to talk to only
this other subnet. So you can sort of
create that granularity. So I talked about
that one customer that we met with,
where they had very specific regional requirements
for their application. So we presented that
option to them and said, you can really
get very granular. So you could build
one, big shared VPC and really build a
lot of firewall rules around that to
say things in EMEA can't talk to subnets
in the US or in APAC. Again, they decided to
go a different direction and we liked being able
to offer that flexibility. But you can definitely
get very granular. And you can define things
via ports, via protocols, via subnets. So the firewall rules are very
granular, in terms of the VPC. So maybe you don't want
to allow UDP traffic. OK, then you don't open
it up to allow UDP traffic or maybe, heaven forbid,
you don't want TCP traffic. You're not going to have a
lot of communication then, but you could actually
lock that off. So you have that capability
to get very granular, in terms of how you write these
firewall rules to regulate what can talk to what
within that VPC construct. And the same thing happens when
you talk across multiple VPCs. You're going to have separate
security policies in each. So you may connect
two VPCs together, like I showed initially in
that very traditional model. If you don't open up
the security policies, you may have written
security policies in this VPC to not allow any external
IP addresses to get to it. So you may have linked
them together with a VPN, but your security
policies on both sides are not allowing
you to communicate. So this goes back to that
complexity and simplifying. If you have everything in one,
you can see one security policy that you're managing. Here, you're trying to manage
a security policy over here and trying to manage another
security policy over there. STEPHANIE WONG: Yeah. RYAN PRZYBYL: But to
answer the question, yes. It's absolutely doable. You can be very
granular in this. STEPHANIE WONG: OK, depends
on how your requirements go. All right. We might have time for
one quick last one. Is it possible to
create a mirror port in the Google network of a
server to another server? RYAN PRZYBYL: So port
mirroring is something that we're working on. It's not generally
available yet, but it's something that we
want to enable as a feature. So it is a feature request that
we have in front of our product team to say, we want to
enable port mirroring, specifically on a large scale. Because typically when we get
asked for it, a lot of times it's around security and
various other things. So it's something we've asked
our product team to look at, to put on the roadmap. But I don't have a specific
date on when port mirroring is going to be available. STEPHANIE WONG: OK, no worries. All right. Well, thank you, Ryan. I think that's about
all the time we have. Thank you everyone
for joining us today. Please stick around. Our next session is
Protecting Your Workforce with Secure Endpoints. So thank you so much,
Ryan, once again. RYAN PRZYBYL: You bet. [MUSIC PLAYING]