[MUSIC PLAYING] DINO CHIESA: Many
companies are on a journey evolving their monoliths
to microservices or even managed microservices. And I say many companies, but you can't really think of a company as a single entity on this journey. You have a lot of different departments, a lot of different projects, and each of them is in a different place on that journey. Some projects, some companies started from someplace different. Maybe they started from a service-oriented architecture, which is sort of a half step along the way towards microservices. In the microservices architectural
pattern, as you all know, an application is composed
of finer grained services that intercommunicate via these
lightweight, simple mechanisms that are called APIs. The goals of this-- increase product velocity,
manage complexity of code, perhaps polyglot
service construction, where you've got
multiple languages, different languages being used
to implement different services and working together,
and empowering developers in order to advance
business goals more rapidly. So that's all familiar to you. There is an alternative, of course, to migrating a monolith by decomposing it into a set of microservices, and that is to just wrap it. Just wrap it in a service or in an API, and call into that directly. Just kind of freeze-dry it
that way and get access to it. Either way, it's a service, it's
just a coarser-grained service, and you don't need to
touch things as much. In fact, in the
keynote yesterday, we saw Jennifer Lin describing
this exact scenario with one of the tools that works
with Anthos, where you can take a VM, where you've got a system
already running, run this tool, and it pulls it into a
container and makes it hostable in Kubernetes Engine, regardless
of where you want to run it. So you wrap it maybe one step
along the path of the Strangler pattern, if you're
familiar with that. Wrap that monolith
and just put it wherever you want
in that container. So a couple of different
options, and neither is wrong. Many companies have
different projects that are traversing this
path in different directions. The key thing to remember is
APIs are the communication contracts. Between the services that are inter-cooperating, APIs are what allow those services to communicate. Between clumps of services or clusters of services-- maybe you've got them structured in different meshes, more or less formally managed-- APIs are used. And again, between clients that are external to your network, to your meshes, it's APIs. So APIs are the interface,
not the implementation. APIs are the things that
allow people, parties to interchange information. Some of these APIs may be--
a really common structure for APIs is HTTP/1.1 and REST, and maybe you're going to use JSON
for your data format. That is not presupposed
by the rest of what we're going to talk about here. That's a common one, but
that's not the only one. An API is just a programmatic way to access the system. So we see a lot of people using HTTP/2 now, we see people using gRPC, we see people using GraphQL, and lots of other options as well. So the API is just the
communication mechanism. So I want to take a
moment to break down service mesh infrastructure
and look at what we want out of that. And to do that, I want to just
take a step back and start from the basics. Now, some of you may be familiar
with Istio, maybe some of you are running Istio. Istio is in production. There have even been some CVEs, with updates shipped for vulnerabilities that were discovered recently. Maybe you're further
along the path, but maybe I suspect that a lot
of you are looking at Istio, are examining Service Mesh infrastructure, are kind of building your own Service Mesh infrastructure with different proxies. Maybe you're using NGINX, maybe you're using HAProxy, maybe you're using
Linkerd, maybe you're using Envoy at its
base, and you're less familiar with all of what's
going on in Istio or in Service Mesh in general. So let's kind of take a step
back and just look at that. So the first thing is
services intercommunicate. So we have a bunch of
them in the constellation intercommunicating. We're just going
to look at two just for the purposes of
illustration here. And we've got that API
that's connecting these two. Once we have decomposition of the monolith into multiple different services, the assumptions we had before-- for example, that a connection between different modules is going to be in-process and secure-- are no longer accurate, right? No longer applicable. Now, the different modules are realized as separate services, and there is remote communication
between them, so we need to secure that. So if you're going to
go into microservices, one of the very first things
that you want to be able to do is have mutual TLS so that
both peers in the exchange can authenticate
each other, and you can perform authorization
decisions at the service level based on the identity that's
been asserted with the TLS certificate.
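In Istio, for example, that mutual TLS requirement can be expressed declaratively, mesh-wide. A minimal sketch, using the PeerAuthentication resource from current Istio releases (older releases used an authentication Policy instead):

```yaml
# Require mTLS for all workloads in the mesh.
# Applying this in the root namespace (istio-system) makes it mesh-wide.
apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: default
  namespace: istio-system
spec:
  mtls:
    mode: STRICT   # reject any plaintext connection between sidecars
```

So this is all just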
kind of basic stuff, but it's kind of hard to do. When you have just two
services, no problem. You can issue certs,
they can compare them, you can make sure this is
the peer that I expect, and everything's good. As you add a number
of services, that starts to get a little
more complicated. But for sure, you're
going to want that. Ideally, you want to
provision certs automatically. So remember, keep in your mind,
we're looking at just two, but there's a constellation
of these things. There's 15, there's 20
different cooperating services, and I want to provision certs for all of them. And I want them to be distinct. It's cheating. I know there's some
of you out there-- it's cheating to use the
same cert for every service. You can't do that. That's not the way
it's going to work. It's not going to work well. So you need different
certs for each service, and you'd like to have
policies in place as well for these
inter-cooperating services. What used to be really easy? Just call the module. It's in process call. It's a method call. I'm calling to a
different object. What used to be super easy is
now maybe a little bit iffy. There's a network there. The network is not
always present. The instance is
not always there. So you need policies like, am
I going to retry this call? If I am going to
retry, how many times? What's the back off interval? What's the timeout? At what point does the client,
the initiator of the call, say this has failed. I'm going to declare
this call failed and maybe retry or
maybe just give up.
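In Istio, those retry and timeout policies live in a VirtualService, outside the service code. A minimal sketch, using the demo's pricing service as the example; the specific numbers are illustrative:

```yaml
# Retry and timeout policy for calls to the pricing service.
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: pricing
spec:
  hosts:
  - pricing            # in-mesh service hostname
  http:
  - route:
    - destination:
        host: pricing
    retries:
      attempts: 3          # how many times to retry
      perTryTimeout: 2s    # per-attempt timeout
    timeout: 10s           # overall deadline before the client declares failure
```

How do I do circuit breaker? So if I continue to try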
to hit the same instance and that thing is
just not giving me a satisfactory response, maybe
it's no connection at all but maybe it's
just an error, I'd like to kind of take
that out of rotation and not connect to
that instance anymore. If I'm doing a Service Mesh,
I've got multiple instances. So I can connect
to different ones. But let me just take that one
out, just flip that circuit breaker and pull it out.
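Circuit breaking is also a declarative policy in Istio, via outlier detection on a DestinationRule. A sketch, again with illustrative numbers:

```yaml
# Eject misbehaving instances of the pricing service from the load-balancing pool.
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: pricing
spec:
  host: pricing
  trafficPolicy:
    outlierDetection:
      consecutiveErrors: 5      # errors before an instance is ejected
      interval: 30s             # how often instances are evaluated
      baseEjectionTime: 60s     # how long an ejected instance stays out of rotation
      maxEjectionPercent: 50    # never eject more than half the pool
```

I want to be able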
to do routing based on maybe content or identity,
time of day, authorization. So all of those things
are sort of policies, and there may be more. And you kind of want to
implement all of those in the disparate services. And remember, each
of these services, we may be talking about
a polyglot environment. So while I am decomposing--
let's say I've got a Java app or I've got a .NET app, and
I'm decomposing the Java code or the C# code into
multiple different services. Then I can say, OK,
it's all one language. Super easy. But we all know that that's
not how it works in practice. Yes, you're going to
decompose an existing system, but then there's a new
team that's coming up and they want to use
a different language. They've got this Ruby module
that does just the right thing. They've got Python
that they want to try. They're much more productive with that. You're using Node.js. So you can't have all of these provisions, all these policies, implemented in all those different languages in the same way, in a compatible way. So the thinking
here is that rather than build a library for
each one of those languages and platforms-- for Java, for C#, for Python, and so on-- let's factor out all that capability, all those smarts: the policy enforcement as I talked about, the mTLS management, the session management. Maybe I've got requirements for
the particular algorithms that are used in the TLS connection. Let's factor that out
and put it in a proxy. And it's one proxy that
works with everybody, and we'll call it
a sidecar proxy. So this is the model that
works in a scalable fashion for Service Mesh
infrastructure. And in fact, this
is the Istio model. This is the model
that Istio uses. It uses that sidecar proxy. It's actually implemented
by a separate open source project called Envoy. All of that stuff,
the TLS management, the policy enforcement, all
of that is done by Envoy. And your service, you can
implement that in Java, .NET, Node.js, it connects only to
the outside world either with inbound or outbound
communication via that proxy, via that sidecar proxy. This is how Istio works.
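In practice, you rarely wire up the sidecar yourself; Istio can inject the Envoy proxy into every pod automatically. A minimal sketch, assuming a hypothetical namespace name:

```yaml
# Label a namespace so Istio automatically injects the Envoy sidecar
# into every pod deployed there.
apiVersion: v1
kind: Namespace
metadata:
  name: products
  labels:
    istio-injection: enabled
```

And Istio adds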
that control plane that you see depicted
in the bottom here with a couple
of different modules that you can read about in
the Istio documentation. This isn't a deep dive into Istio, but just kind of an overview. The control plane allows certificates, for example, to be provisioned for each instance, for each type of service in your mesh, and authorization rules to be provisioned centrally and then sent out to the proxies so that they can enforce them. If I'm service B, I'm
going to allow only service A to call me. I'm not going to allow
service C to call me or any other new service that
gets introduced into the Mesh if you choose to
restrict it that way.
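That allow-only-service-A rule is the kind of thing you can write as an AuthorizationPolicy in current Istio releases, keyed off the identity asserted in the mTLS certificate. A sketch with hypothetical service and namespace names:

```yaml
# Allow only service A's identity to call service B; everything else is denied.
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: service-b-allow-only-a
  namespace: default
spec:
  selector:
    matchLabels:
      app: service-b          # applies to service B's workloads
  action: ALLOW
  rules:
  - from:
    - source:
        # the SPIFFE identity carried in service A's mTLS certificate
        principals: ["cluster.local/ns/default/sa/service-a"]
```

So those are the kinds of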
things that the control plane for Istio does in this mechanism. You'll see those two different flows in the diagram: data flow, and control and metrics flow.
low latency calls, the things that are kind
of running your business, are all happening
in the data plane. One service talking
to the other, lots of services kind
of connecting together. You have this Mesh of
services all inter-cooperating at very high speed. That's all happening
in the data plane. The control plane
is just separate. It does not get invoked
for every communication, for every request. Most of that is cached. If you want to
make a change, you make it in the control plane,
and then it gets provisioned. The proxies get
updated, and they behave according to your
configuration on the next go round. So Istio also adds
some other things you're probably going to
want as a Service Mesh, and that is an Ingress Gateway. So now I've got this
Mesh, and I'm managing all the communications between. I've got TLS security, so
I'm encrypting all the point to point communications. I've got policy enforcement. I'm retrying the way I want. I've got back off intervals. I've got circuit breakers. But I also want to admit
inbound communications to that controlled Mesh, to the
thing that I'm managing nicely. I want to have an
Ingress Gateway, so Istio adds this as
well, adds this concept. And what it allows is
an internal client, some other client application that's not part of the mesh, to send in requests. And again, you can have policies there too. You can say it needs to bear a JWT-- a "jot," as it's pronounced-- and it has to have this kind of issuer. Or it needs to have TLS, or
it needs to have mutual TLS, and the TLS CN has
to be such and such.
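The JWT check at the edge, for example, can be declared right on the ingress gateway. A minimal sketch using RequestAuthentication from current Istio releases, with a hypothetical issuer:

```yaml
# Validate JWTs on traffic entering through the Istio ingress gateway.
apiVersion: security.istio.io/v1beta1
kind: RequestAuthentication
metadata:
  name: ingress-jwt
  namespace: istio-system
spec:
  selector:
    matchLabels:
      istio: ingressgateway
  jwtRules:
  - issuer: "https://idp.example.com"                        # required token issuer
    jwksUri: "https://idp.example.com/.well-known/jwks.json" # keys used to verify signatures
```

So you can do all that with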
the Ingress Gateway as well. Likewise, you're
going to want to be able to control
communication that is initiated from inside the
Service Mesh and sent outside. In many environments, you
just don't care about that. You've got network firewall
rules running somewhere. But in some, you want to be
even more careful about that. And you may want to do
some routing decisions. So there's a way to manage
outbound communications as well.
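In Istio, that outbound control typically starts with a ServiceEntry naming the external hosts the mesh is allowed to reach. A sketch with a hypothetical external host:

```yaml
# Permit and track mesh-originated traffic to one specific external host.
apiVersion: networking.istio.io/v1alpha3
kind: ServiceEntry
metadata:
  name: external-payments
spec:
  hosts:
  - payments.example.com     # the only external destination allowed here
  location: MESH_EXTERNAL
  ports:
  - number: 443
    name: https
    protocol: TLS
  resolution: DNS
```

And this is what we would call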
a Managed Mesh of services, or a Service Mesh. And I want to be particular about the definition of the term Service Mesh. Some people think of Istio as a Service Mesh. And there may be other
tools that are out there, other pieces of infrastructure
that are out there that you call a Service Mesh. The way I think about
it is a Service Mesh is a mesh of services. It's just a bunch of services
that are inter-cooperating and intercommunicating. Service Mesh
Infrastructure is what allows you to manage all that. So that's just my
kind of phrasing, but you'll hear people
throughout the conference say, Istio is a Service Mesh. To me, that feels just
a little bit inaccurate, but I get what they're saying. In the rest of this talk, we'll
be talking about Service Mesh Infrastructure, and Istio
is a good example of that. OK, so that's kind of a brief,
quick overview of Service Mesh, and this is why you would
want to use a Service Mesh. It's connecting, securing,
controlling, and observing the services in that mesh. Next thing I want to talk
about is API management, and it's management
for shared APIs. So shared APIs may be
wrapped around legacy systems, monoliths, and what
we want to be able to do is allow external clients
to call into those systems. Why use API management? First, discovery. So an external
developer, we want to allow those external
developers to see what APIs are available and then provision
credentials for use by that developer
in the apps they build in order to make
calls into the API Gateway. We want to modernize systems. We want to report,
and we potentially want to monetize APIs. On modernization specifically,
we'll talk about a couple of different scenarios here. Maybe you've got
a SOAP back end, maybe you've got
an XML back end, you want to kind of
make it look like JSON. Or maybe you want to add caching, you may want to add routing. And modernization,
this aspect helps a lot when you're breaking down
the monolith as we'll see a little bit later. You can add routing right
in that API Gateway layer pointing some calls to the
existing monolith and some to the new microservices. So this is what it might
look like as an evolution of combining Services Mesh with
API management and API Gateway. So we've got our API
Gateway, external clients calling through there,
internal clients calling the systems that
exist, and we start to introduce some microservices. Maybe we're on that journey,
decomposing the monolith into different microservices,
and then maybe we're adding a few services. It's not just decomposing,
but it's adding a few things. It's augmenting where we're
modernizing and adding and extending, and so
now the internal app is going to be calling
into the existing monolith, the new services, and
the API Gateway can kind of direct and route
calls in as we see fit. We start to see
proliferation, though, and this is when
you're really going to benefit from that
Service Mesh Infrastructure. Something like Istio,
and that's when you can create these clusters,
these clusters of services, and establish those policies,
all the things that I talked about in the
beginning of this session. You can have multiple
different clusters, obviously. They can run in
the same Kubernetes cluster or different. You can run across clouds,
and Greg will actually show you a little bit of
multi-cluster scenarios. This is what we think of
as the kind of clean model that composes API management
and Service Mesh Management, or Service Mesh Infrastructure
in an environment. So this is a clean
picture, super easy. A person came up to
me and said, look, I have a question about
maybe internal clients. If internal clients want to connect to my system, do they have to go out and back in through the API Gateway just to get into the system? And the answer to that is no. We already talked about
the Ingress Gateway, that's part of Istio, and you're going
to have that with any Service Mesh Infrastructure. So internal clients
can continue to use those kinds of approaches,
but external clients for reasons we'll get
into will probably want to use something maybe a
little more structured in order to manage that
communication there. This is the mental model
that I like to offer people when they're thinking about
how services and APIs work in an enterprise. So at the base level, you
have your infrastructure. That's going to be your
platform as a service, your VMs, maybe it's Kubernetes
that you've chosen, maybe it's a composite
of those things. You've got different
projects and different people are using Cloud
Foundry, and who knows. You've got a bunch of
different infrastructure. On top of that, in some way,
you're building services. And what we like
to say is there's going to be thousands
of those depending on the size of your company,
and you want to manage those. You want to wrap services
management around those. Now, not all services
are going to be managed. It's an aspirational goal
to have all of them managed. Exposed by those
services are APIs. And in the same way we aspire to
have the thousands of services be under management,
we aspire to have the APIs that get shared
out of those services-- not the ones that they're
using to intercommunicate within the mesh, but the
ones that get shared out-- we aspire to have those
under management too. Not all of them will be. There are some pitfalls
associated with not managing your APIs, and Greg and I
can both attest to that, but that's the aspiration. So that's the model. Platform, thousands of services. Hundreds of those may be
exposing shared APIs that get consumed by clients that
are either inside the company but distant from the teams that produce them, or outside of the company, some external client. And that's the mental model. Now with that, I'm going
to hand it over to Greg, and he's going to make this real
for us with a demonstration. GREG KUELGEN: Awesome. Thank you, Dino. All right, so I'm going to
go through this pretty quick in the interest of
time, but I want to set up with a little
bit of background in terms of what you're going
to see in the demonstration. Of course, we're going to use
the proverbial Acme Company. What haven't they done? They're a pretty
typical enterprise. They've got different
departments, divisions, locations. Today, we're going to
concentrate on two teams. We're going to concentrate
on the Product Team, who manage things like
products, product catalog, and the Logistics Team. They manage the
warehouse, delivery, those sorts of things. Now, Acme's feeling
a lot of pressure from startups kind of trying
to disrupt the market, and the business has
asked IT to move faster. So both the Product Team and
the Logistics Team took a look, and they said, you know what? Maybe if we modernize a couple
of these existing monoliths, we can move faster. And they made some decisions. They chose to go
with a Kubernetes based system using Service
Mesh Infrastructure and API management. The Product Team
has three services that we're looking at here. They have their product
service and then a location and a
pricing service. The main consumer of the
product service is a mobile app. It's an internally
developed mobile app. You'll notice in this case,
that mobile app is actually going through an
external API gateway, in this case, the Apigee API
Gateway, that happens to be running as SaaS in the cloud. The Logistics Team has
an inventory service. They also have a
third party that helps them and needs accurate inventory counts, and that third party accesses the inventory service. So with that, let's get
started with the demo. Can we go to the demo, please? All right, so we talked
about the Product Team. That's what we're
going to start. They are using
Kubernetes, and they made what is the obvious choice
for where to run Kubernetes, which is GKE. So they've got a few
different clusters here. The cluster that we're
going to worry about today is this dual-cluster one. It's a four-node cluster,
and let's just take a look and see what they've actually
got running in there. So you'll see just like
what we would expect to see, we've got a product service, a
pricing service, and a location service. Those are the pods that are
running in this cluster. So at first, they kind
of built these services, and everything was great. And then the Operations Team, the Security Team come along and
start asking questions about, hey, which service
communicates with which? How do I view the logs? All of that stuff. So they decided that
they're developers, they could develop libraries
and build all this stuff into their services, but
they didn't want to do that. So they chose to use Istio
as their Service Mesh Infrastructure to handle
all those things for them. So let's just take
a look and see what they've got running from
an Istio perspective in here. And you'll forgive
me, I'm just going to copy all my commands
because otherwise, you'll see lots of ID10T errors
as I try to type them in. And if you don't know what ID10T is, you can type that into your phone
and figure out what that is. So you'll see here,
there's a bunch of pods running for Istio
that enable the Service Mesh. You'll see things like
Citadel and Pilot that you also saw up on
Dino's diagrams up there. And so now what I
actually want to do is-- I'm going to go on a small
tangent very briefly. So previously in my career, I
worked for a logistics company. And one of the things
I was responsible for was handling all
external communications with third parties,
customers, what have you. Now occasionally,
we'd have to set up a mutual TLS connection with
one of these third parties. That was pretty much always
my option of last resort. I hated it. And it usually went
something like this. So we'd get on a call,
and sometimes I'd have to explain to
them why I wouldn't create a private cert for them. And then we'd exchange certs,
and I'd load the key on my side, and they'd send something in. And they'd say,
it's not working. And I'd say, well,
you sent me y, but I see you're actually
sending me x when you send it, and they'd say, no, no, no. We're sending y. There's no way we're sending x. And that would go on from
anywhere between an hour to two weeks. So now, I want to enable that
same feature within Istio for our different components. Just a little bit of setup here. I've got another container that
is sitting in a namespace that is not Istio injection enabled. So I'm going to connect
into that container just to show you how it
calls the pricing service. All right, I'm in
that container. And I'm going to make a
call to our pricing service, and you'll see
actually it worked. So that's interesting. This isn't participating
in the mesh, but it worked. Well, right now we're in what we call permissive mTLS state. And what that means is, as you're first bringing services into your Service Mesh, you may have services outside the mesh that can't speak mTLS. And so anything that's
in the Service Mesh is automatically going to
establish an mTLS connection, but anything
outside can continue to connect any way
that it did before. All right, so with
that, I actually do want to change
this to strict now. And I'm going to show some YAML. There's only two times
that I show YAML. I'm sure if you guys have
been to sessions here, you've seen lots
of YAML probably so far in this conference. I have just a couple
of very simple YAML that I'm going to show. So this is the rule if I
want to make it strict MTLS. And basically,
all this is saying is that for the pricing service, I want to set the mTLS mode to strict.
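A sketch of what a rule like that plausibly looks like, using the authentication Policy API from the Istio releases of that era (newer releases express this with PeerAuthentication):

```yaml
# Require strict mTLS for the pricing service only.
apiVersion: authentication.istio.io/v1alpha1
kind: Policy
metadata:
  name: pricing-strict-mtls
  namespace: default
spec:
  targets:
  - name: pricing        # applies only to the pricing service
  peers:
  - mtls:
      mode: STRICT       # plaintext callers are now rejected
```

So let's go ahead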
pricing service, I want to set the MTLS mode to strict. So let's go ahead
and apply that. Yes. Maybe. Is that a little better? OK, so we applied that. Let's go back into
our container. And let's attempt to
access that service again. As you see, it did not work. So I'll tell you honestly. Every time I do this,
it brings me joy. I didn't have to
create a certificate, I didn't have to
set it up, I didn't have to negotiate with anyone. It just worked. That alone is like
the worth the price of entry into Istio and
Service Mesh Infrastructure. OK, but let's change
pace here for a second. We talked about the fact
that the main consumer of this product service
was the mobile app. So what about the mobile app? The mobile app, obviously,
isn't participating in the Service Mesh. So how does it get
access to these things? Well, that's where API
management comes into play. What you're looking at
here is a developer portal built using Apigee technology. This is just the landing page. I'm going to get into
this pretty quickly, so I want to see what
APIs are available. You'll see there's a product
API and a product catalog API and an inventory API. So let me click into
the product catalog API. There's some documentation here. So just briefly, all
of this documentation is being built off an
OpenAPI spec, version three. That OpenAPI spec is
actually hosted in the Apigee spec editor, so that's
just kind of how this is happening in the background. So I want to actually
give this a try. I'm the mobile app developer,
and I want to give it a try. But I see that there's some
sort of authentication required. So let's see. How do I actually do that? So I'm just going to
go to the quick start. OK, I see I've got to sign
in, register some apps. I know that already. So I've done that. I've created an account,
and I've created some apps. I'm just going to click
into this first app here. See a couple of
pieces of information. There are some API
keys, and then there's a couple of products
that I can choose from. At this point, I need to
back up just a second, because I've introduced a
couple of important concepts here from an API
management perspective. One, developer. This is our API consumer, right? So this is the audience
of this developer portal. So I created a
developer account. A developer wants to
have access to APIs. So a developer creates
an app, and an app is what provides that
developer access. When they create an app, they
will see different products that they can select. Now, what's a product, and
why do we call it a product? So a product is nothing more
than a collection of APIs, maybe sub-resources within
an API, maybe just one API. And basically, it's the way
within Apigee's API management to select what the app is
allowed to actually access. And why do we call it a product? Well, we've found that companies
that are the most successful with their API programs, they
treat their APIs like products. What does that mean? They have a product manager. They use outside in thinking. So they develop their APIs
with the API consumer in mind. They don't just say,
hey, I've got a system. It's got six fields. I'm going to expose those six
fields, and that's my API. So just some important concepts
and in terms of API management. So I want to actually
execute that. I'm going to copy my key. I'm going to go back
over to the API. I want to just execute
this products here. I want to authorize
this, click here. I want to enter my key. Wait, I don't even
have to enter my key. Since I'm logged in,
it knows who I am. It knows what apps I have. I'm just going to
select that app, click authorize, and then
go ahead and execute. And you'll see that I get a
response back from the product service. So right within the
developer portal, the developer can try it out. Now, I want to
make a point here. And this is that the
mobile team never talked to the product service team. They didn't have to
figure out how to do this. They came here. It was completely self-service. The product team didn't have to
be involved at all in allowing them to access this API. So that's API management. All right, so let's try to
go ahead and access that without an API key
through the app just to prove that indeed
it is going to fail. I've got to get out
of that container. All right, so failed
to resolve the API key. No surprise there. Let's go ahead and do
that with the API key, and I'm sorry my font's
going to get smaller here, because I've got a bunch
of different screens. But nonetheless, you'll
see that it got a response. And what I wanted to show here
in the upper right hand corner, this is the product service. So you see, we lit up
the product service. And then the product
service actually called the pricing
service as well. One more thing I
want to show here. And that is what happens if
I grab a specific product. So I'm going to come back
over to my shell here. So you'll see here, again, we
lit up the product service. We lit up the pricing service. And in this case, we lit up
the location service as well. So these are all
running in the mesh. They're all protected
by mTLS, and that covers the Product Team. So I'm going to move quickly
over to the inventory. And so let's see here. Let's take a look at
what these guys have. So the inventory team. Again, same thing. They're running on GKE. They have their
very own project, their very own cluster,
their very own mesh. We're going to go ahead and
connect into that one now, so I'm just going to copy the
command to set my credentials. Go ahead and execute that. Create some space
for us over here. And then I want to take a look
at the logs for the inventory service. And you'll see that indeed, we
had a hit over to the inventory service here as well. So let's go over here. So what does that mean? Well, it means that the
product service called over to the inventory service, right? And we know that there is
external access to this as well. So let's just try to go ahead
and access this same guy from outside and
see what happens. So if we go there, we see,
oh, permission's denied. The Apigee Istio adapter
in this case denied that, and it's looking for an API key. So wait a second. What does that mean? How did the product service call into the inventory service? Well, the way these meshes are set up, they're actually a dual control plane setup. What does that mean? It means you've got two separate Istio meshes, but they're essentially acting as one mesh. And the way that that's done is through some magic with CoreDNS, as well as sharing a root certificate authority. And what we essentially
do is we create a service entry that advertises
the services that we're going to be calling. And I'm going to go
ahead and show you what that looks like here. And this service
entry in this case is actually going to be
deployed in the product cluster, because they're calling out
to the inventory cluster. So just two things I want to
point out in this little YAML. So one is the host. So it's inventory-service.default-- default being the namespace-- .global, which is saying, hey, this isn't in my local mesh. I need to go look
somewhere else for this. And then I'm going to point out
this end point address here. That's actually the
Ingress Gateway. So you remember Dino talked
about the Ingress Gateway? That's actually
the Ingress Gateway to the Logistics Team's mesh
where the inventory service is running.
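A sketch of that ServiceEntry, with the endpoint address a hypothetical stand-in for the Logistics mesh's ingress gateway:

```yaml
# Deployed in the Product cluster: advertise the remote inventory service.
apiVersion: networking.istio.io/v1alpha3
kind: ServiceEntry
metadata:
  name: inventory-global
spec:
  hosts:
  # default is the namespace; .global means "not in my local mesh"
  - inventory-service.default.global
  location: MESH_INTERNAL
  ports:
  - number: 80
    name: http
    protocol: HTTP
  resolution: DNS
  endpoints:
  # The ingress gateway of the Logistics Team's mesh (hypothetical address);
  # 15443 is the SNI-aware port Istio uses for cross-mesh mTLS traffic.
  - address: ingress.logistics.example.com
    ports:
      http: 15443
```

So it's just that simple. It calls over there. Mutual TLS works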
across these meshes just the way it does
within an individual mesh. So now that said, let's go
ahead and grab a cURL with a key here just to show that indeed,
you can also access this from outside the mesh. And indeed, so we
called with the API key, and you'll see we got a
product ID and a quantity back from the inventory service. So my point that I want
to make with that is it's getting the full benefits
of API management. That third party could go
into the developer portal just like we saw for the
mobile app, get their key, all of those things. But it's calling, and it's not
coming through the SaaS Gateway. In this case, it's going
directly to the mesh, and the Apigee Istio Adapter
is verifying that key, as well as sending analytics back to Apigee for reporting, just as the Gateway itself would. So with that, it almost
wraps up the demo. There's just one
other thing that I want to talk about briefly,
and that is the product team, as they were going through,
they're actually modernizing. And they had an existing service
that was serving up the product catalog. It is actually a
fairly modern service. It was RESTful in nature, but
they wanted to change the API. They made some breaking
changes to the API as they were modernizing. So the first thing they did is
they used the Apigee Gateway, and they actually used
that to mediate and expose that API in the new version
to some new customers that wanted to come in and
use that in that way. So that was the first thing
they did before they actually built the service. And then they start to build
this new service in the Service Mesh here, and they
used the Apigee Gateway to actually help with
the migration process. And one of the ways
that they do that is based on the client
that's calling in, they can set it up and say, hey, a percentage is going to go
to either the old service or the new service,
so they can slowly roll to the new
service that way.
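In this demo the split is configured in the Apigee Gateway, keyed off the calling app. The same weighted rollout can also be expressed mesh-side; a hedged Istio sketch with hypothetical service names:

```yaml
# Mesh-side alternative: send half the traffic to the legacy product-catalog
# service and half to the new one.
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: product-catalog
spec:
  hosts:
  - product-catalog
  http:
  - route:
    - destination:
        host: product-catalog-legacy   # existing monolithic service
      weight: 50
    - destination:
        host: product-catalog-v2       # new microservice
      weight: 50
```

So the app that I've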
been using so far has been set to 100%
to the new service, but I actually have another
key that is set to 50-50, so it's doing a bit of math. It's not always exactly 50-50. It's random in nature. But we'll go ahead and try to
call this a couple of times. In this case, it looks like
it's going to be exactly 50-50. So you see my first
response there, that was the modern service. We've dummied it up a little
bit here for the demo. It's got a smaller
response block, and it's a little faster. The second one is actually
going to the legacy service. It's got a bigger response
block just so you see that. Now, if we could switch
back to the slides, please. All right, so just
a very quick recap. And I went through
that pretty quickly. So mobile app calling
through the API Gateway to get to
the product service. That product service is living
inside of a Service Mesh. That Service Mesh, that
product service is actually calling out to another service
in a completely separate Service Mesh, but through
the dual control plane setup, it's actually acting
as one Service Mesh. And then there's
a third party that again, calls to that
inventory service, but it doesn't go
through the gateway. It uses the Apigee
Istio adapter. With that, I'm going to
call Dino back up on stage, because I've got some
questions for him. So my first question is,
when do I need Service Mesh Infrastructure? DINO CHIESA: So
my opinion on this is anytime you have
a significant number of cooperating microservices,
get a Service Mesh Infrastructure in place,
because you want that mTLS-- I thought you were going
to throw this to me. GREG KUELGEN: I was
thinking about it. I was going to throw it at you. DINO CHIESA: I was a little
scared there for a minute. You want that mutual TLS. You want to be able to
secure the communications between the two peers or
many, many different peers. And that becomes
really important. And if you get Istio,
then you get a bunch of other things for
free-- the observability, the aggregated logs,
and some other things. GREG KUELGEN: OK,
well that's fine. But my consumers, they need
a response pretty quickly. What about latency? DINO CHIESA: It's magic. There is no additional latency. GREG KUELGEN: Oh,
that's awesome. You heard it here first, folks. No additional latency. DINO CHIESA: Actually, OK. Let me just correct that,
just a little adjustment. There is a little
bit of latency. Small, but you're not
going to notice it. So small, invisible. GREG KUELGEN: Invisible. DINO CHIESA: No, seriously,
there will be some latency. There's an extra hop. There's a proxy there. You guys all saw
it in the picture, so there is going
to be a hop there. But it's a sidecar,
so that proxy resides in the same
pod with the service. So the hop to the proxy is
going to be relatively quick. The hop that the proxy makes
to the other proxy, that's the same hop that was
there in the first place. So the only change
is really the in-pod hops that go from proxy to
service and service to proxy, and that we think is going to
be small, like sub millisecond. It depends, obviously, on load. But the question
you've got to ask is, is the latency
cost, which is real, worth the benefits that you get? And we think it is. GREG KUELGEN: OK, so is the
takeaway from this, then, that if there's a mobile
client, then use API Management? DINO CHIESA: Almost always. That's almost always the case. Now, if you have
mobile clients, that's not the only reason
to use API Management, but mobile clients in
particular introduce factors that you're going to want
API Management to handle. The perimeter security that is
required for mobile clients, it's beyond just TLS
enforcement or JWT verification. You might want to do things
like malicious content filtering or XML bomb
detection or adaptive rate limiting or traffic
pattern analysis, and maybe even token binding, where if
you get an OAuth token, you can bind it to a
particular TLS cert. So lots of things on the
perimeter, API Management makes a lot of sense for that. GREG KUELGEN: Perfect. So then, what I'm hearing
is any external client uses API Management. DINO CHIESA: Yeah, probably. If you have external
clients, that means traffic is
coming from outside. Those clients are going to
be outside of your control. So even if it's a
client that let's say you built as a company and
you're giving it to consumers, it's running on
consumer mobile apps, as soon as you do that,
there's going to be a hacker. There's going to be a bunch of
hackers that kind of decompile that thing and figure
out what's going on and reverse-engineer the API. So they're going to be sending in API calls that look just like bona fide calls, which is why you want that perimeter security. So almost always. GREG KUELGEN: OK. Does API Management make sense
if all my clients are internal? DINO CHIESA: So if they're
all internal, probably not. But I think that that
scenario that you just raised seems to be unlikely. So the trend is towards
more and more connection, more and more clients. Any entity, any company,
any organization that is digitally isolated
is not going to thrive. So there's got to be some desire
to have inbound communications. Now, a project may be serving
only internal clients, and in that case,
you probably don't need API Management for that. But when you have external
clients, it sure makes sense. GREG KUELGEN: And before I
get to my next question, one of the things I like to say with
that is that your internal API today is your
external API tomorrow. You just don't realize it. And I've seen that over and
over again with customers. DINO CHIESA: A company builds
an API, it gets a lot of use, and somebody, some product
manager or executive says, you know what? We should extend that to
this partner over here. It is now external. So you're going to rapidly
need API Management there. GREG KUELGEN: So my next
question is kind of like, don't I get API Management with
Service Mesh Infrastructure? DINO CHIESA: Yes. You get some of the features of
API Management in the Service Mesh Infrastructure. So you get authentication,
you get the TLS capability, you get observability,
access control. But as we said, in particular
in the things that you showed, you want to enable developers, self-service, to go figure out what APIs are available,
get credentials and embed those credentials
into their apps. So it's not always
going to be an API key. It might be a different
set of credentials. It might be a public
private key pair that you're issuing
as credentials. But you want to have
that be self-service. You don't want to have people
e-mailing around saying, hey, I need credentials, and here
we're going to send an email. That's not secure. That's not the way to do it. You also need a different
set of reporting, monitoring for
APIs that are being used by external
clients, shared APIs. Business people
are going to want to see what's the trend in
the number of developers that are using my APIs, the
number of different partners. Or for errors, is there
a particular developer that has built APIs that
is causing more errors? Different kinds of
monitoring and reporting is enabled by an API
Management platform. That's probably not something
you want on the internal APIs. So you do get some API
management capability in Service Mesh Infrastructure,
but it's probably not enough when you start to share out. GREG KUELGEN: So there's
some overlap there. So I'm going to go
back to latency. Again, my consumers
are super demanding. Don't I get latency incurred
with API Management? DINO CHIESA: Latency free zone. GREG KUELGEN: Again. DINO CHIESA: Yes. Magic. GREG KUELGEN: I'm impressed. DINO CHIESA: No, so there
is obviously a latency. Every time you add an additional
hop, there is a latency. And what we showed in
the demo, what you showed was a cloud-based API
Management Gateway. So it was running somewhere
on an external endpoint accessible to the internet. GREG KUELGEN: Magic. DINO CHIESA: Anything
that was calling in had to go through the public
network, get into that Gateway, and then into the internal mesh
or the services in the mesh. So yeah, if you have an internal
client and it's doing that, it's going out the
public network, coming in through that
gateway, and there's going to be a hop there. So yes, there can be. But not necessarily. So at this conference,
as you know, we've announced the
beta availability of the Apigee Hybrid
Gateway, which is the same Gateway that we
just saw you demonstrating. Same capabilities. But it is able to run inside
the Kubernetes cluster. So it's not a sidecar. It's not running right
next to the service. It's still a shared
Gateway, but it's in the cluster, which means
it's on the local network. It's going to be fast. It's going to be low
latency, and you're going to be able to stay
on the private network. So if you have
sensitive traffic, there's no regulatory
issue there either. GREG KUELGEN: That's cool.
benefit analysis, and there's a
mitigation where you can run that Gateway inside
the Kubernetes cluster. GREG KUELGEN: So I think
we may have covered this, but do all of the
API requests hop out onto the public network? DINO CHIESA: I think
maybe we covered this. So the answer is,
not necessarily. If you use the Apigee
SaaS service today, it's going to happen. But if you run that gateway
inside, then you don't. GREG KUELGEN: What
about, is Istio required? DINO CHIESA: So Istio is
something that you used. And I kind of broke it
down, did a quick overview of what it does. It's not actually required
in order to manage services. You can manage services,
different companies use different things. They may use their own approach. They may use HAProxy, they may use NGINX, and couple that with some log aggregation tools and call that their Service Mesh. There's nothing wrong
with doing that. And, in fact, if a
company's large enough, there's going to be
heterogeneity in the approaches that they've taken. So it's not absolutely
required that you do Istio. We think Istio is a good idea. It's being built to address
the 80% case at least for most people. So we think it's
worth looking at, but it's not
absolutely required. You can still do a Service Mesh with your own toolset
with API Management according to the patterns
that we just described. That still is going
to work great. GREG KUELGEN: So I think this
is maybe my last question, but this sounds
super complicated. Do I really need
all of this stuff? DINO CHIESA: Nope,
you don't need it. If you don't want
to conduct business, you don't need any of this. GREG KUELGEN: All
right, awesome. There we go. And on that-- DINO CHIESA: Look. As soon as you have
multiple services, you're going to want the
capabilities of a Service Mesh Infrastructure. You're going to want that. You want to protect the
transmissions between them, and that's the
best way to do it. As soon as you
start sharing APIs, you're going to want
API Management software. Now, if you're a startup, sure. Be thoughtful about the
things, the components, the tools that you embed
into your architecture. You don't want to build
the world at first. Maybe you're on
this MVP approach. OK, I get it. Maybe you can
economize, minimize, do it yourself on
different things. If you're an enterprise,
you're a big company, you have governance
requirements, you've got regulatory
requirements, you need tools, infrastructure
to help you govern and manage and control those
sorts of things, and you're probably going
to need both of these in the right doses now. So this is how we
think they compose, and this is probably a good
recommendation for most big companies. It's not about having cool
tools and cool infrastructure. GREG KUELGEN: And
I like cool tools. DINO CHIESA: I know you do. It's about getting
things done more quickly. Composing Service Mesh
Infrastructure and API Management allows the business
to innovate more quickly, and that's what we're
talking about here. GREG KUELGEN: Awesome. DINO CHIESA: So with that kind
of back and forth dialogue, Greg and I tried to summarize
the decision criteria for when you want to use
more or less API Management. I think the call on services
management is really simple. If you have more than just
a handful of services, you're going to want some
Service Mesh Infrastructure. You're going to want
something like Istio. At least Envoy, or
some other proxy, and you're probably going to
want a control plane as well. Istio is worth looking at. But for API Management, it's
a little bit more nuanced. Where you apply it
is maybe interesting. As we said, big enterprises are
probably going to be using it, but the question is where. At what layer? At what interface? And these are some of the
decision criteria summarizing what we just talked about. [MUSIC PLAYING]