[MUSIC PLAYING] ADAM ROSS: So show
of hands, who here is running a monolithic
application, a big single piece of software? All right. And how many of you have
started to split that off into smaller services? OK, so that's a
good number of you, but there's still
a lot of people-- MARTIN OMANDER:
Almost everybody. ADAM ROSS: No, no. That was-- MARTIN OMANDER: What was it? 50%, 60%? ADAM ROSS: I think half. MARTIN OMANDER: Half. OK. ADAM ROSS: So half have
started, and half are ready to see where this goes. So today, we're talking about
migrating to microservices in the context of serverless. Serverless is a great fit
for building microservices because of how
quickly you can get something started and launched. So here, we have
a Cloud Function. It's just a couple
lines of code. But it's enough to get a
proof of concept going. MARTIN OMANDER: Is that
a complete microservice right there? ADAM ROSS: Yes. It's not a very smart one. It's just sending some
environment variables along to the requester. But that's enough to
integrate with something and see if you can
use microservices with your application. And if you want to deploy
it, that is a one-liner. A few flags there for the
first time, but not very hard. So today, we're going
to follow that step and journey further into
the world of microservices with Into The Woods. They're an online
camping gear store. And they have been doing a lot
of great business investing in their technology
to drive commerce. But before we talk
about them too much, it's worth knowing they're
an imaginary company. They're going to be
an example for us to follow through their
journey and then figure out which lessons they've
learned might best apply to your own organizations. And the monolithic application they have now is starting to accumulate technical debt quickly, slowing down how fast they can build software. It's time for them to start
looking into what else they can do. And so one of their
developers, Tess, is going to lead us into
those new approaches. MARTIN OMANDER: It's a shame
they're totally imaginary. I really like that backpack. I'd like to buy that. So Tess, the developer,
she has this thought about, microservices could help us
with release velocity and dev velocity. Now, she needs to
study up first. What are these microservices? So let's go through
some quick definitions. Monolith, that's what most
of your applications-- you raised your hand-- that's when you have
a big ball of yarn and everything's
sort of connected or could be connected
to anything else. Microservices are when you
have specific services that are independently built,
deployed, operated, and scaled. Microservices could
take requests. They could potentially
call each other, as well, to fulfill requests. Let's zoom in on one
of those microservices. What's inside a microservice? There are four things that
we think are very important. There are lots of
definitions out there. Adam and I think that
there are four things that set a microservice apart. One is it has an API. If you're going to talk
to the microservice, you talk to the API. No other way in. It has some compute resources. So this is a way to actually
run code, run business logic. Most microservices,
virtually all, will have some kind of storage-- database, file system, in-memory
cache, something like that. And then the fourth thing is
actually a non-technical thing. We think it's very important
that the microservice has a team associated
with it as well. So if this service breaks, then
we know what team to talk to. If this service needs
to be refactored, then this team has
full ownership, and nobody else can come in and
override what they are doing. These guys are the experts
on this microservice. So we're going to follow Tess's
microservice journey here. She's going to lay the foundation, then build some brand-new functionality as a microservice. After that, Tess and team will
take existing functionality in from the monolith and break
that out into a microservice. Tess will sit down with her boss and plan what to do with microservices in the future. And finally, there'll be
some contemplating about what we have learned here today. So first, laying the foundation. Doing this sort of digital
transformation, where you break up a monolith and change the way all the developers and ops teams at your company work-- that's not a big bang. That is like 100 small
brown bag lunches, a lot of explaining to people
who don't know what these are, getting the team onboard. After that, Tess would go and
talk to her manager about, I want to go the
microservice route. The team thinks we should
go the microservice route. Her boss is Sang, who is
the CTO of Into The Woods. Now, this conversation can
go one of several ways. Tess could go to
Sang and say, we should break up our monolith,
microservices are great. What would Sang say? Do we have any CTOs
in the audience? AUDIENCE: Yeah. MARTIN OMANDER: Oh, yeah? OK. Very good. I think you would say, CTO there
in the back, yeah, of course, let's do it. But what's the ROI? Is that right? OK, very good. And this is a hard
conversation, because we don't have hard numbers. As developers, we all
feel that microservices are easier to manage and build. But it's really hard to
prove that's the case. So Tess bides her time. A few weeks later,
Sang comes to Tess and says, hey, the
business people, they want a
recommendation engine. So based on what's
in the cart, we should show
recommendations to users. Now Tess is in a much
better place, because she can say, huh, me
and the team, we have this great
new approach that will give us greater dev
velocity for this new feature you want to build. And now, Sang is interested. So Tess tells him more. ADAM ROSS: Yeah,
it's great that Tess has this opportunity with a new
feature that might be valuable but isn't mission
critical immediately. Because it could be faster
for her to build this as part of their existing application. But as a new service, it'll
take a little more time to get right. But in the long term, maybe
it'll be more sustainable. Tess's approach is
going to be iterative, using a lean approach
of making sure that each step of the
way through this they've made the right decisions and
are building the right thing. Initially, the main concern is,
will this particular approach of building recommendations
as a microservice work on a technical level? And furthermore, will it work as
a way of having multiple teams collaborate? Can the API contract prevent
having a lot more meetings as a result of having two
different software systems? MARTIN OMANDER: Ah, we
don't like meetings, do we? No. ADAM ROSS: No. And if that works out, it's
time to move forward and start to see, can this recommendation
system drive some money? So Tess is using a serverless
approach in her exploration here. And as we discussed
earlier, that has a number of advantages,
especially for an R&D effort like this, where she doesn't
have the backing of a big ops team and doesn't have a
large pre-established budget. Serverless allows a low
infrastructural management solution and a
pay-as-you-go model, which means the experiments they
run are the experiments they pay for. There are three main
serverless options on GCP-- Cloud Functions, App
Engine, and, as you may have heard this
morning, Cloud Run, a new product that allows
serverless containers to be run. For now though, Cloud Functions
seems like the way to go. This is a very highly scoped
recommendation service. And that fits with the
tight focus of a function. Here's the architecture of
the recommendation service. The monolith is going to make
requests of the recommendation service, sending the product
IDs from the shopping cart. And the recommendation
service will send the recommended
product IDs back. For now, this is going to
be a very simple system. Dave from marketing
has a couple ideas of what they should be
recommending that day and is going to
tell the development team to go ahead and deploy
those recommendations. MARTIN OMANDER: So these are
hard-coded recommendations more or less? ADAM ROSS: Yes. But the important thing is
that those recommendations eventually show up on
a shopping cart page. MARTIN OMANDER: Hm. ADAM ROSS: So there's always a
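[That first, deliberately simple iteration can be pictured as a tiny lookup. The product pairs below come from the demo later in the talk; the function shape is an illustrative sketch, not the slide's code.]

```javascript
// Iteration 1: hard-coded recommendations, redeployed by the dev team
// whenever Dave in marketing asks for a change.
const RECOMMENDATIONS = { map: 'compass', boots: 'laces' };

// Given the product IDs in the shopping cart, return recommended product IDs.
const recommend = (cartProductIds) =>
  cartProductIds
    .map((id) => RECOMMENDATIONS[id])
    .filter((rec) => rec !== undefined);
```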
few challenges from that CTO, or from any kind of
technical review. Can you really
proceed with this? Will it work out? What if that new
service doesn't respond? Well, as a slightly
lower priority service, asynchronous requests from
the monolith with a time-out will allow it to move on if
that service has an interruption and does not respond
with a recommendation. Is there a scaling challenge? Well, a serverless
approach allows you to scale up horizontally
quite quickly and then back down again to zero
when you're no longer making recommendation requests. And for setup and maintenance,
well, Google engineering is helping you with that
infrastructure challenge. So you don't have to carry
a pager for the networking issues. So we're back to the
code shown earlier. This is the first iteration
of a recommendation service. And it is deployed and
working successfully. So with that as a success,
it's time to move on to the next iteration. Can they actually drive
some business value by iterating this service
forward and making smarter and more
intelligent recommendations? If that works, make
it even better. And if not, maybe
their algorithm isn't working out so well,
only recommending socks all day long. MARTIN OMANDER: Well,
you do need socks, lots of socks for camping. ADAM ROSS: So the
architecture that we have for this second iteration
might look a little like this. This is probably familiar. It's the same API as we
saw on the earlier slide. But because the
recommendation service is separated from
the monolith, it can change its
implementation and get more intelligent without the
monolith being any the wiser. Now, instead of Tess
deploying changes when Dave from marketing
makes requests, Dave has access to a Google
Sheet, into which he can enter the
recommendations, using it as a quick and dirty back office
application, instead of Tess and team needing to write a
whole additional application and database to pull that off. Martin, could you show
us what that looks like? MARTIN OMANDER: Yeah. Switch to the demo, please. So I am now wearing
the marketing hat. I'm Dave in marketing. Up here, we have the URL
for the microservice. The only product in the
cart right now is a map. So if we hit the
microservice, it returns that if you
have a map in your cart, a compass is recommended. That is because
Dave has entered, if the requested
product is a map, the recommended
product is a compass. And if you have
more products here, so one product was the map. Another one could be boots. And then we get the
compass and laces. So boots leads to laces. Now, Dave loves this, because
he can go in here and say, well, actually, we are
rebranding our store. Wouldn't it be
cool, Dave thinks. We should be a
nautical camping store. Let's send a sextant to
people as a recommendation, if they have bought a map or if
they have a map in their cart. So now, when we reload,
now we see that a sextant is recommended here. This gives marketing
great power. They love how they can edit
these as many times a day as they feel like. Let's have a look at
the code for this. Switch back to the
slides, please. So the code to read from a
Google Sheet is quite easy. First, we create an auth object
here with a spreadsheet scope. Notice that there
are no spread-- sorry, there are no passwords. There are no OAuth keys. There are no secrets,
none of that stuff, because Tess simply took the service account that this code runs as and added it as a collaborator on the spreadsheet. And boom, all the
access is taken care of. After that, we need to
create a client object. And then here is actually where
the call to Google spreadsheets is done. And we need to send two
things along as parameters-- the spreadsheetId, of course--
that's that long ID-looking thing that's up in
the address bar-- and then the range, so
what columns and rows we want to read. Once you have that, you have a
response.data.values object. And in this one, you have
a two dimensional array of the cells from
the spreadsheet. Put a little set logic
and stuff on top of this, and you have the
service we just ran. And it clocks in at-- how many
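[Put together, the Sheets read just described might look like the sketch below. The googleapis Node.js client is assumed, and the spreadsheet ID and range are placeholders.]

```javascript
// Turn the two-dimensional array of cells (response.data.values) into a
// lookup table: requested product -> recommended product.
const toLookup = (rows) =>
  Object.fromEntries((rows || []).map(([product, recommended]) => [product, recommended]));

// Read the recommendations sheet. No passwords or OAuth keys appear here:
// the service account the function runs as was added as a collaborator
// on the spreadsheet.
const readRecommendations = async () => {
  const { google } = require('googleapis'); // assumed client library
  const auth = await google.auth.getClient({
    scopes: ['https://www.googleapis.com/auth/spreadsheets.readonly'],
  });
  const sheets = google.sheets({ version: 'v4', auth });
  const response = await sheets.spreadsheets.values.get({
    spreadsheetId: 'YOUR_SPREADSHEET_ID', // the long ID from the address bar
    range: 'Sheet1!A2:B', // placeholder: requested and recommended columns
  });
  return toLookup(response.data.values);
};
```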
lines of code was it in total? ADAM ROSS: Less than 100 lines,
especially without the 30 more for authenticating to
an API on top of it. MARTIN OMANDER:
Yeah, nobody likes to write authentication code. So this is a great success. Sang, he loves it. He says to Tess,
this is amazing. I love this approach. And Tess, of course, she
wants to build on that success and says, yes, yes,
there are many more ways we can use microservices. We still have this big
monolith over here, remember. Now, Tess and
team, they sit down and plan out the third iteration
of the recommendation engine. They have a few different
things they could do. Like, machine learning perhaps
could drive recommendations instead of the spreadsheet. They could drive it off
of the current customers' previous orders. They could analyze
orders in aggregate. There's so many
things they could do. But before they have
time to build version 3, it's time to break out some
logic from the existing monolith. Because Sang comes to
Tess and says, look, we are really having a
problem with payments. Payments are just not working. We're losing revenue
right and left. Could you sprinkle some
microservice magic on payments and make them work? What will Tess say here? Yes, of course. Let's do it. This is a great way of
getting more microservices into the company. So first off, she looks at the
existing code for payments. And I don't know,
many of us have worked in existing code bases,
and we may have seen something like this. Sprinkled throughout
the monolith are various payment
operations that go off to the external
payment provider. But also, there are
these weird ways where some squiggly
line goes off through some weird
libraries that we think hits the payment processor. There is also this odd thing,
where this piece of code that hits the previous
payment processor, we don't think it's being
used, but who knows. It's checked into
source control. There's also this other
line off in a module that's not about payments at all that
goes off to somewhere else, and we're not quite sure where. This is the way existing
code bases often look, right? It sort of accretes over time. And nobody really knows
what's going on in all places. And nobody dares go
in and delete it all. So Tess, she thinks, we can
really get some focus here with microservices. Let's see how that
would be done. ADAM ROSS: You know, it
seems like one of the biggest challenges is that payment
is really critical, but there's no one who's
owning all those pieces. Well, with a
microservice approach, you can at least have one
service in the code base, in the solution, that
someone, some team, can own and make sure it
doesn't start becoming squiggly. So here, we only have
very straight lines, showing the API calls from
the monolith making requests to the payment service, which
does all the interactions with the payment processor. That also has the
nice result of being able to swap out that
payment processor, if you want to make a change,
and only touch the one service. That API, that needs to be
thought through, though. What exactly are those various
operations this payment service should support? Well, before getting into
a technical design session, we need to be strategic. A technique called event
storming might be used here to figure out what are all
the different kinds of things that might belong in
a payment service? And this requires both
the technical experts and the subject matter
experts to get into the room. The technical experts
might know specific things about the existing
implementation and have a lot of insight into
the internal implementation events that matter. But the domain experts are going
to have a much clearer idea of all the things that haven't
been possible to do so far or might be really
important to do in a year that could be critical
to inform the architecture. So once you have a
long list of events that all seem
related to payments, it's time to start separating
them out into two piles. What are the events you'll
include in your payment service? And what are the
events that you're going to exclude as being
not quite focused enough to belong here? So you might authorize payments,
charge a previously authorized payment, refunds, cancellations. But excluded from that, an
inventory function, order status and shipping. These are things that might be
intimately related to payments but are a different domain
area with different owners and different problems. Maybe those are microservices
for the next exercise. MARTIN OMANDER: All right. So Sang, the CTO
and Tess's boss, he's really encouraged by this. Now, we actually know what
we're doing with payments. But, he says, there's
one thing missing here. There we go. There's one thing missing here. Today with the monolith, we
are dropping some payments. Now, you're breaking
out the payment service from the monolith,
which is great. But we could still
have this problem that we have with
the monolith today. So what if the monolith sends
an authorization request, like authorize this credit
card number for $100, and somebody put a comma in
the wrong place in the payment service? Or the network is down? Or the payment
processor is down? Then a 500 is returned
from the service. OK, that's good. But we're still in this
position where we've lost money. We have a payment that
hasn't been authorized. We cannot charge
this payment later. How does your microservice
magic fix that? Tess thinks this through,
reads up on it a little bit, and she comes up
with the solution. Asynchronous messaging
is the way to go, because instead of just having
the monolith talk directly to the payment service, it can
go through Cloud Tasks, which is a fairly new product in
the Google Cloud Platform. Let's see how that would work. So the monolith
would send a message, authorize this credit card
for $100 just like before. Fire and forget, needs
to do this once only. Cloud Tasks would
then send that message on to the payment service. Somebody put a comma wrong. There was a network
error, whatever. There's a failure. The payment service is
a well-behaved service. It returns a 500 server error. Now here, things start going differently from the previous slide. The Cloud Tasks component
doesn't give up. It will keep resending that
until there is a success. And then as soon as the status
code returned from the service is in the 200s, that's
how Cloud Tasks knows that, OK, I can
lean back, I don't have to send it any more times. ADAM ROSS: Hey, Martin,
how long will it keep trying those requests? MARTIN OMANDER:
Yeah, you can set it. But at a maximum,
it can be 30 days. And you can set it to
shorter if you want to. You can also adjust the
exponential backoff behavior here. OK, so Sang says, good,
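[The fire-and-forget enqueue on the monolith side might be sketched like this. The @google-cloud/tasks Node.js client is assumed, and the project, location, queue, and URL are placeholders.]

```javascript
// Build an HTTP task: Cloud Tasks will POST this body to the payment
// service and keep retrying until it gets a 2xx back.
const buildAuthorizeTask = (url, payment) => ({
  httpRequest: {
    httpMethod: 'POST',
    url,
    headers: { 'Content-Type': 'application/json' },
    body: Buffer.from(JSON.stringify(payment)).toString('base64'),
  },
});

// Enqueue once and move on -- the retrying is Cloud Tasks' job.
const enqueueAuthorization = async (payment) => {
  const { CloudTasksClient } = require('@google-cloud/tasks'); // assumed client
  const client = new CloudTasksClient();
  const parent = client.queuePath('my-project', 'us-central1', 'payments'); // placeholders
  await client.createTask({
    parent,
    task: buildAuthorizeTask('https://payment-service.example.com/authorize', payment),
  });
};
```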
good, that will work. And you know, I talked
to the ops team. They really liked this
microservice approach. But, Tess, those Cloud Functions we had before-- the JavaScript we uploaded to Google without really knowing where and when it was running-- the ops team, they want to
be a little more hands-on and have more control
of the stack here. How can we get that with your
microservice approach, Tess? Tess thinks about it and
says, well, it actually-- do they want containers? And Sang says, yes, containers
is exactly what they want. But you know what, containers
is usually a lot of work. We want that sort of low
ops you've been telling us about that serverless has but
the control of containers. How would they do that? ADAM ROSS: Yeah,
Tess doesn't really have time to stand up a
Kubernetes cluster today. Luckily, there's a new
solution for this-- Cloud Run, a new
beta product that marries the worlds of
serverless with containers. You can wrap your
service in a Dockerfile, build a container image,
and throw it at the cloud. And it will answer
HTTP requests. And then if this service
were to keep growing and the Into The
Woods ops team wanted to get more heavily
involved, Cloud Run on GKE would allow them to manage
that Kubernetes cluster and move those containers
over as they deploy them to the new location. So using Cloud Run, we
have all requests coming in from the monolith,
hitting Cloud Tasks now as a way to ensure the
resiliency of the system. And then Cloud Tasks will
pace out the requests to Cloud Run, which will then
send them on to the payment processor and
stores status events in Cloud Firestore, a
unstructured serverless storage option. MARTIN OMANDER: So what do you
mean when you say serverless for the database there, Adam? ADAM ROSS: A lot
of database systems aren't really serverless. And serverless compute systems,
like Cloud Run or Cloud Functions, are
going to scale much higher than a typical single
instance is able to go. So that's where
Firestore comes in. MARTIN OMANDER: Cool. ADAM ROSS: It also has
some nice features, like eventing on updates. So you might wonder, oh,
what kinds of crazy hijinks do I need to get to in my code
to build a Cloud Run service? MARTIN OMANDER: [LAUGHS] ADAM ROSS: This is a pretty
simple almost "hello world" like application
written in Go that is going to respond
to any request with "Payment Approved,"
which is how we all would like payment processors
to authorize our credit cards. MARTIN OMANDER: So there was
no Cloud Run-specific code there that I could see. ADAM ROSS: No, there
is absolutely none. The one wrinkle perhaps is that
the port that your HTTP service needs to bind is going to come
in from a port environment variable, which is
a little unusual. But it's still
flexible, and you can override that as you need to. So this service needs to
be wrapped in a Dockerfile. A Dockerfile is a relatively
straightforward package manifest using industry
standards around how containers work, be it GKE or Cloud Run. And this has two stages,
a little more complicated than most Dockerfiles. But this has a
build stage, which is going to compile our
Go code into a binary; and then a production
stage, which pulls from a stripped down
variant of Linux called Alpine, copy in that binary,
and then run it whenever this container is started up. Deploying is two
commands here, one using Cloud Build to take all
of this service and Dockerfile, upload it to Cloud Build to
have a container image prepared and sent on to Google
Container Registry. Then to deploy, gcloud beta run
deploy takes that container, puts it into Cloud Run,
and starts you serving. I have found that the first
time might be a little bit slow, but most deployments
take about 30 seconds. So that service, ready to go. Let's start rolling
out into production. Now, as a whole new
service performing a mission-critical function,
you might not want to have it immediately take
over all payment processing. So the monolith is going to
need to be slightly adjusted to send just some of the
traffic to this new service. That means, unfortunately, going
into all the gnarly old code and making adjustments there. Hopefully, you've already
got an audit in place to figure out where all that
is so it can be deleted. MARTIN OMANDER: So they deploy,
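[The monolith-side canary switch can be as small as the sketch below; the environment variable name is a hypothetical setting, not from the talk.]

```javascript
// Route a configurable share of payment calls to the new service,
// starting at 1%. PAYMENT_CANARY_PERCENT is a hypothetical setting.
const CANARY_PERCENT = Number(process.env.PAYMENT_CANARY_PERCENT || 1);

// rand is injectable for testing; defaults to Math.random().
const useNewPaymentService = (rand = Math.random()) => rand * 100 < CANARY_PERCENT;
```

Ramping up is then just a configuration change: 1%, 5%, 10%, and eventually 100%.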
they do this canary release. They send 1% of the traffic
through the new payment service. It works great. They send 5% through
the new payment service. It works great. 10%, they-- eventually, 50%. They send 100% through,
and it's working. No payments are dropped. Everything scales beautifully. The ops team is happy,
because they have containers and the magic of serverless. This seemed to be a
very easy rollout. But we all know that
there is no such thing as a trouble-free rollout. There is always something
that doesn't go exactly like you expected. In this case, Thalia,
the controller-- we've seen her before-- she
comes into Tess's office and says, hey, all my
financial reports are broken. This was a week
after they ramped up. On Friday, she comes in,
all my reports are broken. Why is that? Tess starts looking
at this closer and sees that, ah,
one of the reports actually has two pieces of data. One piece lives in the monolith. And the other piece now
lives in the payment service. And before, when everything
lived in the monolith, it was far easier to
run these reports, because both pieces of
data were in the monolith. How do we fix this? This, by the way, if
you've been working on migrating to
microservices, you will have seen this problem
in one form or another. Or if you haven't
yet, you soon will. This is data fragmentation-- big problem, especially
for reporting. So of course, the problem here
is we have the data over here on the far left,
and on the right is Thalia, who wants
her report, right? Well, it turns out-- Tess looks into this,
and it turns out that Cloud Firestore
has triggers. So she can build a new service, an ETL service, that is triggered whenever something is written. It can sanitize the data. It can do some light
data transformation and then put the data in a
data lake, or reporting service if you wish. The monolith has an
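[That ETL hop might be sketched as a pure transform plus a Firestore trigger. The trigger registration is shown commented out because it depends on the firebase-functions package, and the field names are illustrative.]

```javascript
// Pure transform: flatten a payment document into a reporting row for
// the data lake. Field names are illustrative.
const toReportRow = (payment) => ({
  orderId: payment.orderId,
  amountCents: Math.round(payment.amount * 100), // normalize to integer cents
  status: payment.status,
});

// Trigger registration sketch (firebase-functions API, assumed):
// const functions = require('firebase-functions');
// exports.etl = functions.firestore
//   .document('payments/{paymentId}')
//   .onCreate((snap) => writeToDataLake(toReportRow(snap.data())));
```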
existing reporting module. That one needs to be
repointed to this data lake, so it sends its transactions
and its orders to the data lake. Now, all the data is in the data
lake, the reporting service. This is kind of like
in the monolith days they probably ran a
separate reporting server. This is the equivalent
in serverless land, microservice land. Now that all the data
sits in the data lake, they can now build the
reports off of that. So Thalia had to go without
her reports for a few days. It was not ideal. They had forgotten
about reports when they did the event storming. But they didn't lose any money. Black Friday rolls around. Now, we're running
into a big problem. Sang goes into Tess's office
and says, all the API calls to the payment processor are
returning "quota exceeded" errors. What should we do? All hands on deck. What is going on? This is something that if you're
migrating to microservices, if you haven't run into
this sort of problem before, you soon will. The problem here, of course, is
you have a chain of components talking to each other. And they all have different
scaling characteristics. So some of them scale very
well, and others not so well. For example, they may
have a contractual rate limit with a payment processor. You can only send
one transaction a second or something like that. You might also have seen it if
the component on the far right is a SQL database that's
not serverless and doesn't scale as well. You might have seen
it if you connected to an HR system or a CMS
system or other system that can't handle the kind of load
that serverless can handle. What do we do? If you read and check
out the literature, you will see lots of
thoughts on how to fix this. Like, you can
write all this code to write a circuit
breaker component. Or you can implement
back pressure that leads back through,
that ripples back through this chain. There are so many
things you can do. Most of them involve
a lot of coding. And we all know, if you
write a lot of new code to fix something,
you might fix it, but you will also introduce
a lot of new bugs. Then Tess remembers,
they picked Cloud Tasks. Cloud Tasks actually has
this knob you can turn. You can adjust the rate. So the monolith can throw
any amount of transactions per second at Tasks. Tasks will absorb it. And then it will only send
out to Cloud Run as many as it's asked to. So Tess goes in and
checks the Cloud Tasks console. And she sees that,
oh my gosh, we have a lot of backed up tasks here. 2,500 transactions are
sitting there and waiting to be authorized or charged. Because they're
using Cloud Tasks, they can go in and change,
turn that knob and set max-dispatches-per-second
to 1. The default is 5. They turn it down to 1. Now, all of sudden, they're
only sending 1 per second to the payment provider. The payment provider
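[Turning that knob programmatically might look like the sketch below; the @google-cloud/tasks client is assumed and the queue path is a placeholder. The same change can be made with gcloud or in the console, as in the talk.]

```javascript
// Build the queue update: cap the queue at maxPerSecond dispatches.
const buildRateLimitUpdate = (queueName, maxPerSecond) => ({
  queue: { name: queueName, rateLimits: { maxDispatchesPerSecond: maxPerSecond } },
  updateMask: { paths: ['rate_limits.max_dispatches_per_second'] },
});

// Apply it: throttle the payments queue down to 1 dispatch per second.
const throttleQueue = async () => {
  const { CloudTasksClient } = require('@google-cloud/tasks'); // assumed client
  const client = new CloudTasksClient();
  const name = client.queuePath('my-project', 'us-central1', 'payments'); // placeholders
  await client.updateQueue(buildRateLimitUpdate(name, 1));
};
```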
accepts the traffic. And after about
45 minutes or so, they will have worked
through this backlog. Excellent. Tess is still a little
shaky, shaken up over this. But Sang says, this is
actually a great job. Any major transformation,
any major migration will have some
hiccups along the way. This microservice approach
lets us move faster and experiment more. He loves it. Tess, for her part,
she's just very happy that this worked out. ADAM ROSS: Well, now that Tess
has been undoubtedly promoted to chief microservices scientist
of Into The Woods, it's time to start thinking about a
more deliberate approach to how they do
microservices in the future. So one lesson that she might
have learned from all of this is that a microservice
is a good fit for well-understood problems. A user profile service
might be easy for anyone to understand and
anticipate in advance, so maybe that can be created. But for harder
challenges, for areas that need a lot more innovation
and exploratory development, trying to stay on the
path of a microservice might be a source
of slowdowns, might get in the way a little bit. So it's important
not to always be too focused on building a
microservice on the things you don't yet have a mastery of. Moving from there,
Sang and Tess, they might have talked
about how many microservices do we want to have? Should we have 1,000 by the time
we're done with all of this? Well, feel that out as well,
move a little bit more slowly. You see, the trade-off
of microservices is really between developers and
operators, or at least development and operations. When you have a
small service, it is much easier for a
developer to really get into all of the code and
develop an expertise in it. And so when you have
more small services, development complexity
might go down. But as you add more and more
services and more interactions between them, you start to gain
more operational complexity-- more deployments, more
cascade failure possibilities, and harder troubleshooting. So there's a sweet
spot in there. That trade-off is
going to be very unique to every
organization, because it's very particular to the people. Do you have more senior
operations engineers or more senior application developers? Between those two, you'll
find your sweet spot. Now, with a serverless
approach where you've taken a lot
of that ops overhead and pushed it onto the cloud,
that line for ops complexity will flatten a little bit. And that means your sweet
spot can slide over. And it becomes more
reasonable to have a larger number of
microservices in your arsenal. So as they've been
adding new microservices and splitting some
off from the monolith, each of those services
has a stakeholder. This is an owner, whether
it's on the business side or otherwise, that
understands that service and the role of that
technology in the company. They can be a champion
of that technology through the rest of the
company, but they can also be an ongoing subject matter
expert for the service. This kind of direct
line of ownership also prevents a lot of
confusion about what priorities are for an individual service. MARTIN OMANDER: So
what did we learn here? What are the lessons? Tess sits down and thinks
carefully about this. This is lessons
that we hope you can apply in your organizations. One is Conway's law. This was created
by Melvin Conway. He wrote it down
over 50 years ago. It is as true today as when
it was first written down. Any of you who has
worked on systems, or if you worked on a
website for an organization, you've experienced this. What this law is saying is that any system you build will mirror the organizational structure that built it. You've seen this when building
websites for companies. The website hierarchy,
page hierarchy tends to follow the org chart. So if you have many
small teams, you will get many small
microservices. That means that you can
actually do an inverse Conway maneuver here. If you want small services
over there on the right, then you create
many small teams. If you want a big monolith,
you have just one big room full of developers. And everybody's
on the same team. Next observation is that
weightlifting is really hard if you do it by yourself. But weightlifting
becomes a lot easier if you have like five or six
of your friends to help you. So the monolith is really
doing a heavy lift here before, but it's far easier when the
other services are helping out. As a matter of
fact, the monolith doesn't have to do
as much work here, because the other
services help out. You might even discover that the
monolith is home sick one day and can't come in
and lift weights. And the other services
can sort of do the lift. And eventually, you
might have only services. That's far into the
future for Into The Woods, but it's something to
consider and think about. It's also something that we
need to be a little careful when we talk to the
monolith team about. We don't talk about how we're
going to shut down their stuff. As a matter of fact, this
also has another name that you might have heard of. That was a little too
confrontational for a slide, we felt. Martin Fowler calls
this the strangler pattern. This is where you have
a tree in the forest. And all these vines
grow up around it and suck energy from
the tree, eventually killing it. And then all the
vines are there. But team weightlifting-- [LAUGHTER] Let's talk through that instead. Then you need an intrapreneur. So an intrapreneur is somebody
who has an entrepreneurial mindset but works inside
an organization. This was Tess in this case. You need that intrapreneur. You also need that opportunity. In this case, it was the
recommendation engine. And that set them on the
path to microservices. So you can be the intrapreneur
in your organization. So keep looking for
those opportunities. [MUSIC PLAYING]