Neal Ford - Building Microservice Architectures

Video Statistics and Information

Captions
Good afternoon, everyone. My name is Neal Ford, and I'm going to talk about building microservice architectures. For those of you who saw my keynote this morning, this is an example of one of those evolutionary architectures I talked about, but I'm going to go into a lot more detail about this specific architecture today. What I'm going to talk about this afternoon is, first of all, what problem we're trying to solve here. You shouldn't just adopt a microservice architecture because it's the new shiny cool thing on the block; it's there to solve particular sets of problems, and in fact there are some sets of problems it is specifically not well suited for. I'm going to talk about some of the characteristics of microservice architectures, and then about some of the engineering practices required to make it work. One of the unique things about the microservice style of architecture is that it's the first post-DevOps revolution architecture, meaning the architecture takes all the DevOps engineering practices as part of the architecture. It's the first post-DevOps architecture, and it won't be the last one, because I think going forward that's going to be an integral part of architecture; it gives you some unique advantages.

So let's talk about what problem we're trying to solve with microservice architectures. I'm going to first talk about the more traditional service-oriented architecture that was all the rage five or six or maybe ten years ago. Service-oriented architecture was all about taxonomy: categorizing services and having very specific responsibilities for each layer of services, so I'll talk about each of these layers briefly. At the top of this service-oriented architecture are the business services. These are abstract, enterprise-level, coarse-grained services owned and defined by business users; they're the abstract things that you do as a business. For things like "execute trade" or "place order", the common test you apply is: can you say "I am in the business of executing trades or placing orders"? These abstract things frequently have no implementation details; sometimes they're specified in metadata formats like WSDL or XML or BPEL.

The next layer of this architecture is the enterprise services. These are the things that actually implement the details of those top-level coarse-grained services. They are still coarse-grained, but these are concrete implementations, owned by shared services teams. This is very frequently the land of integration architecture, so it may be that you're integrating with PeopleSoft or Oracle Financials to make some of those things happen. These are services like "create customer", "calculate quote", or "validate trade".

Then you have application services. These are concrete, application-level, fine-grained services owned by application teams. They are one-off kinds of services that a particular application needs, like geolocation or search, but that you don't really want to expose to all of the services in your ecosystem, so they're generally bound to a specific application context, meaning they are attached to a single application. These are things like "add driver", "update address", or "calculate sales tax".

Next you have infrastructure services. These are the plumbing services required in the architecture: logging, auditing, and all the non-functional requirements.
These are typically concrete, enterprise-level, fine-grained services owned by some sort of shared services or infrastructure team, and they are basically your non-functional requirements. Finally, the defining characteristic of this style of architecture is the message bus in the middle, which may do anywhere from none to all of these things: mediation, routing, choreography and orchestration, several different levels of method and message enhancement and transformation, up to protocol transformation, translating between serialized objects and JSON and all these magical kinds of facilities.

So when you look at a service-oriented architecture, you have to ask: what problem are we trying to solve with this architecture? What led us down this path, and what was the advantage of having an architecture like this? One of the advantages is that it allows you to maximize reuse; at least, that was the goal, because what you really wanted to do was build these enterprise services and then stitch them together to satisfy those top-level coarse-grained services. So there was a real desire to maximize reuse and to maximize what I call canonicality: single sources of truth for things. This was also a time when operations was more expensive and harder than it is now, because it was very common to be dealing with operating systems that had per-user licensing, and with licensed software for database servers and application servers, so there was a real desire to maximize reuse of machinery as much as possible, and we built a lot of really elaborate shared-resource kinds of architectures.

The problem this kind of architecture introduced, though, is that incremental change is hard here. If you start thinking along domain lines, like customer or orders or catalog, where does "customer" live in the service-oriented architecture? It lives in all sorts of places: partly in the abstract pieces, partly in the concrete implementations. There's no clearly defined concept of a single domain entity, so if I want to make a change to something in customer, I have to touch lots of moving parts, and when I touch those moving parts I have to make sure they all get deployed together correctly. There's a lot of operational complexity here, and that's the other downside to this style of architecture: these different parts typically run on different machinery, whether virtualized or otherwise, with a lot of integration architecture headaches to go along with that.

Which brings me around to the problem we're trying to solve with microservice architectures. Microservice architectures were heavily influenced by the book I mentioned this morning, Domain-Driven Design by Eric Evans, which is ostensibly an object modeling technique: a way of taking a big, complex problem and breaking it down into smaller pieces so that you can understand it more effectively. As I also mentioned this morning, there's a really good guide to implementing this in software by Vaughn Vernon called Implementing Domain-Driven Design. The idea from domain-driven design that has been really, really influential on architects over the last few years is the bounded context idea, and in fact that's really what the microservice architectural style is: taking that logical bounded context idea and making it a physical reality, using the engineering practices we find in continuous delivery.
So let's look at the characteristics of a microservice style of architecture, in contrast to the service-oriented architecture we saw before. This is still a distributed architecture, meaning all of these services live in their own processes. The API layer you see here is purely optional; in many diagrams you don't see it at all. It's generally just there to facilitate calling through something like a user interface to get to this constellation of services, the group of services running here. There is specifically no logic in that API layer, so it is nothing like the message bus we saw before, for reasons I'll get to in a second; in this architectural style, in its purest form, you are absolutely forbidden from putting any sort of logic in that API layer, and I'll show you why shortly. Typically the communication through this API layer is something like HTTP and REST, or messaging over a lightweight message queue, and I'll talk more about that in a second. All of these components are completely separately deployed, and that includes the API layer and every single one of the microservice components, because they're completely decoupled from one another operationally.

The service components here may consist of several different classes and hierarchies. The goal is to have small chunks of functionality, so you typically won't have really large hierarchies in here, but you could in fact have a framework inside one if you needed to do some really fancy calculation; it would not be unreasonable to embed a business rules engine in one of these if it needed to handle some specific task like that.

The real defining characteristic of this architecture, though, is the domain-driven design bounded context idea, and this is why you never see any logic in the API layer: everything that pertains to a particular service is supposed to be part of that bounded context. That may include things like databases, because persistence is part of the bounded context, and if you completely decouple that from everything else, you get a really nicely decoupled architecture. That doesn't relieve you of the responsibility of doing things like backups and restores and all the other things you're supposed to do with data, but it does move responsibility for the databases away from the DBA group and onto the individual service teams, which frees them to choose their own flavor of data store much more effectively. And notice it's not necessarily just a database: maybe the service component here is a search component, and there's a search engine it needs to boot up and use to facilitate search. The idea of the bounded context is that, operationally, when you deploy that component, all of these dependencies get deployed along with it as a single unit, because the goal is a shared-nothing kind of architecture. I say shared nothing, but there are actually two things we end up sharing here, and I'll talk about those specifically in a second; from a technical architecture level, though, we're trying to decouple all these components as much as possible.

Service orchestration in this world means that components call each other to get stuff done.
There are a couple of different flavors of this, which I'll talk about a little bit later: either orchestration or choreography. But notice there is no central calling authority like a message bus here, so these services have to call one another if they need to coordinate.

When you look at monoliths versus microservices, monoliths are very much a shared-resource kind of architecture, whether you're talking about something in an application server or just a monolithic application in general, and the way you scale monoliths is to get multiple monoliths together and build clustering protocols between them. The microservices world is the opposite of that: the idea is that each service runs in its own process. This is really because of the evolution of the way operations has happened over time. Ten years ago, operating systems were expensive commercial things you had to have licenses for, so there was a really strong effort to get shared-resource architectures to work effectively, so that you could have fewer licenses for a database server, fewer licenses for an operating system. Maybe you remember the days when database servers were priced by number of users, then by number of CPUs, then by the power of the CPUs, and by more and more Byzantine combinations of those things. But then something happened, one of those inflection points I talked about this morning that upset the equilibrium in the software development world: open-source operating systems, which had been free for a long time, suddenly gained a new superpower with tools like Puppet and Chef. Now you can spin them up on demand very easily, store operating systems as source code, make easy changes to them, and have single sources of truth for them. And when operating systems are free, why not run each service in its own operating system as a single process? One of the headaches in a shared system is the unexpected interactions the pieces have with one another; if nothing else interacts with a service's environment, you have more certainty that you're running just that one thing. If operating systems are free, why not do that level of isolation? Now we see this down to the level of containers: running one thing per container as the representation of a pristine operating system.

Another perspective shift that's quite common in the microservices world is the projects-versus-products perspective I mentioned this morning. Projects tend to happen in siloed organizations, where you have a group of developers who hand something off to someone in operations and then re-form and build another project somewhere else that they'll also hand off to operations. It's very easy in these organizations for blame to happen, because of that silo problem and all the coordination between silos. It's much more typical in microservice architectures to have the cross-functional, Inverse Conway Maneuver teams I talked about this morning. Amazon's two-pizza teams are a good exemplar of this: no team at Amazon is bigger than 10 or 12 people, and they're all cross-functional, meaning they own their service entirely. They have an operations person and a DBA, they have developers and whatever resources they need to build and put their service into production, along with a product owner.
Notice that this removes a lot of the needless bureaucracy that exists in siloed organizations, where developers are trying to get things deployed and operations is stonewalling them and preventing deployments because they're trying to batch things up. All of that goes away here, because every team is responsible for deploying their stuff on their own schedule; it's part of their job to deploy the things they're building.

Another common characteristic in this world is smart endpoints and dumb pipes. Again, message buses are not well liked in this world, because what you're doing there is smearing your bounded context out across your integration architecture, so we tend not to like enterprise service buses in this kind of architecture; we like smart endpoints and dumb pipes. The typical transport here is HTTP when we can get away with it, because it's easy to debug, but if you need something binary for performance, or because of the bulk of data, lightweight message queues are commonly used. Now, if you're in an architecture that has an enterprise service bus and you don't use any of the intelligence in it, and just use it like a dumb pipe, then we're not mad at it anymore. The problem with things like enterprise service buses is that the information you're putting in the bus is really part of a bounded context, and you're violating that bounded context philosophy of microservices by externalizing some of that context into another moving part. We're trying to break the coupling between moving parts in the architecture to create these shared-nothing kinds of architectures.

In fact, it's quite common in this world to standardize on integration and not on a platform, so it's quite common that you don't standardize on a single implementation language or even a single development stack. Some companies build some of their services in Java; Google's Go language has become a very popular choice for building tiny services, because it has very few dependencies, you can build standalone executables, and it has a very efficient tool chain. Now, we're not suggesting that every single development team gets to pick its own special, magical development stack that no one else in your organization can possibly understand, but if you look at most large organizations on the spectrum of too few languages and platforms versus too many, most large companies fall way too far toward too few, so what we're trying to do is push this back more pragmatically toward a good balance for your company. My colleague Sam Newman has written a really good book from O'Reilly about microservices, and I'll give you some of his advice along the way. Here's the first bit from Sam: standardize in the gaps between services, and be flexible about what happens inside the boxes. This is, in fact, one of the two things we do share in this style of architecture. I said it was a shared-nothing architecture before, but here's one of the things we do share: how the services integrate with one another. It's quite common to pick REST level two or REST level three as the common integration style, because what we really want out of these microservices is this kind of Lego-ability: being able to pull one service out and put another one in its place, and they have to know how to integrate with one another to facilitate that. So that's one of the things we do share in this architecture: how services integrate with one another.
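As an illustration (not from the talk), here is a minimal sketch of a "smart endpoint over a dumb pipe": a service that owns its own logic and exposes it as plain HTTP and JSON using only the JDK's built-in com.sun.net.httpserver package, with no bus or middleware in between. The service name, port, and path are illustrative assumptions.

```java
import com.sun.net.httpserver.HttpServer;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.nio.charset.StandardCharsets;

// A minimal "smart endpoint" sketch: the service owns its logic and data,
// and exposes it over a dumb pipe (plain HTTP + JSON), with no message bus
// or shared middleware in between. Names and paths are illustrative only.
public class BestSellingWidgetsService {

    public static void main(String[] args) throws Exception {
        HttpServer server = HttpServer.create(new InetSocketAddress(8081), 0);

        server.createContext("/widgets/bestsellers", exchange -> {
            // In a real service this would come from the service's own data store,
            // which lives inside the same bounded context.
            String json = "[{\"id\":1,\"name\":\"Widget A\"},{\"id\":2,\"name\":\"Widget B\"}]";
            byte[] body = json.getBytes(StandardCharsets.UTF_8);

            exchange.getResponseHeaders().add("Content-Type", "application/json");
            exchange.sendResponseHeaders(200, body.length);
            try (OutputStream out = exchange.getResponseBody()) {
                out.write(body);
            }
        });

        server.start();
        System.out.println("best-selling-widgets service listening on :8081");
    }
}
```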
I'll talk about the other one in just a second, but Sam also says: have one, two, or maybe three ways of integrating, not twenty. This is not an excuse to go off and build your own special software stack; what we're trying to do is right-size technology choices for the problem you're facing. He also says: pick some sensible conventions and stick with them, which I think is good advice.

Another common characteristic of these microservices is decentralized data management. It's very easy, when you have a monolithic application architecture or a monolithic database, to do transactions, but that becomes much more difficult in the microservices world, so you typically have eventual-consistency environments here rather than hardcore ACID transactional environments. In fact, that is Sam's advice: avoid distributed transactions if at all possible. This is really one of the trickiest things, particularly if you're migrating from a monolithic architecture to something like a microservice architecture, because one of the things you'll find is that you have a multitude of problems around different kinds of coupling. Certainly you're aware of the afferent and efferent coupling between classes that exists within your architecture, how classes depend on one another and the spiderweb of dependencies they create, but that's only one of the kinds of coupling you have to worry about as you transition from one architecture to another. Another one is temporal coupling: how things are transactionally tied to one another. So if you are taking a monolith and migrating it toward a microservice architecture, the first cut should be around transactional contexts, not around structure. Preserve the transactional context as long as you can, and then do whatever level of distributed transactions you can get away with, or just cheat and say, okay, these three services share the same relational database, because it's too hard to break that transactional context apart; you'll end up creating way more trouble than it's worth. It's probably better to cheat on the purity of the architecture than to try to invent some distributed transaction mechanism that works across a widely disparate set of transactions. Having said that, you should try as much as possible to embrace things like eventual consistency, because a lot of these transactional problems go away if you can avoid distributed transactions and move toward eventual consistency. You see a lot of sites do that, even large e-commerce sites: at Amazon, for example, you don't really get into a transactional context until very late in the purchasing process; everything else is handled much more asynchronously.

Another really common characteristic in microservice architectures is infrastructure automation: using things like deployment pipelines to make sure you get as efficient a throughput in your engineering as possible, to get things out into production. Deployment pipelines facilitate that by creating a gated environment where you apply more and more sophisticated tests as your system moves through the pipeline. This is also critically important in a very common pattern in microservice architectures: the combination of a microservice architecture and continuous deployment. In other words, as you make changes to your microservices, you want to get those into production as quickly as possible.
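As an aside (not from the talk), here is a minimal sketch of the eventual-consistency idea described above: rather than a distributed transaction spanning two services, one service commits locally and publishes an event that another service consumes a little later. A java.util.concurrent.BlockingQueue stands in for a real message broker, and all class and field names are illustrative assumptions.

```java
import java.util.Map;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.LinkedBlockingQueue;

// A sketch of eventual consistency: instead of a distributed transaction spanning
// two services, the order service commits locally and publishes an event; the
// billing service consumes the event and updates its own data some time later.
// The BlockingQueue stands in for a real message broker.
public class EventualConsistencySketch {

    record OrderPlaced(String orderId, double amount) {}

    static final BlockingQueue<OrderPlaced> broker = new LinkedBlockingQueue<>();
    static final Map<String, Double> orders = new ConcurrentHashMap<>();
    static final Map<String, Double> invoices = new ConcurrentHashMap<>();

    // Order service: local commit + publish, no two-phase commit with billing.
    static void placeOrder(String orderId, double amount) throws InterruptedException {
        orders.put(orderId, amount);                  // local "database" write
        broker.put(new OrderPlaced(orderId, amount)); // publish event for others
    }

    public static void main(String[] args) throws Exception {
        // Billing service: eventually consistent consumer in its own thread/process.
        Thread billing = new Thread(() -> {
            try {
                while (true) {
                    OrderPlaced event = broker.take();
                    invoices.put(event.orderId(), event.amount()); // its own local write
                }
            } catch (InterruptedException ignored) { }
        });
        billing.setDaemon(true);
        billing.start();

        placeOrder("order-42", 99.95);
        Thread.sleep(100); // consistency arrives "within seconds", not instantly
        System.out.println("invoice for order-42: " + invoices.get("order-42"));
    }
}
```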
Some of you are familiar with this document we create at ThoughtWorks called the ThoughtWorks Technology Radar. We just finished the session on it last week in Chicago, so you'll see another radar sometime near May 1st. One of the things we're talking about in our most recent radar is doing QA in production, because if you can do continuous deployment, you can actually do QA against your production environment rather than staging it somewhere first. In particular, if you have things like feature toggles in place, go ahead and put that stuff into production and just QA a branch of production that has the feature toggles turned on. That has way less uncertainty than trying to create a production-like environment, QAing it there, and then still worrying when it goes into production; if you can just do QA in production, that's even easier. But to be able to do that kind of stuff you need automation, and this is what I was hand-waving about this morning. A really common pattern in a deployment pipeline where I'm doing continuous deployment is this: I have microservices and I'm making changes to some service, but I want to make sure it works against what's currently running in production, because I'm going to continuously deploy it there, and it also needs to correctly implement the new features that I'm eventually going to want, so I'm testing against two different possible scenarios. That's what's referred to as a fan-out in a deployment pipeline: you can do several things in parallel, as efficiently as possible. If both of those tests pass in those environments, to make sure it can integrate successfully, you fan back in and do the deployment phase as a singular thing, and then get it into the actual environment so you can do stuff with it. Having this infrastructure automation in place gives you the freedom to make changes to these services on a really rapid basis, which is quite common in this world.

These microservices are small, single-responsibility things, and you hear several different measures of this. You hear people say things like "small enough to fit inside your head", or "so small that you would rewrite it rather than maintain it". The idea is that if you know what the new version of a particular service needs to do, why bother opening the old one to read it? That's old news; why not just crumple it up, throw it away, and write the new one? If it's only 50 or 100 lines of code, why bother looking at the old one to figure out what's no longer true? Just dispose of it and put the new one in its place. People give various ranges of lines of code, which has always been a terrible metric for anything in software; the single-responsibility idea is really the thing to key on, because if you're doing a migration, say from a monolith to something more service-based like microservices, single responsibility can serve you really well. The first cut of your monolith may be a large chunk with a single, very broad responsibility: maybe you take your monolith and break it into three smaller things, each of which has a broad but single responsibility, and then take each of those things and break them into smaller pieces. So this becomes a good guideline regardless of how big your services are: a single responsibility and a domain focus is really a good lens to look at them through.
So that's a bunch of the characteristics of this architecture. Let's talk about the problem we're trying to solve here: what do microservices give you that other architectural styles haven't? It's this idea of being able to maximize easy evolution at the architectural level, this kind of Lego-ability: take one of these services, pull it out, pop another one in its place, and no one notices that it happened. For a long time in software architecture, one of the ways we defined software architecture was as "the stuff that's hard to change later", and that's true in a service-oriented architecture, in a monolith, and in many architectural styles. But it turns out that if you build an architecture with change built into it, suddenly change is not expensive anymore. That's exactly what we've done in microservices: built an architecture where change is not only inexpensive but expected as part of the architecture. Now, because of this, it's operationally complex: you have to automate all the machine creation and service discovery and logging and a bunch of other stuff I'm about to talk about in the engineering practices. But the real advantage is that you can evolve this in an incremental, continuous kind of way. This is the first architecture where change is a first-class citizen; I say it's the first architectural style developed post-continuous delivery, and it won't be the last. I think all architectures will take this into account going forward.

So let's talk about some of the benefits of this style of architecture, with a case study done by my colleague James Lewis. Many of you are familiar with the Martin Fowler article that introduced many people to microservices; it was co-authored by James Lewis. He did a presentation on this case study; the paper is available, and it's also on video on InfoQ. This is a banking application built as a microservice architecture. There are four applications: config, call center, marketing, and reporting, and here are some of the microservices, with details for a few of them. The configuration service had to handle a thousand transactions per second with a 99th-percentile latency of less than two seconds; this is a big financial services company. The user service had to handle a hundred million active customers (not a hundred million concurrent customers, fortunately, but this is a large financial company). The raw transaction store service had to handle bulk loads of 30 to 90 million records every night, so it not only had to handle that volume, it couldn't fall over and die, because that would prevent them from getting all those records loaded; if you're a bank, you can't get behind on that, so there's a really strong SLA for it. Oh, and because this is financial services, the entire thing had to be PCI level 1, which is a really strict level of compliance for this kind of application. But notice one of the real advantages of the microservice kind of architecture: for this raw transaction store they had to build a whole bunch of really robust machinery to keep it alive, but because it's operationally decoupled from everything else, that didn't ripple as a side effect into all the other services. Each one of these services could be tuned to what we thought its volume was going to be, to the level of scalability required for that service versus the others; we didn't have to build those characteristics into the entire architecture and take on that extra complexity.
Because these are all operationally distinct, you can build really unique characteristics at the service-by-service level if you need to. Another good example of a common pattern in this world: let's say you're running the WidgetCo site and you need to draw the widgets home page for your users, and you have a microservice architecture. A user makes a request into your architecture, so you make requests out to a bunch of services and you start a clock running. The clock is really important, because you have a specific SLA for getting this page drawn. I saw the actual number for Amazon recently: Amazon knows down to the microsecond how long they have to draw their page before an unacceptable percentage of people will get bored, wander away, and buy a book somewhere else. They know exactly how long they have to get the richest possible page done in the right amount of time, and that's what the clock is for. One of these service calls in WidgetCo may be "give me the best-selling widgets", because I'd really like to put a best-selling widgets chart on the page, but I don't want to punish my users by making them wait for best-selling widgets if the answer can't come back fast enough. So another one of these calls goes to the cached best-selling widgets as of two hours ago, and it comes back almost right away. Some of these calls come back right away, and others trickle back over time; they aren't displayed yet, they're aggregated to get ready for display. Eventually the clock expires and it's time to render the display. Some services may still try to send stuff in: I'm sorry, you were too late this time; I had a strict contract I had to meet. Maybe you'll get back soon enough next time, so that I don't have to use the cached best-selling widgets and can show the real numbers instead. The point is, I don't want to punish my user by making them wait for something they might not even want, and that's part of the strategy here: timely partial results are better than slow complete results. What you return here is typically optimized for ranking and aggregation, not necessarily for display.

You can do some really magical things with highly asynchronous kinds of architectures. While this is not strictly a microservice architecture, here's a good example of how asynchrony can benefit you: have you ever wondered how Google does that semi-magical "did you really mean this?" thing? The way they do it is that when you make a search request, you don't make a single search request; it spawns several hundred, including common misspellings like transposed letters and misplaced vowels, and if they get really good search results back for one of your misspellings and really bad results for everything else, they'll say "you probably meant this": I got a lot of trash back for what you really typed, so I think you really meant this. Think about how many searches they're doing concurrently: for every one of those requests they're spinning off multiple concurrent requests, and it comes back really fast. That's the idea of building really, really highly asynchronous environments.
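As an illustration (not from the talk), here is a minimal sketch of the "clock" idea described above: each asynchronous service call gets the same rendering deadline, and any call that misses it falls back to a cached value, so the page renders with timely partial results. The service names, latencies, and 200 ms budget are illustrative assumptions.

```java
import java.util.List;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.TimeUnit;

// A sketch of "timely partial results beat slow complete results": every call gets
// the same rendering deadline, and any call that misses it falls back to a cached
// value so the page can be drawn on time. Names and timings are illustrative.
public class PageAssemblySketch {

    static String callBestSellers()     { sleep(350); return "live best-sellers"; } // too slow today
    static String callRecommendations() { sleep(50);  return "recommendations"; }
    static String callAccountSummary()  { sleep(80);  return "account summary"; }

    public static void main(String[] args) {
        long pageBudgetMillis = 200; // the "clock" for this page render

        CompletableFuture<String> bestSellers =
                CompletableFuture.supplyAsync(PageAssemblySketch::callBestSellers)
                        .completeOnTimeout("cached best-sellers (2h old)", pageBudgetMillis, TimeUnit.MILLISECONDS);
        CompletableFuture<String> recommendations =
                CompletableFuture.supplyAsync(PageAssemblySketch::callRecommendations)
                        .completeOnTimeout("cached recommendations", pageBudgetMillis, TimeUnit.MILLISECONDS);
        CompletableFuture<String> account =
                CompletableFuture.supplyAsync(PageAssemblySketch::callAccountSummary)
                        .completeOnTimeout("cached account summary", pageBudgetMillis, TimeUnit.MILLISECONDS);

        // When the clock expires, render with whatever came back; late calls are ignored.
        List<String> pageSections = List.of(bestSellers.join(), recommendations.join(), account.join());
        System.out.println("rendering page with: " + pageSections);
    }

    static void sleep(long millis) {
        try { Thread.sleep(millis); } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
    }
}
```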
But here's where you get to the real benefit of this architecture. One of the things the Clojure community gave us, besides Clojure itself (which is a really cool thing): the creator of Clojure really loves words, and he resurrected an English term from the 1800s, "complect", which means to intertwine or braid things together. He suggests that many problems in software are places where we've accidentally complected things together, and one of those, very commonly, is complected deployments: when you release components, you're also releasing the feature to your users at the same time. That takes on more risk than necessary. If you could separate those two things, you would incur less risk: deploy things, make sure they work operationally, and then turn on features later. That's exactly what these microservice architectures allow you to do, and it is in fact a really common pattern: components are deployed into a production ecosystem, very frequently continuously deployed, meaning that as soon as I've made a change to that code it has gone through a deployment pipeline, all sorts of testing scenarios, probably some integration testing with other things, and then it has been put into my production ecosystem, live. It hasn't had an impact on anything yet, though, because the features are still hidden inside it. At some point later you can expose features by turning on feature toggles, which are also quite common in this continuous delivery world. That allows you to run several services side by side with almost identical capabilities while still not really impacting users, because applications in this world typically just consist of the routing between the services you have running. At any given time you may have several different variations of a service running, and different routes to different services. This is the thing that enables A/B testing, which is also quite common in these kinds of architectures: you roll out a new variation, and then you have what's called a feature toggle router at the front of your application that says, okay, 20% of people get this new feature, let them try it out and see if they like it better, and 80% get the old one. It's really common in this world that you manipulate applications via the routing of the services they take.

This is the idea of evolutionary architecture I was talking about earlier today: you have a service, you roll out an enhanced version of it, and then services that need that capability can slowly migrate to the new version over time. At some point, no one is routing to the old version of the service anymore. Remember I said part of what makes this architecture work is operations and DevOps; this is a good example. The monitoring that happens in the operations world lets us know when no one is routing to that old service anymore, which allows us to disintegrate it automatically out of the ecosystem. So we can put services out there and then get rid of them when they're no longer needed, which creates this very evolutionary style of architecture. What this suggests is that, if you have the right engineering practices in place, the larger the number of services you have, the lower you drive your risk of release, because when you're deploying a monolith it's binary risk: it's all or nothing, it either works or it doesn't. But the more services you have, the less of your ecosystem you're displacing each time, and in particular if you have good testing in place, this actually drives your risk of release down the more services you have.
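As an illustration (not from the talk), here is a minimal sketch of the feature-toggle router mentioned above: a configurable percentage of users is routed to the new version of a service and everyone else to the old one, with a stable hash so each user gets a consistent experience. Class names, URLs, and the 20% split are illustrative assumptions.

```java
import java.util.List;

// A sketch of a feature-toggle router: applications are "just the routing between
// services", so rolling out a new service version means routing some percentage of
// users to it. Names, URLs, and the 20% split are illustrative assumptions.
public class FeatureToggleRouter {

    private final String oldVersionUrl;
    private final String newVersionUrl;
    private final int percentToNewVersion; // e.g. 20 means 20% of users

    public FeatureToggleRouter(String oldVersionUrl, String newVersionUrl, int percentToNewVersion) {
        this.oldVersionUrl = oldVersionUrl;
        this.newVersionUrl = newVersionUrl;
        this.percentToNewVersion = percentToNewVersion;
    }

    // A stable hash of the user id keeps each user on the same side of the experiment.
    public String routeFor(String userId) {
        int bucket = Math.floorMod(userId.hashCode(), 100);
        return bucket < percentToNewVersion ? newVersionUrl : oldVersionUrl;
    }

    public static void main(String[] args) {
        FeatureToggleRouter router = new FeatureToggleRouter(
                "http://recommendations-v1.internal", "http://recommendations-v2.internal", 20);

        for (String user : List.of("alice", "bob", "carol")) {
            System.out.println(user + " -> " + router.routeFor(user));
        }
    }
}
```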
So let's talk about some of the engineering practices that are so necessary to make this architecture work. One thing that is quite common here is designing for failure: clients need to respond gracefully to provider failure. We do this through things like monitoring, and we're very aggressive about it, for example with business-relevant monitoring. We will look at any business metrics we can gather as part of our deployments and monitor them. For example, we had a client not too long ago that ran a subscription-based business, and they had a monitor in place that said: if we roll out a change and the number of subscription re-ups drops more than a certain percentage, we automatically roll that change back. They don't know that the change caused the problem, but it turns out they like money a lot, and every time the money slows down they get worried, so they want to make sure they understand what's going on before it happens again. We also do architectural monitoring: monitoring routes and services to see which ones are being routed to, so we can disintegrate the ones that aren't. We also do semantic monitoring. For example, it's really common in a RESTful kind of world to have a whole set of integration tests for versioned endpoints, to make sure the versioned endpoints are getting resolved correctly. We want to take some subset of those development-time tests and run them against the production infrastructure to make sure that's still true in production, particularly for critical things where problems crop up a lot, like version resolution for endpoints; we'll take some of our tests and run them against production just to make sure they still work correctly. And there's machine monitoring. Monitoring, of course, is tricky on a monolith; it's virtually impossible in a microservices world without some help. Sam's advice here is that you have to get much better at monitoring, and what that suggests is that you need tools: many of the things in the microservice world are impractical without tool support. One of my colleagues says that if you've deployed more than three services by hand, you've already lost. It's not uncommon in these kinds of environments to generate thousands or tens of thousands of monitoring events and log messages a second. Fortunately there are a lot of good tools to handle this: tools like Logstash and Kibana let you filter and aggregate these things and get a nicely rolled-up picture of what's going on. This is particularly true of aggregating monitors: you will probably want to monitor individual services to make sure you understand their characteristics, but you also want the ability to aggregate those calls, so that you can find out, for example, how long a single request takes to go through your ecosystem. Many of these tools support creating a monitor per service but also an aggregating monitor, so you can get the total for, say, a request through your ecosystem. Capture metrics and logs for each node and aggregate them to get a rolled-up picture; that's advice from Sam.
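As an illustration (not from the talk), here is a minimal sketch of the semantic-monitoring idea above: a development-time style assertion about versioned-endpoint resolution, run against production infrastructure, using the JDK's java.net.http client. The URL and the expectation being checked are illustrative assumptions.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

// A sketch of semantic monitoring: a development-time style check, run periodically
// against the *production* infrastructure, verifying that a versioned endpoint still
// resolves correctly. The URL and expected content type are illustrative assumptions.
public class VersionedEndpointCheck {

    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();
        HttpRequest request = HttpRequest.newBuilder(
                URI.create("https://api.widgetco.example/v2/customers/health"))
                .header("Accept", "application/json")
                .GET()
                .build();

        HttpResponse<String> response = client.send(request, HttpResponse.BodyHandlers.ofString());

        // In a real setup this result would feed an alerting/monitoring system
        // rather than just printing to stdout.
        boolean ok = response.statusCode() == 200
                && response.headers().firstValue("Content-Type").orElse("").contains("json");
        System.out.println("v2 endpoint resolved correctly in production: " + ok);
        if (!ok) System.exit(1); // signal failure to whatever scheduled this check
    }
}
```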
Another common engineering practice in this world is synthetic transactions. Particularly if you have a basic-availability, eventual-consistency kind of world, you want to make sure that transactional things end up looking like transactional things. Eventual consistency doesn't mean the transaction is going to show up in a week or two; it means it shows up within a few seconds. But you want to make sure that the end result of the operation is the transaction you wanted: that the washing machines that were purchased were in fact purchased and debited correctly. That's the idea of synthetic transactions: build into your architecture a flag you can flip that says, okay, I want this to go through all the normal processing, and when it gets to the very end there's a check that says "are you synthetic?" If you are, throw it away; otherwise commit it to a database somewhere, which triggers a bunch of other interesting stuff. This allows you to trace, in production, the way transactional behaviors work, so you can still push transactions through to test production systems and make sure that the things that are supposed to act transactionally do in fact act transactionally. There's a famous story about this: be sure you remember to flip the flag that says "this is a synthetic transaction". There was a client who was running a test and forgot to flip that flag, and a truck showed up at the home office and started unloading 200 washing machines. No, no, we actually don't want the 200 washing machines; that was a synthetic transaction. So be careful that you actually flip the switch when you need to flip the switch.

As I mentioned before, it's quite common to do things like A/B testing in this world. And where do you get scalability in this world? You spin up more instances of services. That's a really common pattern operationally: services that are going to get a lot of traffic probably have a built-in input queue, and when the input queue starts filling up, you monitor that, spin up more instances of the service, and start spreading those requests over the instances. You can do elastic provisioning and destruction as you need to, operationally; this is where we say the architecture really relies on aggressive and agile operations. What that means, though, is that a single request may go through many different service instances to achieve its result, because there may be dozens of instances of services running, and you may want to know exactly which instances you touched, for debugging or traceability. That's what correlation IDs are really good for. It's quite common, as a request enters this world, to tag it with a correlation ID, and every one of the services that touches it logs that correlation ID. That allows the log aggregation tools to filter on that correlation ID and find every instance of a service that was hit as that request went through, which lets you track down nasty bugs that may exist only in certain kinds of scenarios. You'll recognize many of the things I'm talking about here; they're basically ideas and extensions of ideas from Release It!, the book by Michael Nygard. Things like circuit breakers and bulkheads and timeouts are all good practices in this world, from a DevOps standpoint, to make sure that things stay up and running as part of this microservices architecture.
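As an illustration (not from the talk), here is a minimal sketch of correlation-ID propagation: the edge service mints an ID, every service logs it, and every outbound call forwards it so log aggregation can reconstruct the path of a single request. The X-Correlation-Id header name, URLs, and service names are illustrative conventions, not prescribed by the talk.

```java
import java.net.URI;
import java.net.http.HttpRequest;
import java.util.Optional;
import java.util.UUID;
import java.util.logging.Logger;

// A sketch of correlation IDs: the edge service assigns an ID to each incoming
// request, every service logs it, and every outbound call forwards it, so log
// aggregation can reconstruct the full path of one request. The header name and
// URLs are illustrative conventions only.
public class CorrelationIdSketch {

    static final String HEADER = "X-Correlation-Id";
    static final Logger log = Logger.getLogger("checkout-service");

    // Reuse the incoming ID if a caller already assigned one; otherwise mint a new one.
    static String correlationIdFrom(Optional<String> incomingHeader) {
        return incomingHeader.orElseGet(() -> UUID.randomUUID().toString());
    }

    static HttpRequest downstreamCall(String correlationId) {
        // Every log line carries the correlation ID so Logstash/Kibana-style tools
        // can filter a single request across all services it touched.
        log.info("[" + correlationId + "] calling inventory service");
        return HttpRequest.newBuilder(URI.create("http://inventory.internal/reserve"))
                .header(HEADER, correlationId) // forward the ID to the next service
                .GET()
                .build();
    }

    public static void main(String[] args) {
        String id = correlationIdFrom(Optional.empty()); // pretend we're the edge service
        HttpRequest request = downstreamCall(id);
        System.out.println("would send: " + request.uri() + " with " + HEADER + "=" + id);
        // An HttpClient would actually dispatch the request in a real service.
    }
}
```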
So I said earlier this is a shared-nothing architecture, but then I backed off and said we really share two things. One of them is how all the services integrate with one another; here's the other one. Let's say you're in this architecture world and you've encouraged a polyglot world where different people can use different languages, but one development team creates a service, forgets to add all the monitoring hooks, and puts it live. Operationally, that thing is invisible; it is in fact falling through a black hole. So we need to make sure we have consistency across services. For example, we mentioned they all need to know how to integrate with one another, but they also need to know how to call other services: how do you discover other services and find out what instances are running? That needs to be consistent across all services. We also need metrics and monitoring and logging and all those things to be consistent; it becomes a real headache if we can't enforce consistency across all those services. So how can you make sure all those services share the same technical architecture characteristics? This is where service templates come in. Dropwizard is a good example of one of these; Spring Boot is another. What this allows you to do is take the shared technical architecture stuff you want present in each service and snap it into a template. The services don't share any of these things with each other at runtime, but the template does allow you to drive consistency across them. This is Sam's advice: consider service templates to make it easy to do the right thing. The template is the thing that's owned by a shared services team, and they make sure everybody uses the same version of the logging tool and the same version of the monitoring tool, and they enforce that level of consistency.

There's another common use here as well. One of the uses of the message bus in a service-oriented architecture is to reduce technical duplication. Let's say you have one service that talks JSON and another that wants serialized Java objects, and that pattern shows up four or five times in your architecture. In the microservices world, if you really wanted to be diligent about shared-nothing, you'd end up repeating that serialized-Java-to-JSON code in every service, which is not very efficient. What you can do instead is take that shared code and put it in the service template, which lets you share it across all services and have a single implementation. You're still coupling them a little bit, but it's really just at the utility level, not at any domain-class level. The restriction still holds: there's absolutely no coupling between domain classes here, but you can use a service template to reduce duplication for that kind of utility coupling, like transformations, or utilities like XML parsing and other mundane stuff that all of them need.
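As an illustration (not from the talk, and not Dropwizard or Spring Boot), here is a minimal plain-Java sketch of the service-template idea: a shared base class supplies consistent logging and a standard health endpoint, and each team's service adds only its domain-specific pieces. All class names, ports, and paths are illustrative assumptions.

```java
import com.sun.net.httpserver.HttpServer;
import java.net.InetSocketAddress;
import java.util.logging.Logger;

// A sketch of the service-template idea (what Dropwizard or Spring Boot provide in
// practice): every service gets the same logging setup and health endpoint "for free"
// by building on the template, so no service ends up invisible to monitoring.
// All class and path names here are illustrative.
public abstract class ServiceTemplate {

    protected final Logger log = Logger.getLogger(getClass().getSimpleName());

    // Each team implements only its own domain behavior.
    protected abstract String serviceName();
    protected abstract int port();

    public final void start() throws Exception {
        HttpServer server = HttpServer.create(new InetSocketAddress(port()), 0);

        // Consistent operational endpoint provided by the template, not by each team.
        server.createContext("/health", exchange -> {
            byte[] body = "{\"status\":\"UP\"}".getBytes();
            exchange.sendResponseHeaders(200, body.length);
            exchange.getResponseBody().write(body);
            exchange.close();
        });

        server.start();
        log.info(serviceName() + " started on port " + port()
                + " with standard logging and health check");
    }
}

// A team-owned service: only the domain-specific bits live here.
class QuoteService extends ServiceTemplate {
    @Override protected String serviceName() { return "quote-service"; }
    @Override protected int port() { return 8082; }

    public static void main(String[] args) throws Exception {
        new QuoteService().start();
    }
}
```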
So let's talk a bit about orchestration. Orchestration is how stuff calls other stuff, and there are two styles of coordination that are common in the microservices world: orchestration and choreography. Let's say we have microservices doing an insurance kind of workflow, where when we change an address we also need to recalculate any existing quotes, because the quote rate may be tied to the address; we need to update any claims they're participating in; and we need to notify the insured. There are at least two common ways to handle this coordination. One of them is called orchestration, where the front calling service becomes the orchestrator for all the downstream services. In this case, change-address says: okay, after I've changed the address, we need to recalculate the quote, we need to update claims, and I need to notify the insured that all these things have happened. So this first service becomes the orchestrator; it's called orchestration because it's like the conductor of an orchestra, and it handles all the downstream processing and any errors that pop up. The alternative version is much more event-driven, much more like an event-driven broker topology, where each one of these services knows the next thing that needs to happen. This is a much more publish-and-subscribe kind of environment, where services publish things onto message queues and other services listen on those queues and respond. In this world, change-address knows that the thing it needs to do after changing an address is recalculate the quote and notify the insured; recalculate-quote knows it needs to update claims, or, more likely, post a message that update-claims will pick up and respond to. But notice that in this world all three of these send a message to the notification service, because there's no coordinator to handle it, so you might have to build some logic in there to make sure the user doesn't get three distinct notifications; maybe you build logic that holds notifications for eight hours and, if more than one arrives, sends a single message. So coordination logic has to go in a different place here. What this really boils down to is mediator versus broker topology, which is quite common in the event-driven architecture world; you see those same kinds of patterns in the microservices world as well.
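As an illustration (not from the talk), here is a minimal sketch contrasting the two coordination styles just described, orchestration versus choreography, using an in-memory map of subscribers as a stand-in for a message broker. All service and event names are illustrative assumptions.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.function.Consumer;

// A sketch contrasting the two coordination styles. Orchestration: the calling
// service drives every downstream step itself. Choreography: services react to
// events on a (here, in-memory) bus and nobody owns the whole workflow.
public class CoordinationSketch {

    // --- Orchestration: change-address acts as the conductor ---------------------
    static void changeAddressOrchestrated(String customerId) {
        System.out.println("update address for " + customerId);
        System.out.println("  orchestrator -> recalculate quote");
        System.out.println("  orchestrator -> update claims");
        System.out.println("  orchestrator -> notify insured");
        // Error handling for every downstream step also lives right here.
    }

    // --- Choreography: services subscribe to events and react --------------------
    static final Map<String, List<Consumer<String>>> bus = new HashMap<>();

    static void subscribe(String event, Consumer<String> handler) {
        bus.computeIfAbsent(event, e -> new ArrayList<>()).add(handler);
    }

    static void publish(String event, String customerId) {
        bus.getOrDefault(event, List.of()).forEach(handler -> handler.accept(customerId));
    }

    public static void main(String[] args) {
        changeAddressOrchestrated("customer-7");

        // Each service wires itself up; no central coordinator exists.
        subscribe("address-changed", id -> {
            System.out.println("quote service: recalculate quote for " + id);
            publish("quote-recalculated", id);
        });
        subscribe("quote-recalculated", id -> System.out.println("claims service: update claims for " + id));
        subscribe("address-changed", id -> System.out.println("notification service: notify " + id));

        publish("address-changed", "customer-7");
    }
}
```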
So let's talk a bit about deployment, about deploying these microservices. One thing you don't want to do is force whoever's doing the deployment to understand all the different shapes of all your services, all the different implementation details and libraries and so on. Instead, you want a consistent deployment mechanism, whatever that happens to be. Sam's advice here is: abstract out the underlying platform details to provide a uniform deployment mechanism. "Uniform" may be something as cutting-edge and cool as Docker containers; that's become a really popular approach: we just deploy Docker containers, so put whatever you want in a Docker container. But it doesn't have to be that elaborate; it could be that you just run a Chef script and it builds up whatever you need to deploy your code. The James Lewis talk I referenced earlier describes a purely Java ecosystem, and his consistent deployment mechanism is that every service lives in an executable JAR file: you just go to that directory and run all the JAR files, they bootstrap themselves, each has an embedded web server, and each listens on a port. His consistent deployment mechanism is simply making sure they're all executable JAR files.

Another piece of advice: don't let changes build up. There's always a temptation, when you have a staging area and multiple deployment pipelines feeding into it, to let things start building up there: well, we could put that in, and let's go ahead and put this in, and this other thing is almost done... and eventually you get a bunch of things built up. You're much better off deploying these things one at a time, particularly if you have continuous deployment in place, and especially if you're doing continuous deployment plus feature toggles, because it's really beneficial to get your things into production as quickly as possible: you find out more quickly whether they're going to have any production problems. As long as it doesn't cause problems in your production ecosystem, it's really nice to get them out there quickly to make sure everything works correctly. So the advice is: don't let changes build up; release as soon as you can, preferably one at a time, and preferably in a continuous deployment kind of world.

Another common problem in this world is service discovery: if you have ten thousand services running, how can you find the services that do what you need? Fortunately there are tools that help you do this; in fact, all of the really complex things in the microservices world have tools to make them simpler. ZooKeeper is a general-purpose service discovery tool that is broader than just microservices, but Consul and etcd are both tools developed specifically for the microservice architecture space. They let you create both service-to-service registries and humane registries, so that a human can read them and find out what's going on among your thousand or ten thousand running services. There are also service visualization tools. Adrian Cockcroft, who created a bunch of the cool stuff at Netflix, has released a little open-source tool called Spigo, written in Google's Go language. It shows tiny simulations: for example, a web server and the database servers or application servers bound to it. It's a way to visualize moving infrastructure around and how things are connected to one another, which is quite common in this microservice space. As I mentioned, a lot of tool support is needed in this world, and there's a really fantastic resource that a bunch of my colleagues in India put together called devopsbookmarks.com. It lists open-source tools for all the things in the microservices space, and you'll notice the categories: we were just talking about service discovery; there are orchestration tools; provisioning is about machine provisioning; and there's security, metrics, logging, monitoring, process management, virtualization and containers, and continuous integration and delivery. You can filter these by platform, so it makes a really nice resource for finding out whether there's a tool out there that can solve some nasty problem; chances are pretty good that there is.

As I mentioned before, you really need a certain level of maturity before you can create this kind of architecture. It gives you some interesting benefits, but it carries a pretty high level of complexity, so make sure you understand that before you get into it. Simon Brown is a really good, well-known architect in the UK, and he makes an interesting observation, particularly for people who are migrating a monolith to a microservice architecture: if you can't build a monolith, what makes you think microservices are the answer? Because this is way more complicated than a monolith. In fact, there is a hybrid between these; I mentioned this morning the idea of a service-based architecture. You may be struck, as I am, by how starkly different service-oriented and microservice architectures are; they're almost opposites in many ways.
So what many people end up building now is a service-based architecture, particularly if they're moving from a monolith into a services world. There are typically three changes from the microservice style to the service-based style. One is service granularity: you typically have fatter services here, still with a domain perspective, but much larger in scope than microservices. You also tend to have shared database scope in a service-based world; this is the case where it's too difficult to untangle all the transactional contexts, so the service boundaries may simply become transactional context boundaries. It's also common here to sneak in a little bit of an integration hub, to varying degrees of sophistication. And there's a relatively easy way to migrate from one to the other: you start with a monolith, you identify the domain boundaries, you figure out how to separate them, you physically separate them, and then rinse and repeat, gradually migrating your way from a monolith into more of a service-based world. The goal is to partition along natural boundaries, and I've mentioned a lot of these along the way: you can go structural, with afferent and efferent coupling; you can go domain-centric, using domain-driven design; you can look at transactional contexts as a way to split things apart. The best advice, if you're doing a migration, is to build a small number of larger services first; don't start right away by going from a monolith to teeny tiny little microservices. There's a great case study about this at a real estate company, from one of my colleagues, that talks about how they split teams up in a microservice architecture; it's a good example of Conway's Law and of how continuous delivery impacts all of this. What you really want are teams that are loosely coupled to each other, delivering relatively independently into a common integration pipeline, without fearing breaking each other's builds. That's the goal of microservices plus these continuous delivery practices, all together.

That's all I have, and I think I am way over time; I was supposed to stop at 14:10, so I apologize for that. Thanks very much for coming, and I hope you enjoyed it.
Info
Channel: Voxxed Days Vienna
Views: 35,879
Keywords: Voxxed, Microservices, VoxxedDays, Devflix, Voxxed Days, Voxxed Days Vienna, Vienna, Voxxed Vienna, Architecture
Id: pjN7CaGPFB4
Length: 57min 22sec (3442 seconds)
Published: Thu Mar 17 2016