Creating event-driven microservices: the why, how and what by Andrew Schofield

Good afternoon. My name is Andrew Schofield, I come from IBM's Hursley Park laboratory in Hampshire, and I'm going to talk to you about creating event-driven microservices.

Most customers I talk to say they're really going to do a microservices architecture — microservices are the way — and everyone's talking about it, so I'm not going to go into a lot of depth about what they actually mean; I just want a summary to start with. It's essentially a technique for structuring your application as a collection of small services. They're all self-contained and loosely coupled, talking over a network, but the key thing is that they're independently deployable, scalable, maintainable and testable. This is a rather silly, primary-school kind of diagram, but I think it illustrates it quite well: you've got all these different things with their fingers in each other's objects in the data store, and what you're aiming for is the thing on the right, where all the green things are together, all the red things, all the purple things, with defined interfaces between them. The left is the monolithic world, which is hard to maintain, update and evolve; the microservices world is much, much cleaner. All of the cloud services we deploy in IBM are that kind of shape, and a lot of the products we're selling these days are too, because we tend to run on orchestration environments like Kubernetes, and we still use microservices so we don't end up with a massive great blob — and of course we're selling middleware. When you're building a business application it makes even more sense to do it like that. That style of building applications is really very common now.

What people tend to do when they start with microservices is build REST interfaces around everything, and then you end up with big long call chains and potentially very long latency as you go from one microservice to another. Even worse, if you're calling out to something which has a different maintenance window from the one you're used to in your own business, then you break the chain at the end. So it's a good idea to build some kind of buffer, some kind of asynchrony, into situations like that, just to make it all a bit more manageable.

Historically, business applications tended to be quite data-centric. They would put everything in databases, so the data was essentially at rest. You would back up your database; the priority was to make sure you always had a valid copy of your data, and everything was very locked down and transactional in nature — your business was essentially in your database. But that's a very static way of looking at things, and it doesn't really lend itself to responsive applications that give you information in the moment. Another way of looking at things is event-driven, where the data is all in flight — flying around between the pieces of software — and the first priority is that the system remains responsive. Of course this is much more chaotic and it scares the hell out of people, so you'll probably end up with a blend of the two, but if you want to build a responsive application you need some event-based stuff in there.

As an example, imagine you have a banking application and you call an API saying "submit this payment". It takes a bit of time to actually complete, and it's not a synchronous call — it just says "yes, it's submitted". So do you want to poll, asking "is the payment complete?" over and over, or do you want an event coming back to tell you when it's done?
Certainly I would rather have the latter: I want a notification on my phone as soon as the thing has happened, and I don't want it burning the battery while it polls to find out whether it's done. That's an example of a responsive interaction which is event-driven.

Here's a very high-level diagram of what a microservices application might look like if it combines APIs and event-driven concepts at the same time. The tooling and ecosystem around REST APIs and microservices is very well developed, with things like Istio providing service discovery, ways to do A/B testing, and that kind of thing. In comparison, the event-driven world is a bit less mature, but what you will typically have is something I've called here an event backbone. This is a messaging infrastructure in the middle of your application, and it provides a way of wiring together your microservices. You'll typically use the publish/subscribe model for messaging — I'll explain a bit more about what that means in a minute, but it's a loosely coupled way of doing communication. So in some situations you'll have APIs between your microservices, and in others a microservice might publish an event, other microservices are subscribed to that event, and they can respond as a result. You can do different messaging patterns: we've got one-to-two communication here, which is better than one service calling two REST APIs sequentially, because if you want to change your topology you just add a subscription — you don't need to modify the sender of the data.

In practice, I think you'll probably build a microservices application using a combination of styles. A good way to think about it is to have the big lumps of your application talking to each other with events, and then within each of those, maybe using REST APIs, because they're really nice and easy to reason about and easy to code. If you atomised everything into tiny little pieces and used messaging everywhere, I think you'd find it really rather difficult to maintain.

I said I would explain a little more about publish/subscribe. Messaging is a way of communicating between programs by sending messages: you have what I've called on the chart producers of messages and consumers of messages — the producer sends a message and the consumer receives it. There are essentially two very high-level models for this kind of communication. The first is called point-to-point. That's a JMS term, from the Java Message Service, a standard that's probably 15 years old, but that's how it divided things up. With point-to-point, you use a queue as the intermediate structure: the producer sends messages onto the queue, you end up with an ordered list, and the consumers take messages off, but each message is consumed by a single consumer. It's a good way of sharing a set of messages among consumers, but not really a good way of spreading information out to multiple things that want to process it. The alternative is called publish/subscribe. Instead of a queue in the middle, the thing is called a topic, and you can have multiple subscriptions to the same topic, with each subscription getting its own copy. In the first case every message is processed once; in this case, because I've got two subscriptions, it's processed twice — and I could have a thousand of them, or however many. It's a good way of fanning out information.
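To make the two models concrete, here's a minimal in-memory sketch of the difference between a queue (point-to-point) and a topic (publish/subscribe). The class and method names are invented for illustration — this is not any real messaging API, just the semantics in a few lines of Python.

```python
from collections import deque

class Queue:
    """Point-to-point: each message is consumed by exactly one consumer."""
    def __init__(self):
        self._messages = deque()
    def send(self, msg):
        self._messages.append(msg)
    def receive(self):
        # Whichever consumer calls receive() first takes the message.
        return self._messages.popleft() if self._messages else None

class Topic:
    """Publish/subscribe: every subscription gets its own copy of each message."""
    def __init__(self):
        self._subscriptions = {}
    def subscribe(self, name):
        self._subscriptions[name] = deque()
    def publish(self, msg):
        for sub in self._subscriptions.values():
            sub.append(msg)          # fan out: one copy per subscription
    def receive(self, name):
        sub = self._subscriptions[name]
        return sub.popleft() if sub else None

# Point-to-point: two messages, each seen by exactly one consumer.
q = Queue()
q.send("m1"); q.send("m2")
assert q.receive() == "m1" and q.receive() == "m2"

# Pub/sub: two subscriptions each get their own copy of the same message.
t = Topic()
t.subscribe("billing"); t.subscribe("audit")
t.publish("order-created")
assert t.receive("billing") == "order-created"
assert t.receive("audit") == "order-created"
```

Note how a new subscription can be added at any time without the publisher knowing or caring — that is the loose coupling the talk keeps coming back to.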
It's also the loosely coupled way of doing it, because I can add a subscription after I've built my application, add a new consumer of the information, and enhance the application without changing the producing side.

So how might I do this in practice? A very popular way of doing publish/subscribe messaging at the moment is an open-source project called Apache Kafka. I expect most people have heard of Apache Kafka even if they haven't actually used it. It provides publish/subscribe messaging at great scale, but it does a bit more than that: it has very good support for what's called stream processing — I'll talk more about that in a moment, but it's an optimised way of processing sequences of events — and it also has a very strong storage infrastructure. Most messaging systems don't have particularly good support for holding an enormous amount of data. They're very good at handling data while it's flying around, but once it's static and getting a bit older they don't really want it in there, because it needs indexing and that sort of thing. Kafka, by contrast, expects to have a lot of data behind it but doesn't know very much about it, so it can cheaply hold terabytes: essentially only the positions of the applications are stored, rather than some kind of index over the messages. That makes it quite cheap to keep a lot of data, and some of the patterns you might use for event-driven microservices benefit from the fact that you can have an enormous store out there. You can see that the terms I was using earlier, producer and consumer, apply to Kafka as well — these are general message-processing terms.

There are a few other things you can do with Kafka too. It has this idea of connectors, so if you want to get events into your system from an existing system, you can plug other systems into your Kafka and generate events that way — you don't have to rewrite the world to do it. And there's a high-level API for stream processing off to the side as well.

Let's start by introducing what the event backbone is for. In a very simple case, I've got a pair of microservices: one is a producer of events, and it publishes an event onto the event backbone; the second microservice is consuming — it has a subscription to the same topic, and it gets sent the events. We've now got these two loosely coupled and communicating, and because this is asynchronous, their execution rates can differ. You need to be able to process all of the messages in aggregate at roughly the same rate, but peaks and troughs in rate on the producing side don't necessarily affect the receiving microservice. You can also perform maintenance on the receiving one without affecting the sending one: you can stop it temporarily, upgrade it to the next version and restart it, and it can just catch up, because the messages are in the event backbone.

I mentioned that you would have connectors to other systems as well, so you may have other event sources. Potentially you're writing your new applications in a microservices style and you have your existing enterprise IT systems; if you can somehow feed the events out of those into the event backbone, then you've got data you can process for new applications. You could use, for instance, the change data capture pattern here, which takes changes out of a database and converts them into events — you get a sequence of create, update, delete and so on, events that the microservices can consume.
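The "stop it, upgrade it, restart it and it catches up" property comes from the backbone being an append-only log where each consumer just remembers its own position. A small sketch, with invented names standing in for a topic and a consumer offset:

```python
class Log:
    """An append-only log of records, standing in for a topic."""
    def __init__(self):
        self.records = []
    def append(self, record):
        self.records.append(record)

class Consumer:
    """Tracks its own position in the log, like a Kafka consumer offset."""
    def __init__(self, log):
        self.log = log
        self.position = 0
        self.processed = []
    def poll(self):
        # Process everything published since we last ran.
        while self.position < len(self.log.records):
            self.processed.append(self.log.records[self.position])
            self.position += 1

log = Log()
consumer = Consumer(log)
log.append("e1")
consumer.poll()                     # consumer is up to date
# Consumer goes down for maintenance; the producer keeps publishing.
log.append("e2"); log.append("e3")
consumer.poll()                     # on restart it simply catches up
assert consumer.processed == ["e1", "e2", "e3"]
```

Because the producer only ever appends and the consumer only ever advances its position, neither side needs the other to be running at the same moment.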
It's quite a common way of linking new and old. Another thing you can do is stream processing, and I'll show an example of that in a minute. You can think of this as a highly optimised application — maybe one topic in, one topic out — and it can itself be thought of as a microservice. It's more tightly tied to the technology used to build the event backbone, but it's still a nice self-contained piece of logic.

People quite often also use some kind of event store or event archive off the back of this. The idea is that the event backbone can hold a lot of data, but not without limit — you wouldn't really want to store an enormous amount of data in it, and it's probably cheaper to push the older events off to some kind of cold storage, an object store or something like that. Having a link out to that is quite a common way of doing it, and once you have it, you get a choice for historical processing: take the data from the event backbone, or read the event archive instead.

The final building block here is notifications. In the example I was talking about earlier, with my phone getting a notification when a banking payment finishes, this is exactly how it would work: you'd have an application over there subscribing to a topic, an event turns up on the topic saying the payment is complete, and it issues a notification using one of the mobile phone notification networks — APNs or something like that.

Right, let's look at a very simple example of stream processing. I was giving a talk once, and somebody in the audience from a banking background raised their eyebrows when I told this story — but they had just told me they'd invented a new REST API which needed to be event-driven and wasn't, so I did exactly the same back at them. They had ended up building polling into a brand new REST API, which seemed particularly insane to me.

Imagine you've got your existing transactional system, and you're going to build something that looks for suspicious transactions. You don't want to slow down the existing systems; you want some way of doing this off to the side, adding value without disturbing them. What you could do, for example, is take a feed of the transactions onto a topic in the event backbone, and then have a stream processing application which looks for suspicious payments. Maybe it's a repeated payment when you wouldn't normally expect one, or it's much bigger than you would normally expect, or it comes from some unusual country you don't normally pay — something suspicious. You put the rules into this application, maybe via configuration, so you can tailor them over time. Because it's a stream processing application, it's optimised for going from topic to topic, so it will use techniques such as pipelining and maybe batching of transactions to run more efficiently than a regular application. What we're doing is taking a topic with all the transactions and deriving another topic with only the suspicious ones. It's sort of duplicating data, but you're filtering on the way through, and if you do this with Kafka it can run really very fast and scale up in a very nice way — one of the reasons why Kafka is good as the event backbone. Now we've got the alerts, we need to tell the user about them.
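The filtering step itself is conceptually tiny: read every transaction from the wide topic, keep only the suspicious ones, publish them to a derived topic. Here's a hedged sketch in plain Python — the rules, thresholds and field names are invented placeholders for whatever the business would actually configure, and plain lists stand in for the topics.

```python
# Invented, configurable rules for what counts as "suspicious".
USUAL_COUNTRIES = {"GB", "FR"}
LARGE_AMOUNT = 10_000

def is_suspicious(txn):
    """A transaction is flagged if it's unusually large or from an unusual country."""
    return txn["amount"] > LARGE_AMOUNT or txn["country"] not in USUAL_COUNTRIES

# The wide topic: a feed of all transactions.
transactions_topic = [
    {"id": 1, "amount": 25,     "country": "GB"},
    {"id": 2, "amount": 50_000, "country": "GB"},  # unusually large
    {"id": 3, "amount": 40,     "country": "XX"},  # unusual country
]

# The derived, narrower topic: only the suspicious transactions survive.
suspicious_topic = [t for t in transactions_topic if is_suspicious(t)]
assert [t["id"] for t in suspicious_topic] == [2, 3]
```

A real deployment would express this as a continuously running topic-to-topic job (for instance with Kafka Streams) rather than a one-shot list comprehension, but the shape of the logic is the same.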
Again, you could use a stream processing application here. What this second one does is look at the preferences of the customer: if an alerting transaction turns up for me, for example, I'd like a push alert rather than an SMS. So it has a table of customer preferences off to the side, and it republishes the events onto one of three topics depending on the notification preferences. We've gone from a big wide topic to a narrower one, and now from that one topic to three narrower ones — and that's perfectly fine; a stream processing application can do that kind of thing perfectly happily. Then we have some applications that subscribe to those topics, and they either call an SMS API, send an email, or call the push notification service for the mobile provider.

So that's a very simple way you might build a stream-processing microservices application — because these are still microservices: they've got a well-defined purpose, and their interface is defined by the topic. The format of the messages on the topic is going to be constant for every message on there, and that's a bit like the interface to a REST API.
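That preference-based routing step — one alerts topic in, three notification topics out — can be sketched in the same spirit. The preferences table, channel names and default are all invented for illustration; plain lists again stand in for topics.

```python
# Invented customer-preferences table (the "table off to the side").
preferences = {"alice": "push", "bob": "sms", "carol": "email"}

# The three narrower output topics.
topics = {"push": [], "sms": [], "email": []}

# The incoming alerts topic.
alerts_topic = [
    {"customer": "alice", "text": "suspicious payment"},
    {"customer": "bob",   "text": "suspicious payment"},
]

for alert in alerts_topic:
    # Look up the customer's preferred channel; fall back to email (an
    # assumed default) for customers with no recorded preference.
    channel = preferences.get(alert["customer"], "email")
    topics[channel].append(alert)

assert len(topics["push"]) == 1
assert len(topics["sms"]) == 1
assert topics["email"] == []
```

Each downstream notifier then only subscribes to the one topic it cares about, so adding a fourth channel later means adding a topic and a subscriber, not changing this router's consumers.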
Actually, jumping into all of this for the first time is perhaps a little bit daunting — what on earth should the events be, for example? That's a pretty good question, and there are techniques for answering it. One of them is called event storming. I've got a link to an IBM web page on there; that doesn't mean you need to pay IBM for this — it's something you do yourself, it just happens we've written up how to do it quite well, so it's a good place to go to understand what it means. Essentially it's a very social activity: you might get six or eight people into a small conference room, take out all the chairs so they all have to stand up and talk to each other, and give them a bunch of coloured Post-it notes — the colours matter. You get everyone to interact and write, because this should be an opportunity for people to get all their thoughts out there in a non-confrontational way and try to build a view of the system. What you're trying to work out is what the events in the system are, who the actors are, which commands those actors are going to perform, and which data they need. You might, for instance, line up all the events in order — first this happens, then this, then that — and build a time-based picture of the way the system works, and eventually tease out how it all hangs together. You may well end up with far more events than you would expect, and far too many business objects, so you might then go back in the other direction and aggregate them — you don't necessarily want a million different things when a thousand would be more efficient. You tend to reallocate things into more manageable chunks after you've done that. At the end of the day, what you've got is a set of potential microservices with the events they want to respond to, and then you can start implementing little pieces of the infrastructure — you could do a big-bang deployment, but it's probably rather more sensible to do it in a measured way to start with.
Here's an example which is a little more realistic — you can see the code if you follow the URL at the bottom; there is actually running code behind this, it's just an example of how the application would work. The idea is that this is a shipping-container system: you've got boats that you want to fill up with orders and send across the seas to deliver things to the receiving customers. Of course you want to fill up a boat, and sometimes you might try to put things on which won't fit, that kind of thing. So it is event-based, and there's a set of microservices in here as well: for example there's an orders one, a voyages one and a containers one, and this is the flow of events through the system — the way the orders progress.

What might this look like in practice? For example, I've got four microservices as a result of that, and I've coloured them differently to make it easy to see what's going on. A couple of them have REST APIs, because these may well have a user interface on the front that needs to be poked when somebody presses a button — an API is quite a good way to start that, though it may result in a flood of events from that point onwards. They need some kind of storage behind them, potentially; that might just be in memory, but it's quite common that you actually have some kind of per-microservice store behind them as well. And if you're doing it in an event-driven way, I imagine you'd have one topic for each of the microservices too. It might be more complicated than that, but in the simplest case I think you'd have a microservice, some kind of storage, and some kind of topic behind it. There may also be more derived ones: for example, maybe we've got some piece of analytics software which looks at ships and containers, decides "this just doesn't make sense", and raises a problem on another topic. That's an example of streaming analytics subscribing to both of those topics, providing some kind of join capability, and raising alerts.

This is all fine, but I'm sure you can see there might be some complexity to breaking up the world like this. You definitely want loose coupling, because you want to be able to stop, start and evolve things independently — that's a really nice capability, and publish/subscribe messaging is a really good way to get it — but there are some more complicated things to achieve with this model.

One of them is data consistency. In a microservices application you often find that the data is handled in an eventually consistent way. By that I mean you make an update to a piece of data and it trickles through the system gradually; eventually everything is consistent, but you have to wait for it to happen. That model is certainly true of a lot of the cloud-first data stores: they replicate asynchronously between nodes, giving you availability and performance at the cost of some data integrity. If you're used to monolithic applications, they tend to be properly transactional: everything is done in some kind of transaction monitor or application server, you tie your databases and queues together, you do two-phase commit, and everything is totally, entirely consistent. The problem is that it doesn't really scale — and you've of course built a monolith. When you start breaking everything up, you do end up with data consistency issues, so how are you going to manage consistency in this world?

The other one is efficient queries. I've published all these events onto a topic — how do I query a piece of data out of this? If you use an entirely event-based approach, you start at the front of the topic and hop along it until you find the data, and the more data goes in, the less efficient that gets.
So you need other ways to make this sensible. You could invent everything from scratch — good luck with that; it's going to take quite a bit of time and you're going to make lots of missteps — or you could use some patterns people have already tried, learn them well, and apply them properly. I'm going to go through a few of the patterns we see and recommend, just to illustrate ways you could do this.

The first one — and this is not necessarily just for event-driven microservices — is database per service. I sometimes see people who say "we're going to a microservices architecture", and of course everyone is still using the central database, so they haven't gone to microservices at all: all they've done is introduce a bit more chaos, and everyone is still dependent on the database schema. If I modify the schema, I modify every application at the same time, which is just ridiculous — you haven't really made any progress at that point. A better way is to give each microservice its own store of data. It could be a database of its own, it could be something like MongoDB, it could just be some kind of in-memory table — but still something you can query. If you're using Kafka, there's another option: there's a concept in Kafka called stream-table duality, which essentially means that if you have a topic carrying key-value data, you can treat it like a table and run queries against it. If you're an entirely Kafka-based shop, then using Kafka Streams and KTables, as they're called, is potentially one way to implement a little store like this.

Doing this is fine, but what you've gained is "I only have to change one microservice when I change this schema" — which is very good — while leaving the question of how to get consistency across these things, because I've taken my own copy of the data, and if I make a change in this one I need a change in that one. That's one of the problems here, and the saga pattern is one way you could potentially address it.

Kafka is actually helpful in another way in this environment. If you're trying to give each microservice its own derived copy of some central database, Kafka can do the filtering and data-replication job for you, using the change data capture pattern. We've got what's called here a source change log — this could be the recovery log of a relational database, and some other stores have ways of publishing out changes as they're made — and it's read by a messaging producer application, which publishes the events onto the event backbone. I've called this a topic partition — it will become clear later why — but it's essentially a topic, and these are key-value messages now: the key is equivalent to a primary key in a database table, and the value is all of the row data. They're written in order. So this is the first piece of data for key A, and it's superseded by A2 — that's not a delta; A1 is perhaps a creation and A2 an update that replaces it. For key B, I've got an original value and then a null value, which is equivalent to the deletion of a row. And for key C, an original value and then a new value.

Then I have a consuming application down there in orange, and what it's doing, of course, is reading all of these events and building its own derived table. Maybe there are 15 columns in the table over here and we only care about two of them over there, so we crunch the data and write it into our own local store, and we've got a private copy of the data used by this orange microservice alone. Of course it's asynchronous: the source is being updated transactionally, the changes go through the event backbone and out into the consumer, which eventually picks up the data. So you have an eventually consistent store, but one with the property that it's independent — its format isn't going to be perturbed by people changing things upstream. A couple of other things about this: it keeps up to date automatically — it's an evolving view of the store, because when the source changes again, another event turns up and is consumed by the consumer application, which modifies its copy. And because it's pub/sub, I could have ten of these things off the back, each taking their own copy and each doing their own processing to make something subtly different but tailored to that particular microservice.

This is fine, but of course this thing grows without limit, so it will soon take up all the disks in the universe, and we don't really want that.
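The consuming side of that change-data-capture flow can be sketched directly: replay the key-value change events in order, treat a null value as a row deletion, and keep only the columns this microservice cares about. The keys, rows and column choice are invented for illustration.

```python
# The change topic: key-value events in write order. None = tombstone/delete.
change_events = [
    ("A", {"name": "anna", "city": "york",  "age": 30}),  # A1: create
    ("A", {"name": "anna", "city": "leeds", "age": 31}),  # A2: supersedes A1
    ("B", {"name": "ben",  "city": "bath",  "age": 40}),  # B1: create
    ("B", None),                                          # B2: delete row B
    ("C", {"name": "cara", "city": "hull",  "age": 50}),  # C1: create
    ("C", {"name": "cara", "city": "hull",  "age": 51}),  # C2: update
]

# The consumer's private, derived table.
derived_table = {}
for key, value in change_events:
    if value is None:
        derived_table.pop(key, None)   # a null value removes the row
    else:
        # Keep only the two columns this microservice actually needs.
        derived_table[key] = {"name": value["name"], "age": value["age"]}

assert derived_table == {
    "A": {"name": "anna", "age": 31},
    "C": {"name": "cara", "age": 51},
}
```

Because the loop is just "latest value per key wins", re-running it over new events keeps the derived table up to date, and a second consumer can run the same loop with a different column selection over the same topic.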
longer need a one once a two is written so you can just eliminate the others so you end up with a kind of a topic with gaps in it which exactly matches to the table over there so this is kind of an automatic thing that CAF can do if the number of keys gets ridiculously large like in the millions it's not suitable anymore but if you're in you know a hundred thousand kind of range then it's entirely acceptable to do that okay right so another pattern it's called sagas so the idea of a saga comes from I think probably the 1990s it's a kind of originally for kind of more exotic transaction processing so the idea is you've got a sequence of transactions that provide some kind of bigger higher level business purpose okay and each of the pieces is a separate transaction that is entirely atomic and you chain them together and if you get a failure in it you kind of run compensation logic to go backwards and undo the effects of the first thing so you're building something which maybe lasts rather longer and rather more complicated and this kind of idea has been you know kind of borrowed by people building event-driven micro services so what you're trying to do is you're trying to orchestrate a kind of a set of multiple steps across microservices so that eventually you complete the entire job alright and of course this is eventually consistent again and in some ways kind of a little bit scary so I'll just kind of explain what the diagram is showing and then and then I can kind of explain how people actually achieve it in practice so the orders microservice has published a message on - on to its topic which is message number one and the containers fleet and voyagers micro-services are all subscribing to that topic and so it might be for example or to create it and the other three micro-services leap into life and they go well I've got to do some work now so they go off and do it so the containers micro service doesn't does some work and then when it's finished it it publishes 
a message saying "I've done mine", and then of course the fleet one and the voyages one do the same, and the orders microservice is subscribing to all of these. When it gets a message from each of them, it makes a state change to say: now the order is complete. So that's how it works in the happy path. Now if something goes horribly wrong then it gets more difficult; you might need to back out, so in practice you might put timeouts or something like that in there as well. Now people will typically, not always, but typically build this orchestration logic directly into the microservices. This is a little bit complicated and perhaps a little bit fragile, but it is still the way that people tend to do it. So they will build a subscription to the containers microservice into the orders one, and then they'll say: when this fires an event, I take some action in here. Another way to do it is to use some kind of workflow engine. There are workflow engines built on standards such as BPMN and BPEL, for example, where you're describing a multi-step process in some kind of specification document, and then each of the pieces would be one of these steps. That's another way of doing it, but I don't see people doing that in microservices typically; if they already have skills in that area then maybe they would, but otherwise they tend to build the logic straight into their applications instead. So another pattern which people quite commonly use, or, it's probably more correct to say, talk about quite commonly, is called event sourcing. This is a simple example of event sourcing. What you do is you essentially say: every change to one of my objects becomes an event, and I write it into the event backbone. So then I end up with a history, an audit log of my entire business, in terms of the events which happened, and then if I replay all of the
events in order, starting from the beginning of time, of course I have recreated the state of my business. Now the difficulty is, of course, the longer the business exists, the longer it takes to replay them, so it grows without limit. There are complexities in doing it in practice because of the space it takes and that kind of thing, but as a model it's really nice and very clean. So for example, I've got the user here calling an API which is handled by a command handler, and the command handler then publishes events: order 1, which is created, then updated, then complete; order 2, which is created and then cancelled; and order 3, which is created, updated and then complete. And then I can have some other component, maybe part of the same microservice, which is processing these events and rolling them up somehow into a form that is easier to query. Because of course, if I'm just using the topic and I ask "what's the status of order 1?", I have to read all the events; what I'd really like is to roll them up into something more consumable. Now, getting atomicity in here is a little bit tricky, interesting I suppose. What people typically do, and this is a convention that the application designer typically chooses, the normal way to do it, is to say that publishing the event is an atomic action. So each event is one unit of atomicity, and then the downstream processing of it doesn't need to be atomic, because essentially the processing performed by the query handler is idempotent: it can do it again and again and again. So it can be reading these events, and if it crashes and then restarts, its position is still back here, so it can have another go, and it only actually moves its restart position forward when it's sure
it's completed what was over there. And in the unlikely event that you lose that store, you can of course reprocess all the events at that point. Now, I mentioned earlier this idea of having an event archive or an event store off to the side of the backbone. Well, that would be one situation where potentially you want to keep the entire audit log forever, so putting it onto cold storage and only keeping the more recent events in the event backbone is probably a good way of doing this. Another way people handle this is periodically they take a checkpoint: they roll up the current state and write that as an event, and then you can throw away all of the previous ones. Of course, if you wanted an audit log you've lost that ability, but you have gained some efficiency as a result. Now, this is another situation where the way that Kafka works is really a very good fit. It has this way of being able to store data for a long time, so that's nice, but it also has a way of distributing the workload across multiple applications that fits very well with this. So, excuse me a second, I've just realised I've got my animations slightly wrong; I'll fix that maybe. What I've got here is three separate partitions of a topic. These are one topic, but with three separate lists of messages; this is just one of the things that Kafka can do. This enables you to run a topic across three servers simultaneously and maintain order. What you may notice here is that order number 3, all of these entries, are in the same partition, and we've got no crossover where entries for one order end up spread across partitions. What that means is that if I process one of these lists, I'm going to get a consistent view of a particular order, and if I process another one then I get a consistent view of those orders, and I can spread out the workload across multiple consumers
in order to make it go faster. So we've got ordering and scaling because of the way that Kafka works. For example, I would get the top consumer to process that partition, the second consumer to process that one, and the third consumer to process that one. And because this is publish/subscribe again, I can actually have two different processing sets, called consumer groups, each processing every event. This might be two different microservices: it could be the orders one and the fleet one, for example. And there's no requirement here that I have exactly the same number of consumers as I have partitions; if I have fewer consumers than partitions, I just get one consumer processing multiple of them. But I've got workload balancing and ordering at the same time, which is a really nice feature. So the final pattern I'm going to talk about really rather quickly, because this one is hideously complicated to achieve in practice, and it's definitely true that more people talk about it than achieve it, is called CQRS. The idea is essentially that you divide the application, if it's, you know, a CRUD application: you do the mutating operations, the create, the update and the delete, on one side, which is called the write model, and you do the reads on the other side, which is called the read model. So the write side is optimised for writing very quickly, and the read-optimised store may be really quite different. It probably has an index on it, for example, and it may actually join pieces of data to provide some kind of query across various entities, which answers a business question rather more efficiently. I've seen somebody do this, not exactly like this, but definitely build a store off to the side which was for running their kind of once-in-a-blue-moon queries. But in practice this is really quite difficult to achieve; maintaining consistency, or sufficient
consistency, between these two things is really pretty difficult. So this is definitely one where people go: well, yeah, this would be really cool, but it's really a little bit hard. So I think what I've done is I've shown you that event-driven microservices give you an effective way to build loosely coupled applications, just by making the communication between the big lumps asynchronous. This means that you get the ability to perform maintenance on these things independently and scale them independently, and you get all the benefits of microservices, and the fact that it's asynchronous means that it's loosely coupled. Now, if you wanted to actually do this in practice, then use a technique like event storming to get started. For one thing, it's going to be a fun morning: it counts as work, you get paid for it, but you're standing up behaving like children, so this is a good thing in my book. And then, if you were to try doing this, use patterns like event sourcing and sagas to build on things people have already found to be effective, and make it manageable rather than chaotic. So, are there any questions? I find it quite hard to see past the lights, actually. I don't know how this works: do we have a microphone that people wander around with, or do I just have to walk around and listen? Okay, right, what's the question over there? Okay, all right, thank you. [Applause] [Music]
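The log-compaction idea described near the start, keeping only the latest event per key so the topic ends up "with gaps in it", can be sketched in plain Python. This is not real Kafka code; the topic is just a list of (key, value) events, and `compact` is a hypothetical stand-in for what Kafka's compaction does automatically.

```python
# Sketch of log compaction: a topic is a list of keyed events, and
# compaction keeps only the latest event for each key, leaving gaps
# at the offsets of the superseded events.

def compact(topic):
    """Return the compacted topic: the last event per key, in offset order."""
    latest = {}
    for offset, (key, value) in enumerate(topic):
        latest[key] = (offset, value)  # later writes for a key overwrite earlier ones
    # Rebuild in offset order so the result still reads like a log with gaps
    return [(key, value)
            for key, (offset, value) in sorted(latest.items(), key=lambda kv: kv[1][0])]

topic = [("order-1", "created"), ("order-2", "created"),
         ("order-1", "updated"), ("order-1", "complete")]
print(compact(topic))  # [('order-2', 'created'), ('order-1', 'complete')]
```

Exactly as the talk says, this works like a key-value table: the compacted topic holds one row per key, so it stays small as long as the number of distinct keys stays bounded.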
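The saga pattern from the shipping example, where the orders service drives steps in the containers, fleet and voyages services and runs compensation logic backwards on failure, can be simulated in memory. The function names (`reserve_containers`, `book_voyage`, etc.) are invented for illustration; a real system would publish and subscribe to events rather than call functions directly.

```python
# In-memory saga sketch: each step is an atomic action with a matching
# compensation; on failure, completed steps are undone in reverse order.

log = []

def reserve_containers(order): log.append(f"containers reserved: {order}")
def release_containers(order): log.append(f"containers released: {order}")
def assign_fleet(order):       log.append(f"fleet assigned: {order}")
def unassign_fleet(order):     log.append(f"fleet unassigned: {order}")
def book_voyage(order):        raise RuntimeError("no voyage available")

def run_saga(order, steps):
    completed = []
    for action, compensate in steps:
        try:
            action(order)
        except Exception:
            for undo in reversed(completed):  # compensate backwards
                undo(order)
            return "cancelled"
        completed.append(compensate)
    return "complete"

steps = [(reserve_containers, release_containers),
         (assign_fleet, unassign_fleet),
         (book_voyage, lambda order: None)]
print(run_saga("order-1", steps))  # cancelled
print(log[-1])                     # containers released: order-1
```

The happy path would return "complete" once every step succeeds; here the voyage step fails, so the fleet assignment and then the container reservation are undone, which is the eventually-consistent back-out the talk describes.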
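The event-sourcing replay and the idempotent query-side projection can be sketched the same way. The event list below mirrors the talk's example (orders 1 to 3); `replay` is a made-up name for the projection that rolls events up into a queryable view and returns a restart position, so a crashed consumer can safely reprocess from its last checkpoint.

```python
# Event-sourcing sketch: every change is an event on a topic, and
# replaying the events in order recreates current state. The projection
# is idempotent, so re-running it after a crash is harmless.

events = [("order-1", "created"), ("order-1", "updated"), ("order-1", "complete"),
          ("order-2", "created"), ("order-2", "cancelled"),
          ("order-3", "created"), ("order-3", "updated"), ("order-3", "complete")]

def replay(events, view=None, start=0):
    """Roll events up into latest-status-per-order; return (view, restart position)."""
    view = {} if view is None else view
    for key, status in events[start:]:
        view[key] = status            # applying the same event twice is a no-op
    return view, len(events)

view, position = replay(events)
print(view["order-1"], position)      # complete 8

# After a crash we re-read from an old checkpoint; reprocessing changes nothing
view, position = replay(events, view, start=3)
print(view["order-2"])                # cancelled
```

This is why the talk says only publishing the event needs to be atomic: the downstream query handler can always have "another go" from its last known position.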
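The partition-ordering property, where all events for one order land in the same partition so each partition can be processed independently by one consumer in a group, depends only on routing by key. The hash below is a toy stand-in (real Kafka hashes the key bytes with murmur2), but any stable hash of the key gives the same guarantee.

```python
# Partitioning sketch: events with the same key always hash to the same
# partition, preserving per-key order while spreading partitions across
# the consumers in a group.

def partition_for(key, num_partitions):
    # Toy stable hash; real Kafka uses murmur2 over the key bytes
    return sum(key.encode()) % num_partitions

topic = [[] for _ in range(3)]  # one topic, three partitions
for order in ["order-1", "order-2", "order-3", "order-1", "order-3"]:
    topic[partition_for(order, 3)].append(order)

# Every event for a given order sits in one partition, in publish order
for p, msgs in enumerate(topic):
    print(p, msgs)
```

A consumer group would assign each of the three partitions to a consumer; with fewer consumers than partitions, one consumer simply takes several partitions, which is the workload balancing the talk mentions.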
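Finally, the CQRS split can be sketched as two tiny classes: a write model that only appends events, and a read model kept in a query-optimised shape (here, an index by status). The class and method names are invented for illustration; the propagation loop is where the eventual consistency, which the talk flags as the hard part, lives.

```python
# CQRS sketch: commands append to the write model's event log; a separate
# read model maintains an indexed view. The view is updated by applying
# events after the fact, so it is only eventually consistent.

from collections import defaultdict

class WriteModel:
    def __init__(self):
        self.log = []
    def handle(self, order_id, command):
        self.log.append((order_id, command))  # e.g. ("order-1", "created")

class ReadModel:
    def __init__(self):
        self.status = {}                      # order -> latest status
        self.by_status = defaultdict(set)     # the "index" for cheap queries
    def apply(self, order_id, event):
        if order_id in self.status:
            self.by_status[self.status[order_id]].discard(order_id)
        self.status[order_id] = event
        self.by_status[event].add(order_id)

write, read = WriteModel(), ReadModel()
for order_id, event in [("order-1", "created"), ("order-2", "created"),
                        ("order-1", "complete")]:
    write.handle(order_id, event)
for order_id, event in write.log:             # propagation: eventually consistent
    read.apply(order_id, event)
print(sorted(read.by_status["created"]))      # ['order-2']
```

The query "which orders are still in created state?" is answered from the index without scanning the log, which is exactly the kind of read-side optimisation the talk describes.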
Info
Channel: Devoxx
Keywords: DevoxxUK, DevoxxUK2019
Id: ksRCq0BJef8
Length: 39min 1sec (2341 seconds)
Published: Thu May 16 2019