Mistakes made adopting event sourcing (and how we recovered) - Nat Pryce - DDD Europe 2020

Captions
[Music] Thank you. Hello. What I'm going to be talking about today is an experience report about the adoption of event sourcing as an architecture in the company I worked for, and the mistakes we made. We were building a brand new system, we adopted event sourcing, and we were new to it. We made a number of mistakes, some of them really obvious, some of them less so. What I hope is that I can share those mistakes today, so that those of you who are new to event sourcing as well can learn from our failures and avoid them. But we were able to recover from those mistakes quite easily, and reflecting on that uncovered some useful lessons about our successes too, so hopefully you'll be able to learn from those as well. It's not all bad news.

I was working as an architect and technical lead of a business area in a company that does scientific publishing, and we were building new systems for the editorial workflows and peer review of open-access science content. The editorial process can be summarized at a very high level like this. Researchers do some research, in academia or in industrial research laboratories, uncover some new scientific learnings, and want to publish them to the wider scientific community. They write up their findings and submit the article and data to a publisher, such as the one I was working for. The publisher decides whether it is fit for their journal and finds peer reviewers, also in the scientific community, who review the science and decide what changes need to be made to it. The article goes through rounds of revision until the peer reviewers agree that the science is acceptable. It then gets typeset and published, researchers read it, and it feeds back into the ongoing scientific discourse. Or maybe it's not fit for publication and gets rejected; or maybe it's not fit for that particular journal, in which case it gets transferred to another journal and there's the opportunity to submit to that one.

It turns out that not all scientists are always entirely honest, and so a significant part of the publishing process is to detect scientific fraud. Early on, during submission, we might try to catch plagiarism or double submissions, where a paper has already been submitted to another journal. But some scientists form more sophisticated fraud rings, where they agree to review each other's papers favorably, and so on. So the data that comes out of our editorial systems is fed into another part of the business that performs what they call research integrity analysis, looking at the patterns of submissions and reviews and trying to detect potentially fraudulent reviewer and author rings.

So that was our system. The bits in dark blue on the diagram were the areas of the business our system was responsible for supporting, and we had to feed data into that research integrity analysis and downstream to publishing. Now, this is a very linear view of what publishing is, but actually these activities happen many times, in multiple feedback cycles, and people can provide data at different points in time. The legacy systems we had took a very linear view, and that was found to be quite a limiting factor.
So we wanted to build a system that would not be so linear, where the people using it wouldn't have to maintain a lot of data outside the system, in spreadsheets and the like, just to have it at hand when the system finally allowed them to enter it. We tried different ways of presenting the business and technical solution, different physical and cognitive metaphors, to the business, to product, to technology, in order to create different ways of thinking about the problem and of envisaging how the system should behave. The one we found very useful when talking with product owners and product directors was the idea of containment and facing. The core model of our bounded context, which is basically the state of articles going through peer review as they get moved from journal to journal and eventually published, sits at the core of the system, and access to it and changes to it are mediated by peripheral applications. Those peripheral applications face out to different segments of our user base and have user experiences tailored to their specific needs and constraints. A journal editor who is looking at hundreds of manuscripts at once needs a very different view from an author or reviewer who comes to our systems maybe once or twice a year at most, needs a lot more guidance, and wants a lot less data in their face at once.

Each of these outward-facing components is used by its users to input data, acting as a scratchpad, and when the data has been collected it gets submitted to the core as a command. The command then updates the state of the articles and can invoke other applications to trigger workflows that need to be done by other people involved in the overall process. Each of these systems could fail independently or be taken down for operational reasons, each had its own database, and we were using REST to integrate them.

We also wanted to attach provenance to the activities being performed by these different users. That is, we wanted to know who was doing it, how we knew who they were, where they were when they did it, when they did it, and so on. And we wanted to record this over time, so that we could capture this data and look for suspicious patterns of provenance in the changes that happened to the articles being reviewed.

That led us to picking an event-sourced architecture. Each of the peripheral applications was used to capture information and then submit a command to the core of the system. The command would be executed, and what happened would be recorded as events. The current state of an article would always be rehydrated from the history of events applied to that article, and each of the events could have the provenance associated with it, so that we could run analysis on it at a later date. We thought the event-sourcing architecture would force us to be honest about the current state of the article and how it related to the history it had gone through. We couldn't end up with an audit log that could not be used to recreate the current state of the article; we would always have to ensure that the event history was an accurate reflection of the current state.
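The core of that idea is easy to show in code. Below is a minimal Kotlin sketch of rehydration with provenance attached to every event; the event types, fields, and names are invented for illustration, and the real model was much richer than this.

```kotlin
import java.time.Instant

// Hypothetical provenance: who did it, from where, and when.
data class Provenance(val userId: String, val ipAddress: String, val occurredAt: Instant)

// A few illustrative events; every recorded fact carries its provenance.
sealed interface ArticleEvent { val provenance: Provenance }
data class Submitted(val title: String, override val provenance: Provenance) : ArticleEvent
data class TitleChanged(val newTitle: String, override val provenance: Provenance) : ArticleEvent
data class TransferredToJournal(val journalId: String, override val provenance: Provenance) : ArticleEvent

data class Article(val title: String? = null, val journalId: String? = null)

// The current state is never the source of truth: it is always a left fold
// over the recorded history, so the log cannot drift out of sync with the state.
fun rehydrate(history: List<ArticleEvent>): Article =
    history.fold(Article()) { article, event ->
        when (event) {
            is Submitted -> article.copy(title = event.title)
            is TitleChanged -> article.copy(title = event.newTitle)
            is TransferredToJournal -> article.copy(journalId = event.journalId)
        }
    }
```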
We picked very boring technology. In fact, "boring technology" is a meme, and there's a website that campaigns for it. We were building on Cloud Foundry as a cloud platform, with an internal cloud and a cloud in GCP, and a common layer above that. We were writing on the JVM in Kotlin, probably our most exciting technology choice, building very lightweight services using http4k, a framework that allows a very functional style of writing HTTP. And we were using Postgres to store our data. Postgres, rather than a dedicated event store, because we had a lot of operational expertise in-house with running it: point-in-time recovery, high availability, just the general work of keeping Postgres up and running effectively.

We were new to writing an event-sourced system. We did read papers and books, watched videos, and read experience reports, but we still made some rookie mistakes. So let's go through some of the mistakes we made.

We were seduced by the dream of scalability and eventual consistency. I wrote an HTTP service for storing and retrieving events, which meant that although each event was being put into the store transactionally, anything that wanted to process events was not transactional, and that had obvious failure modes: our processing of an entity might, for example, create duplicate events. A stupid mistake. Our system didn't really need huge volume and didn't really need eventual consistency; our real constraints were trustworthiness, robustness, and consistency. So this was an architectural error that caused pain later.

Our command processors also maintained a current view of each entity. The current state of the entity was calculated from the events and persisted in the database, which allowed the application to select and filter the sets of articles users needed to look at: an editor of one journal would want to see which articles needed processing for their journal, and an editor of a different journal would want to see theirs, not both. So we had a relational model of the current state of articles in play, allowing easy selection, filtering, and prioritization of articles. However, rather than creating that snapshot projection from events as they were read, as would be the canonical architecture, the command processors maintained that current view at the same time as recording the events. That didn't work very well. The current state had to be relational, while the events were blobs of JSON in a table in Postgres, and the migration effort of keeping those two different models up to date became an overhead during development.
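To make that second mistake concrete, here is roughly the shape the command processors had, as a hedged sketch reusing the hypothetical types from the previous snippet. The problem is the two independent writes: the event and the hand-maintained relational row had to be kept structurally in sync by hand.

```kotlin
// Minimal ports, just enough to show the shape of the mistake.
interface EventStore { fun append(articleId: String, event: ArticleEvent) }
interface CurrentStateTable { fun updateTitle(articleId: String, newTitle: String) }

class ChangeTitleHandler(
    private val events: EventStore,
    private val currentState: CurrentStateTable,
) {
    fun handle(articleId: String, newTitle: String, provenance: Provenance) {
        // Write 1: the event, a JSON blob in an events table.
        events.append(articleId, TitleChanged(newTitle, provenance))
        // Write 2: the relational snapshot, maintained by the same handler.
        // Every model change now needs a migration in two places; canonically,
        // this row would be a projection derived from write 1, not a sibling write.
        currentState.updateTitle(articleId, newTitle)
    }
}
```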
I think one of our most significant mistakes relates back to what was said during the keynote: we got confused, many times, between event-driven and event-sourced. In an event-driven architecture, events are quite transient; they're passed around between components, code reacts to them, makes state changes, and invokes behavior, maybe in systems outside our transaction boundaries, like email sending. In an event-sourced architecture, the current state of an entity in your bounded context is derived from the history of all the events that have happened to it in the past. Quite different. But if you start trying to trigger behavior from the events in the event history store, which is the mistake we made, you start finding that you have to know what you're trying to do when you process those events: are you meant to be reacting to them in order to invoke some behavior, or are you just reconstituting the current state of the entity? We noticed that code had flags to say "I'm currently reconstituting an entity" or "I'm live", the domain logic started getting confused, and it started getting full of information about our technical choices, about whether we had an event-sourced architecture or not. That became more and more difficult, but luckily it was quite easy to spot.
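That flag smell looked something like the following sketch (invented names): the same stored events were used both to rebuild state and to trigger side effects, so a technical mode flag leaked into the domain logic.

```kotlin
interface EmailGateway { fun send(to: String) }
data class ReviewerInvited(val reviewerEmail: String)

class ReviewerInvitations(private val email: EmailGateway) {
    // Technical mode flag leaking into domain code: the logic has to know
    // whether it is replaying history or handling a live change.
    var reconstituting = false

    fun apply(event: ReviewerInvited) {
        if (!reconstituting) {
            email.send(event.reviewerEmail) // irrevocable side effect
        }
        // ...the pure state transition would happen here in either mode
    }
}
```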
We had also put notifications into the event history service to allow remote components to be notified when new events were added, so they could pull events and update a local projection, which was useful for CQRS. But this had the end result of people using the event history as an event bus. That exacerbated the confusion between event sourcing and event-driven architectures, and the history started being used for transient events that had no real relationship to the business. We noticed that business users were asking us to filter events out of their view of the history of the articles they were interested in, because those events made no sense to them. Red flag. We had events in there like "tried to send an email to Bob, but it failed with an I/O exception": very technical events that were used to trigger behavior in the system and weren't recording interesting information about the business process, about what had happened, as would be interesting to our business users.

So, a number of mistakes were made, and we noticed them in two forums. We had a morning tea ritual: the team would gather in the kitchen every morning for tea, we would always have pads of A3 paper and pencils, and over tea we would talk about how the project was going and any concerns we might have, among other things. We're quite sociable. When issues were raised, we would brainstorm on the paper, think about different ideas, and be able to jump on them very quickly. Even if it took a while for an issue to surface, typically within one or two days we would be able to discuss it within the team and come up with a solution. We would record that as an architecture decision record and then implement it, so we wouldn't dig our hole too deep for problems that were obvious in our day-to-day development. We were also quite a big programme of work, with four teams working on this, and we had a regular inter-team architecture discussion once a week that we called the Wizengamot, after Harry Potter. People from the different teams would get together and discuss larger architectural concerns: typically integration between different parts of the systems, but also some of the more philosophical questions. What is the best way of doing event sourcing? Are we doing it consistently? We're seeing this problem and it's causing us this pain. Because the teams were moving at different rates depending on what they were working on, some people had got further through their journey of adopting the event-sourcing architecture than others, and we could discuss the trade-offs we were finding we had to make and share that knowledge throughout the whole department.

The thing that caused the most consternation was the storing of the current state in our command processors. When you first see that, you're probably thinking: haven't you just missed the entire point of event sourcing? Surely you should just be generating events in your command handlers, not maintaining this thing that should be a projection from your event history. And yes, that was the conclusion we came to.

We had small teams, but quite a lot of us across them, and a lot of people came from quite different technical backgrounds. Some people liked REST, some liked event-driven architecture, some liked relational modeling, some liked putting JSON blobs into document stores. Some people liked serializable transactions because they make things easy to think about; others were happy with eventual consistency. Some people liked functional programming, and others were more keen on object orientation and the hexagonal architecture. So we were finding a development and design culture that let people balance all these different backgrounds and come up with something the team could work comfortably in. There was creative tension, and that created a design that maybe wasn't entirely by any specific book, but was a balance between doing things by different books. And maybe, as the technical lead for that whole business area, I was being a little Machiavellian. I could see that the architecture was not evolving into a canonical event-sourced architecture, but I wanted to see what happened. Just because one book says this is the right way to do things, plenty of other books say other things are the right way. We were going down a particular route, balancing the preferences of a number of experienced, skillful engineers with good judgment; let's just see what happens. Maybe this is a good trade-off between different ways of doing things, or maybe it's not. I also wanted to build an intuition for everyone in the department, myself included, an engineering intuition about event sourcing, because it was so new to us. Where does it become difficult? Where are the edges where this particular architecture causes difficulties? Where is the sweet spot where it's really paying off? And sometimes making mistakes and then recovering from them is a great way of learning those intuitions.

So we continued that way for a bit, until the drawbacks of the mistakes we had made became so apparent that we couldn't really deny them anymore, and then we decided to go back to a canonical architecture and work out which parts of our hybrid architecture were worth keeping.
So what did we decide to do? The first thing was to address this confusion between event sourcing and event-driven architecture. The word "event" is used for so many different things, and in our systems we used it for many concepts that were actually very distinct. We had monitoring events fired from our applications into our application-monitoring infrastructure; we had events sent on RabbitMQ message queues; we had events in our event history; and we had user-interface events in our JavaScript. The word "event" was being used for things that needed to be treated very differently, so we made a concerted effort to change our terminology for the things that were put into the event history. We went through a number of attempts to come up with better terminology. We tried saying "business event" versus "monitoring event", but really it was the word "event" itself that was causing us problems. Just before I left, people were using the term "historical fact" for something that was in the event store, and "historical record" rather than "event store", in order to prevent people getting confused and using the store as a communication mechanism to trigger activity in the system. We had noticed that every new joiner to the project would end up making the same mistake, falling into the same confusion, and saying "wouldn't it be easier if we just triggered activity from the events in the event store?", and we would have to take them through the same learning process we had been through. So we started changing the terminology to avoid that. But once you've got a code base under continual development with the word "event" written all over it, that takes a bit of time. It was something we found very difficult to do, and I don't think we ever fully succeeded.

Once we decided not to use our event history as a message bus, we went back to using REST and HTTP for integration, as we had been doing from the start. In particular, we leaned very heavily on the use of hypermedia, in the REST architectural sense, which allowed us to write all of our peripheral applications and the core as context-independent components. Any component invoking activity in another component would have to pass a set of links to that component saying where data and notifications should be sent back to. That allowed us to easily integrate our new components with our legacy systems: components didn't know whether they were part of a new or old system, they only reacted to requests by following the links they received, and we could invoke our systems from test tools as easily as from the rest of the system. It also made it much easier to decouple release cadences, because we could evolve the on-the-wire formats the different components were using to communicate and use HTTP content-type negotiation to choose the specific format for any communication between a pair of components. Components could be deployed and could evolve their formats independently. We had a convention that you never get more than one or two versions behind, and therefore we could deploy all these different components at different rates across the organization without having to do lockstep deployments. What we found with trying to use an event store or an event-driven architecture for integration is that you can't really negotiate the format of an event once it's been recorded in a message queue or an event history, as we had made the mistake of doing, while direct integration allows us to negotiate those formats on the fly, dynamically.
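A sketch of what that convention might look like with http4k (the URIs, media types, and command shape here are invented for illustration): the caller passes the links the receiver needs, and the receiver advertises the representation versions it understands.

```kotlin
import org.http4k.core.HttpHandler
import org.http4k.core.Method.GET
import org.http4k.core.Method.POST
import org.http4k.core.Request

// The invocation carries its own links, so this component never hard-codes
// who called it or where results and notifications should go.
data class SubmitForReview(
    val articleUri: String,       // where to fetch the article representation
    val notificationUri: String,  // where to POST when the workflow has started
)

class PeerReviewApp(private val http: HttpHandler) {
    fun handle(command: SubmitForReview) {
        val article = http(
            Request(GET, command.articleUri)
                // Content-type negotiation: advertise both versions we can read,
                // so the serving component can deploy a new format ahead of us.
                .header("Accept", "application/vnd.example.article.v2+json, application/vnd.example.article.v1+json")
        )
        // ...run the business logic against `article`, then notify the caller,
        // which may be a new component, a legacy system, or a test tool:
        http(Request(POST, command.notificationUri).body("review-started"))
    }
}
```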
We made our command handlers transactional, kind of an obvious thing to do: we connected the command handlers directly to the event store database and ran transactions. We used the HTTP service only for reading events, so that other applications could build projections, and for feeding data up into Google Cloud for analytics. We also found that read-only view really useful for hack days: developers would use it to gather data and visualize it, those visualizations turned out to be really useful and popular, and some got folded back into the product, which was cool.

Then the biggest change we had to make was to remove the maintenance of the current state from the command handlers, turning it into a read-through cache. This was the move from our clunky hybrid architecture to a canonical event-sourced architecture. We would treat the current state as a snapshot. Every time a component wanted to read an entity, it would grab the current state from the snapshot, read in any events that needed to be applied to bring it up to date, perform its business logic, save any new events, and then, in a different transaction, save its latest up-to-date snapshot of the entity back to the current state, but only if its view was newer than the one in the database. That allowed a degree of concurrency, so that multiple commands could run in parallel on the same entity, and if they overtook each other you would always just keep the newest snapshot, sort of like a compare-and-set.

Over time we removed the relational model and just stored JSON blobs for the current state. There were a number of reasons for that. We had existing infrastructure code and working practices for evolving persisted JSON data concurrently with live transactions, so we could evolve our stored data structures without having to pause the system and without long-running transactions that might cause an undesirable pause. We also had lots of code already there for serializing JSON structures, and we leaned heavily on Postgres: Postgres can store JSON efficiently, and it can index on arbitrary properties inside your JSON, so we didn't need to denormalize our data into relational columns just for indexing. We indexed directly on the JSON data and could still efficiently pull rows out of that table. I think some of our components eventually moved to building their caches concurrently rather than as a read-through cache, just because they were working with larger chunks of data, but most of our systems kept the read-through model. We made these mistakes in the first system; we made slightly different mistakes in the later ones, but nothing as drastic, and as we built more systems on the event-sourced model we all ended up basically doing it like this.
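Put together, the canonical flow we ended up with looks roughly like this sketch (hypothetical interfaces, reusing the event types from the first snippet). The event append is the transactional write; the snapshot write happens afterwards, in its own transaction, guarded by a version comparison.

```kotlin
data class Snapshot(val version: Long, val state: Article)

interface EventLog {
    fun eventsSince(articleId: String, afterVersion: Long): List<ArticleEvent>
    // Appends within a single database transaction, failing if another command
    // has already appended past expectedVersion.
    fun append(articleId: String, expectedVersion: Long, newEvents: List<ArticleEvent>)
}

interface SnapshotStore {
    fun load(articleId: String): Snapshot?
    // Compare-and-set: persists only if this version is newer than the stored one,
    // e.g. UPDATE current_state SET state=?, version=? WHERE id=? AND version < ?
    fun saveIfNewer(articleId: String, snapshot: Snapshot)
}

fun applyEvent(article: Article, event: ArticleEvent): Article = when (event) {
    is Submitted -> article.copy(title = event.title)
    is TitleChanged -> article.copy(title = event.newTitle)
    is TransferredToJournal -> article.copy(journalId = event.journalId)
}

fun handleCommand(
    articleId: String,
    decide: (Article) -> List<ArticleEvent>, // the pure business logic
    log: EventLog,
    snapshots: SnapshotStore,
) {
    // 1. Read through the cache: start from the snapshot, top up from history.
    val cached = snapshots.load(articleId) ?: Snapshot(0, Article())
    val newer = log.eventsSince(articleId, cached.version)
    val state = newer.fold(cached.state, ::applyEvent)
    val version = cached.version + newer.size

    // 2. Run the business logic and append its events transactionally.
    val produced = decide(state)
    log.append(articleId, version, produced)

    // 3. Separately, write the snapshot back; parallel commands on the same
    //    entity are fine because only the freshest snapshot survives.
    val updated = produced.fold(state, ::applyEvent)
    snapshots.saveIfNewer(articleId, Snapshot(version + produced.size, updated))
}
```

On the indexing point, a Postgres expression index along the lines of `CREATE INDEX ON current_state ((state ->> 'journal_id'))` is the kind of thing that makes filtering on a property inside the JSON blob efficient without denormalizing; the table and column names here are assumed, not the production schema.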
So how did we recover? With the change to the maintenance of the current state especially, we were a bit worried about the amount of time it would take. But it took one of our developers about an hour to switch the system architecture over from the hybrid, clunky way we had been doing it to a canonical event-sourced architecture. We then ran the two implementations in parallel for a bit, just to make sure we hadn't changed the behavior of the system (we hadn't), and then we deleted the old one. We were really surprised how easy that was, and so the next question we asked was: why was it so easy to make quite a significant architectural change in such a short amount of time? We'd made some mistakes in the architecture, but we must have got some things right.

I think one of the key architectural decisions that allowed this to be done so easily was that we'd built the system on the hexagonal architecture. In the hexagonal architecture, the domain model and business logic of the system are implemented in the abstract, independently of any technical underpinnings that connect the system to real-world technical infrastructure. So we had a functional model of articles, authors and so on, and an OO model of how we find articles in our database. The commands that are passed in to invoke changes on those articles, all the interfaces by which the outside world can effect change, and the ways the business logic then causes notifications to go out or data to be persisted, are all defined as abstract interfaces in our domain model. We have an adapter layer, the middle hexagon of the hexagonal architecture, which maps those abstract interfaces onto technical infrastructure such as the database or HTTP routing, and then we have the technical infrastructure itself, such as the http4k library for HTTP routing, or Postgres and JDBC for the database. The abstraction used to get the current state of an entity in our system was completely separated from how that current state was maintained, so it was a very quick job to change the algorithm there, and it plugged into the business logic very easily, without any changes.
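The port that made the hour-long switch possible can be sketched like this (invented names, reusing the types from the earlier snippets): the domain model depends only on the abstraction, and only the adapter behind it changed.

```kotlin
// The port, defined in the domain model: business logic sees only this.
interface Articles {
    fun currentState(articleId: String): Article
}

// Adapter from the hybrid phase: read the hand-maintained relational row.
class RelationalArticles : Articles {
    override fun currentState(articleId: String): Article =
        TODO("SELECT ... FROM current_state WHERE id = ?")
}

// Adapter after the switch: rehydrate from the event history via the
// read-through snapshot cache shown earlier. Same port, new algorithm.
class EventSourcedArticles(
    private val log: EventLog,
    private val snapshots: SnapshotStore,
) : Articles {
    override fun currentState(articleId: String): Article {
        val cached = snapshots.load(articleId) ?: Snapshot(0, Article())
        return log.eventsSince(articleId, cached.version).fold(cached.state, ::applyEvent)
    }
}
```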
We had extensive automated tests: extensive functional tests that checked the system met the business needs of our users, and a lot of unit tests around specific algorithms and around the integration with the infrastructure in the adapter layer. But it was the functional tests that really paid off. We wrote our functional tests entirely in terms of the domain model. They had no technical details in them: no details of the user interface, nothing about databases or HTTP or JSON or XML or URLs, nothing like that. They were modeled in the domain model, and that allowed us to change the infrastructure and then use exactly the same test code, with no changes required, to rerun the tests against the system. That gave us a lot of confidence that the changes we'd made to the architecture were not affecting the essential behavior of the system.

We integrated and deployed continuously. All the commits the developers made went straight to master; they then went through a suite of extensive tests, got deployed into a test environment where our integration with the cloud environment was tested, and then got pushed to live if all the tests passed, with releasing controlled by feature flags. Compatibility between components we tested with consumer-driven compatibility tests, built on infrastructure provided by an engineering support team in the organization: in each component's repository you could declare its dependencies on other components, and your client tests of a component would be run in that service component's pipeline. That allowed us to write very focused tests around the integration between different parts of the architecture, and then our functional tests would say: yes, they can talk to each other, and the overall composition of them does the thing our business users need. And if you think that every commit you push might go straight into production, you have to have good test coverage, so that also fed back into having the good test coverage that supported architectural change. Those automated functional tests, because they were written purely in terms of the domain model, also enforced our use of the hexagonal architecture.

We were using a pattern for our tests called the screenplay pattern, which directly models the different users of your system in your tests. We used that model of users to also abstract away from how the test interfaces with the behavior it is trying to test. We would have an actor representing one of our users, an author say, or an editor. That actor would be playing the role of an author, and its API would be defined by an abstract interface with commands and assertions on it: "submit a paper", "I can see my paper", "I can change the title", that kind of thing. Then we had implementations of that interface against the system at different scales, through different interfaces. We could interface directly against the domain model in memory and run all our functional tests very fast and very reliably, testing only the business logic and not how it might be affected by the technical infrastructure we deploy on. The same test code could be run in an HTTP configuration, where the actors integrate with the system's service interfaces over HTTP and JSON, and we could run that against a system spun up on the desktop or in a cloud environment. The cloud environment had more concurrency and also included caching, so it would catch errors where we were getting our cache headers wrong and they were incompatible with our cloud infrastructure. We could put a web browser in there, testing exactly the same business logic, but with the actor now interacting through Selenium and the browser with the user interface itself, so if we'd made errors in our JavaScript or HTML/CSS we'd be able to detect that. And we could run these tests against production. We built our system to be testable in production, so we could spin up tests and run them against the production environment: all the commands flowing in would have a test flag set, and the events that got stored would be flagged as tests. They wouldn't be visible to real users, but our testers could go in and see their effect on the system, and we could use this to check whether our system was still compatible with the CDN.
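A condensed sketch of that screenplay idea (invented names): the test body speaks pure domain language, and the same role interface is implemented in memory, over HTTP/JSON, or through a Selenium-driven browser, so one test runs at every scale.

```kotlin
// The role's API: commands and assertions in domain terms only.
interface AuthorRole {
    fun submitsPaper(title: String)
    fun canSeePaper(title: String): Boolean
}

// An actor plays a role; how the role touches the system is hidden from the test.
class Actor(val name: String, role: AuthorRole) : AuthorRole by role

// One functional test: no UI, HTTP, or database vocabulary anywhere.
fun authorCanSeeTheirSubmittedPaper(newAuthorRole: () -> AuthorRole) {
    val alice = Actor("Alice", newAuthorRole())
    alice.submitsPaper("On the Habits of Reviewers")
    check(alice.canSeePaper("On the Habits of Reviewers"))
}

// The suite binds newAuthorRole to an in-memory implementation for fast runs,
// an HTTP/JSON implementation against a deployed stack, or a browser-backed
// implementation via Selenium; the assertions never change.
```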
The key point here is that all these tests were testing the same business logic. That little hexagon that's replicated across the world in different data centers is the same hexagon we run in JUnit, in memory, in our test suite in the IDE. The infrastructure doesn't add any extra business logic, but it can stop the business logic working correctly. What it also means is that, because these tests have to be able to run directly against the domain model in memory, you can't put any business logic anywhere else but in your domain model. We would have constant discussions in the team about whether it was worth maintaining this additional in-memory test configuration: we've got HTTP, we've got the browser; if we could just get rid of one of them, we'd have less test infrastructure to look after. Every time people complained that it was difficult to write a test directly against the in-memory model, we would look into it, and we would find: ah, that's because we've put business logic in the wrong place. You've got business logic in the HTTP routing layer, or in your persistence layer. We'd refactor the code to move the business logic into the domain model, making the domain model a more accurate reflection of what our system was doing, and then the difficulties of maintaining that in-memory test configuration dissipated. Eventually we were quite happy using this as an architectural constraint, and the constraint really paid off when we had to make big architectural changes, switching out our misuse of the event-sourcing architecture for something more canonical. Our tests were ready to support us, and all our business logic was already separated from the technical code we were re-engineering, so those tests gave us a lot of flexibility for quite large-scale technical and architectural changes.

So, lessons learned. Thanks to this experience we gained a much deeper, intuitive understanding of the trade-offs, sweet spots, and difficulties that are inherent in an event-sourced architecture. But it also gave us a good understanding of the value of a hexagonal architecture, and of the value of an acceptance-test-driven development process. Not bad for an hour's refactoring; I think that's money well spent.

Three specific lessons I took away. First, the mistake we made was not being really clear, in an event-sourced architecture, about what is a command and what is an event. We had got confused by some of the literature saying that event-sourced architecture and event-driven architecture are orthogonal, and we had basically misunderstood what the writers meant by orthogonal. Distinguishing between an invocation that is going to change the state of the system and cause behavior to happen, behavior that may be irrevocable, and a fact that has been recorded about something that happened in the past, is absolutely key. If you mix those two things up, the architecture can become very messy. It also means you might never be able to properly replay your events, because they're expected to create irrevocable changes, and you have no way of recreating the same change, because it was never recorded. If I were building an event-driven architecture, I would create a clear distinction between events that are transient, flowing across the wire and causing changes to happen, and historical facts recorded about things done in the past; event handlers in the event-driven architecture would be performing commands against the event-sourced architecture. You can see that even talking about this gets really confusing, which is where our change in terminology, "recording facts in the historical record", becomes a lot clearer, especially when you've combined an event-driven architecture and event sourcing. For us this was a much more important lesson to learn than command-query separation. We did have some command-query separation, but the distinction between command and query is nowhere near as significant as the distinction between command and historical fact.
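That first lesson is the kind of thing a type system can enforce. A sketch (hypothetical names, reusing `Provenance` from earlier) of keeping the two concepts apart so a replay can never re-trigger an irrevocable effect:

```kotlin
// A command is a transient request: it may be rejected, and handling it may
// cause irrevocable behavior such as sending email.
sealed interface ArticleCommand
data class InviteReviewer(val reviewerId: String) : ArticleCommand

// A historical fact is an immutable record of what already happened:
// folding over facts must be free of side effects and safe to repeat.
sealed interface Fact { val provenance: Provenance }
data class ReviewerWasInvited(val reviewerId: String, override val provenance: Provenance) : Fact
data class InvitationEmailWasSent(val reviewerId: String, override val provenance: Provenance) : Fact

// Handling a command records facts; replaying facts never handles commands.
fun handle(command: InviteReviewer, provenance: Provenance): List<Fact> =
    listOf(ReviewerWasInvited(command.reviewerId, provenance))
```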
Second, event sourcing plays really well with other architectural styles. In our case we used event sourcing for the domain and data model; we used the hexagonal architecture within our processes, within our Kotlin code; we had CQRS for data flow within our applications; but we didn't expose our event history to any applications outside a single team. It was purely intra-application and intra-team, and that allowed each team to evolve the event formats in its event history as it saw fit, without having to coordinate with lots of other teams, who could rely on REST instead. We used REST to integrate applications, both within the organization and outside it, leaning heavily on content-type negotiation to decouple release cadences and on hypermedia to decouple the sources and sinks of data and control flow.

Third, avoiding any technical detail in functional tests is extremely powerful. The functional tests shouldn't be talking about what buttons are being clicked, or about JSON formats or URLs or databases; they should be written purely in terms of the domain model. That helps you use them as an artifact in which to discuss what the system should do with people who aren't technical, who aren't on the development team. It also means they're decoupled from the architecture, so when you make architectural changes they support you in those changes, rather than slowing you down because they need to be modified along with the architecture. Using the hexagonal architecture and the screenplay pattern together is what allowed us to make that happen. Thank you. [Applause]

All right, we have a bit of time, so if you have questions, Nat would be very happy to answer them. There's a mic over there.

Q: You said you had a lot of technical detail in the event store because it was used as a messaging bus, so I imagine you had things like commands and hooks stored in there. Did you end up cleaning it out, did you take that freedom, or did you just leave it there? When you migrated, did you touch it at all, or leave everything as it was?

A: We didn't have commands in there; they were events, and the system would look at those events and decide what to do on a regular basis, triggered by schedulers, for example, or by notifications coming from Postgres. They just became obsolete when we changed the way the architecture worked to not rely on them. Luckily we caught this really quickly, during our regular chats over tea, so it was something that never had to go to production: we never actually recorded those events in production. The code might have gone into production, but it never took effect. Also, we weren't in a regulated industry, so we didn't have to keep the original events in the original form in which they were first recorded. We could run migrations over the event store, and we did occasionally go in and delete events that were no longer necessary. For example, because we were testing in production, we had a lot of events marked as test events that were just not important to keep, and every now and again we would reap those.
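The reaping described in that answer could be as small as this sketch; the table and the JSONB `test` flag are assumed for illustration, not the production schema.

```kotlin
import java.sql.Connection

// Periodically delete events that were only ever test traffic from
// testing-in-production runs; this was possible because nothing required
// the history to be kept in its original, complete form.
fun reapTestEvents(connection: Connection): Int =
    connection
        .prepareStatement("DELETE FROM events WHERE (metadata ->> 'test')::boolean = true")
        .use { it.executeUpdate() }
```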
Q: You mentioned events that don't mean anything to your users. One pattern I've been using is recording something like "accepted" and then "completed", which is obviously meaningless to users. Would you record that stuff in almost a separate stream, then do the work, and record when it's done?

A: We did have something similar in our architecture. For example, if an editor wanted to invite a reviewer to review a paper, the invitation would be recorded as an event, and then we would have to know that we'd actually sent the email. This was something where we hadn't come up with an answer that made us all entirely happy. The same with RESTful interaction: do we store URLs in our events? In one of our systems, when we received a command we would have links back, and we would store those links in the event, but it felt wrong to put links to systems that might come and go into a historical record of business-process changes that would probably have a longer lifetime. So, to be honest, we didn't have one answer. For email sending we had both "the editor has invited the reviewer" and "we actually sent them an email", and users were happy seeing those, even though the split was the result of a technical decision that those two things had to be separate. The RESTful links we never showed to the users, because they wouldn't have cared about them, and if I did it again I would keep the RESTful links in a separate table relating to interactions with other systems, rather than putting them in the event store. That was another mistake we made: putting those in the event store.

Q: You said you used the hexagonal architecture. How much does the domain model know about event sourcing? Do you keep it completely agnostic, or does it know that it's being event-sourced?

A: To be honest, it's a bit of a gray area. The abstraction we used for persistence was "give me the current state of the entity", and that was abstracted entirely away from how the state was recorded: whether it was reconstituted from an event history or just loaded from the current snapshot. But the logic knew that when it did something, it recorded an event. I guess we were somewhat lucky in that we'd put the reconstituting logic behind that interface, somewhat in the adapter layer; or rather, we loaded the state from the relational model in the adapter layer, and that turned out to be the perfect place to put reconstitution from a stream of events. If we'd been entirely canonical in our event-sourced architecture from the start, the fold over the event history to create the current state would probably have been in the domain model, entirely abstracted away from the storage of the history.

Q: You mentioned you don't have an event bus anymore. Can you explain how your projections work, then? Is it a pull model or a push model?

A: It's a pull model.

Q: And do you use hypermedia as the way of pulling these events from the event store?

A: The hypermedia was actually used more between applications. We used the RESTful integration between applications looked after by different teams, and we kept the projections within an application run by one team: one team would look after the projections required of that data for the different front ends they looked after. At the size of our system, and the way we architected it, each major event-sourced bounded context was looked after by a single team that looked after its various front ends and back ends. The one situation where that wasn't the case was our data analytics team. They were in the department, but they just streamed the events out through their own ETL jobs into BigTable and the like in Google Cloud.
We told them we weren't going to offer any guarantees about the stability of the JSON they got for those events: if it changes, you just have to suck it up. And they were happy with that.

Q: Don't use an event bus for projections. All right, thank you.

Thank you, Mikey; thank you, Nat. [Applause]
Info
Channel: Domain-Driven Design Europe
Views: 5,795
Keywords: ddd, dddeu, ddd europe, domain-driven design, software, software architecture, cqrs, event sourcing, modelling, microservices, messaging, software design, design patterns, sociotechnical
Id: osk0ZBdBbx4
Length: 52min 46sec (3166 seconds)
Published: Fri Oct 02 2020