The Fundamentals Of Software Development | Martin Fowler In The Engineering Room Ep. 1

Dave Farley: Hi, my name is Dave Farley of Continuous Delivery, and welcome to The Engineering Room, a series of conversations with influential people from our industry. This episode is a little different from the usual content on the Continuous Delivery channel: it's the first of a new mini-series, in addition to our usual weekly output. These discussions are meant to explore software development from a broad perspective, and are meant in part as a small kind of Christmas present to our viewers and subscribers. Thank you for your support over the past year. If you'd like to see more content like this, please do subscribe and let us know your thoughts in the comments below. Today I'm talking with a friend of mine who has certainly significantly influenced my thinking, and almost certainly yours too. Martin is one of the most famous people in our industry, the author of many important, influential books, and one of the original authors of the Agile Manifesto. Martin has a very wide perspective on software development; he's opinionated and, at least to my mind, usually right. On his website, Martin has compiled a valuable resource of definitions, learning and insights that I regularly dip into, to remind myself of the authoritative definition of something, or to track what's on Martin's horizon. It's a way of crystallizing ideas so that they resonate with people. If you've ever refactored your code, used dependency injection, or created a DSL, then you're building on the shoulders of this giant of our industry. So it's with great pleasure that I introduce you to Martin Fowler. Hi Martin, thank you very much for joining us.

Martin Fowler: Happy to see you again. It's been a while.

Dave Farley: It has been a while, pandemic aside; even before that it had been a while. I know that you do a lot of work mentoring authors and speakers and others in their next steps. You've certainly helped me, and were kind enough to review the early chapters of my new book even though you were overloaded with work at the time. And you mentioned several interesting-sounding projects that you're helping people with. Would you mind talking about those a little, and describing the sorts of things you're looking at at the moment?

Martin Fowler: Yeah. I reached a point earlier this year when I realized I couldn't really do any more writing for a while, which was a big shock to my system, because I've always had a big writing project going on, a book or something. But I realized I'm actually spending more of my time mentoring other people, and it's better for me to do that, because the reality is I haven't been directly involved in a sizeable software project for a very long time now, and I've got distant. I could try to change that, but hey, that would be work. What's easier for me is to work with people who are still connected with the realities of day-to-day software, and help them get their ideas out. Because, let's face it, that's basically all I've done anyway. I've never been an original creator of ideas; I've always been someone who's looked at what somebody else has come up with. Take refactoring: I didn't invent refactoring. Other people developed it, and then I looked at it, saw it was a really useful technique, and thought I could explain it well. I'm a good explainer. So my current mentoring is really about helping other people with their explanation projects. I've got three at the moment, three book-length projects that I'm deeply involved in, which is why I didn't have time to add a fourth, as well as various other stuff popping up on my website, odd single articles. The three main book projects are all really quite interesting and very different. The first one is with a colleague of mine in India, Unmesh Joshi, who's been working on a
set of patterns around distributed systems. It started out because he felt that our folks at Thoughtworks needed a good grounding in what's going on inside the distributed systems that we use all the time: Kafka, Cassandra, all of these systems out there doing quite sophisticated distributed work. Even though you're not going to build your own messaging system or database, you often need a good sense of how they work, because without that you don't know how to utilize them properly, and you don't know how to debug problems. I wouldn't necessarily call it mechanical sympathy, because you're not getting down to the hardware level, but you do need some sympathy with the underlying platform you're working with, a kind of platform-ish sympathy, for the times when you need to focus on performance or on the way things are operating.

Dave Farley: Sorry, I didn't mean to interrupt, but I was going to say: I've characterized the distributed computing problem as our version of quantum mechanics. You take a relatively tiny step in your software and you're in very deep water quite quickly, quite easily, if you're not careful. So there are some principles that matter a lot in guiding a design to cope with that explosion of complexity you buy into as soon as you've got bits of software working on more than one computer.

Martin Fowler: Yeah, exactly. So he felt that our folks needed better exposure to that. In particular, we hire a lot of people straight out of college, and they don't have enough background in this. So he started setting up some training work, primarily based in our India operation, which is pretty sizeable these days, and he contacted me, and we developed the idea of pulling this stuff out in terms of patterns, because patterns are a technique I've always found very effective for explaining the different solutions you have to a problem, for choosing between them, and for knowing how they fit in a context. With patterns you don't get a recipe, "do these ten steps and you reach happiness"; it's more like "here are twenty things you have to consider, and you have to navigate between them and choose the trade-offs". The way he's developed this is that he's gone into the source code of things like Kafka and Cassandra, all sorts of distributed systems in lots of different languages, and figured out exactly how they handle coming to consensus and things of that kind, then pulled the patterns out, and we've worked together to describe those patterns. He's been publishing them on my website over the course of the last year, year and a half. We've just got another batch going into copy-editing review that covers things like Paxos, some of the replicated-log stuff behind Raft, and how complicated two-phase commit can get when you're doing this kind of thing. It's really interesting stuff; the Paxos material in particular took quite a bit of mental effort to understand, let alone explain. I expect this will turn into a book. We haven't lined up a publisher yet, but I'm not expecting any problems finding someone who'd want to carry it through.

Dave Farley: And that's a very deep technical topic to dive into. It's a topic I find particularly interesting. I started working on distributed systems a very long time ago, and to some degree I think the technologies have made the problems more difficult, because it used to be harder to do; it used to be harder to get started.
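The two-phase commit protocol Martin mentions is simple in the happy path, and all the difficulty lives in the failure cases. As an editorial illustration (a hypothetical sketch, not Unmesh Joshi's actual pattern code), the happy path looks like this:

```python
# Minimal two-phase commit coordinator sketch (illustrative only).
# Phase 1: ask every participant to prepare (vote). Phase 2: commit only
# if all voted yes, otherwise roll everyone back. Real implementations
# must also handle timeouts, crashes, and coordinator recovery, which is
# where the complexity discussed above comes from.

class Participant:
    def __init__(self, name, can_commit=True):
        self.name = name
        self.can_commit = can_commit
        self.state = "init"

    def prepare(self):
        # Vote yes/no and durably record the prepared state.
        self.state = "prepared" if self.can_commit else "aborted"
        return self.can_commit

    def commit(self):
        self.state = "committed"

    def rollback(self):
        self.state = "aborted"

def two_phase_commit(participants):
    if all(p.prepare() for p in participants):
        for p in participants:
            p.commit()
        return "committed"
    for p in participants:
        p.rollback()
    return "aborted"

nodes = [Participant("db"), Participant("queue")]
print(two_phase_commit(nodes))   # committed

nodes = [Participant("db"), Participant("queue", can_commit=False)]
print(two_phase_commit(nodes))   # aborted, one participant voted no
```

The sketch deliberately omits the hard part: what to do when a prepared participant never hears the coordinator's decision.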
Dave Farley: Now it's so easy, and the tooling is so good, that making things interact remotely is a small step that's almost invisible, but you're still buying into all this complexity. I'm certainly not saying that people today are less smart than people a long time ago; it's just that a long time ago you were more exposed to the problem earlier on in the process of taking that step.

Martin Fowler: And I've always argued that you want to avoid distribution as much as possible, because it's a complexity booster. If you can avoid building a distributed system, avoid it, please, because it's going to make your life so much easier. Just like concurrency: if you can avoid concurrency, do so, please, because it's going to make things so much better. But there are times you cannot, or you're building on top of something that has this distributed substrate in it, and it's going to do weird things to you if you don't understand what's going on; you just have to deal with it. So this is a way of at least trying to visualize and explain some of those underlying things, so that when weird things happen, you know why.

Dave Farley: One of the things I think is interesting is that, problematic as it was in other circumstances, the relational database model, the three-tier architecture kind of system we all built, gave us a model for synchronizing changes that we didn't have to worry about too much when programming, because the database looked after part of the problem within the scope of a transaction. As soon as you start doing this with technologies that don't, it gets a bit scary. I did some consultancy for a client, whom I shan't name, a very large development, and they were using a NoSQL data store that didn't have any transactional integrity. I looked at it and it made me as nervous as hell, because as far as I could see it was just the luck of the draw what state the system ended up in, depending on which record landed first. There was no management of the concurrency in that system, because people weren't thinking about these sorts of things. So it seems important to worry about these sorts of principles.

Martin Fowler: And transactions, even with a single database, aren't necessarily a simple solution, because you can't hold a transaction open for as long as you might need to, so you have to work around that. That's something we got into in the Patterns of Enterprise Application Architecture book I wrote twenty years ago, where David Rice sat down and worked out and explained some of the patterns you need to handle that kind of thing: what we'd call business transactions, which you keep open for quite a long time so that people can do their work, but which you have to resolve against system transactions, which are only open for a short time, because holding a system transaction open for long leads to things being blocked. Even in a single-process system, transactions certainly help, because you need the ability to take five things and know the change is atomic, but you still have to work around them. When we move into a distributed system there's a whole further bunch of problems, because now you haven't got one clear source of truth, or at least you can easily get yourself into a situation where you haven't. And often the solution is to say that one node has to be the leader: you talk to the leader, and the leader ensures that the followers match.
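The business-versus-system transaction split described above is often handled with an optimistic offline lock: the long business transaction works outside any database transaction, and a short system transaction applies the change only if nothing moved underneath it. A minimal sketch (hypothetical code, not an excerpt from the book):

```python
# Sketch of an optimistic offline lock (illustrative only): each row
# carries a version; a commit is a short check-and-swap system
# transaction that rejects edits based on stale reads.

class ConcurrentModification(Exception):
    pass

class Store:
    def __init__(self):
        self.rows = {}  # key -> (version, value)

    def read(self, key):
        return self.rows[key]  # (version, value) as seen by the reader

    def commit(self, key, read_version, new_value):
        # The short system transaction: succeed only if unchanged.
        version, _ = self.rows[key]
        if version != read_version:
            raise ConcurrentModification(key)
        self.rows[key] = (version + 1, new_value)

store = Store()
store.rows["order:1"] = (1, {"qty": 5})

# Business transaction A reads, then the user edits for minutes...
v_a, val_a = store.read("order:1")
# Business transaction B reads the same row and commits first.
v_b, val_b = store.read("order:1")
store.commit("order:1", v_b, {**val_b, "qty": 7})

# A's later commit fails instead of silently overwriting B's change.
try:
    store.commit("order:1", v_a, {**val_a, "qty": 9})
except ConcurrentModification:
    print("stale edit rejected")
```

The database transaction per commit stays milliseconds long, while the human-scale business transaction can stay open as long as it likes.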
But then of course, how do you get a leader? What happens when the leader goes down? All that kind of stuff comes in.

Dave Farley: And that's when distributed consensus protocols like Paxos and Raft come into play.

Martin Fowler: Exactly. That's part of it, and it's been very interesting to dig into. I'm glad it's somebody else going through all this source code to figure out how these open-source systems do it, rather than me. And he's doing it the way I would, which is: "I need to understand what's going on, so let me build my own simple implementation just to illustrate the key point", referring back to the real thing, comparing the two back and forth, and then using those code examples to illustrate and explain in the book material.

Dave Farley: I always think that's one of the interesting parts of writing or publishing ideas to help people understand them. It's no good just showing somebody an enterprise system, because it's too messy and complicated to see the wood for the trees. You have to synthesize examples and descriptions that are realistic enough to demonstrate a concept without being realistic enough to hide it.

Martin Fowler: And that is one of the challenges of coming up with examples. It's always going to be a toy, because you can't make it real; as you said, if it's fully realistic you won't be able to understand it, but at the same time you've got to catch the core of the problem. Coming up with a good example design is hard. In my own writing, with the refactoring book for instance, I might spend two days just coming up with an example, and then once I've got the example, the actual prose and explanation I can knock off in a few hours. But finding the right example, one that illustrates exactly what I'm after, can be really, really hard. Unmesh is finding this as well: getting something that shows what's going on without being overwhelmingly complicated is a tricky balance, and it takes a lot of time.

Dave Farley: Indeed. So where does this book sit between things like Patterns of Enterprise Application Architecture and Gregor Hohpe's book on patterns for, I've forgotten the title of his book, but it's about messaging?

Martin Fowler: It fits definitely within that family. I think of it as a similar kind of level: understanding how the core distributed systems you're building on work, to a degree that gives you that appreciation, that sympathy. It's not specifically about how you organize a messaging system, which is what Gregor's book does, but when you're working with something that operates in this kind of way, you need that sympathy for how it works under the hood, at least to some degree.

Dave Farley: It sounds interesting. Is it a book that's near completion, or just starting out? You said you've been at it for a while.

Martin Fowler: We're well into the process. I don't know that Unmesh and I have really sat down and said where we think the ending boundaries lie. This last batch looked at consensus algorithms, Paxos, Raft, and two-phase commit, and those took a particularly long time to work through, because they're complicated things. At some point we'll sit down and get a sense of how far along we are. But the nice thing is that readers can look at it now. The material I'm mentioning isn't out there as we speak, though it may be by the time this goes out, but there's a good chunk of material on the site at the moment about replicated logs, high-water marks, low-water marks, and things of that kind.

Dave Farley: Cool. I quite like eventual consistency models as well, where you can use them appropriately, and the match between drawing the right kind of seams in your problem domain so that the eventual consistency doesn't trip you up. I think that's one of the things we managed to get reasonably nicely organized with the LMAX system you wrote about on your site a few years ago. We didn't really mind that the order history wouldn't necessarily be perfectly in sync with the current order picture, as long as each was true in the context in which you were going to view it. Those kinds of decisions are important: you have to rely on a certain degree of eventual consistency if you're going to get the throughput.

Martin Fowler: Again, it's the classic safety-versus-liveness trade-off. I can make a perfectly safe system; it just won't do anything. I need it to be alive as well, so you're trading those off all the time. Everything in a concurrent or distributed system, and a distributed system is just a form of concurrent system, is a trade-off of safety and liveness. The difference is that in a single-process concurrent system either all of it falls over or none of it does, whereas in a distributed system bits and pieces fall off all the time, and you've got to deal with that. But I'm very excited by the work Unmesh is doing. I think he's doing a really good job, and I think it will be a really solid book for people in the future to learn about how distributed systems work in practice.
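The high-water mark mentioned above is one of the simpler patterns to sketch: in a leader-follower replicated log, an entry only becomes visible once a majority of nodes hold it. The following is an editorial illustration of the idea (not code from the patterns series):

```python
# Minimal high-water mark sketch (illustrative only): the leader advances
# the high-water mark to the highest log index stored on a majority of
# replicas, and only entries at or below the mark are served to readers.

class Leader:
    def __init__(self, cluster_size):
        self.log = []                 # replicated entries
        self.cluster_size = cluster_size
        self.match_index = {}         # follower -> highest stored index
        self.high_water_mark = -1     # last index known safe on a majority

    def append(self, entry):
        self.log.append(entry)
        return len(self.log) - 1      # index of the new entry

    def ack(self, follower, index):
        # A follower confirms it has stored entries up to `index`.
        self.match_index[follower] = index
        self._advance()

    def _advance(self):
        majority = self.cluster_size // 2 + 1
        for i in range(len(self.log) - 1, self.high_water_mark, -1):
            # The leader itself counts as one replica holding index i.
            holders = 1 + sum(1 for m in self.match_index.values() if m >= i)
            if holders >= majority:
                self.high_water_mark = i
                break

    def committed(self):
        return self.log[: self.high_water_mark + 1]

leader = Leader(cluster_size=3)
leader.append("a"); leader.append("b")
print(leader.committed())      # [] -> nothing acknowledged yet
leader.ack("follower-1", 0)
print(leader.committed())      # ['a'] -> index 0 is now on 2 of 3 nodes
```

Entries above the mark exist but are invisible, which is exactly why a reader never sees data that a leader crash could still roll back.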
Dave Farley: Cool. And of course "in practice" is important, because you certainly do run into things that are talked about a lot in theory but not used in practice, because there are practical gaps in that theory.

Martin Fowler: Paxos, yeah.

Dave Farley: We came to a similar conclusion. I can't remember what I was looking at, it might have been when I was involved in the Reactive Manifesto, but I remember looking into Paxos and Raft a little, and somebody pointing out that there might be some problems.

Martin Fowler: Right. So the second one I'm going to mention is quite different, and a bit unusual, because most of what I like to write about is stuff that's already fairly well known, just not very widely disseminated. When I wrote the refactoring book it wasn't new, in the sense that people had been doing it for years; it just wasn't widely known. Similarly, these distributed patterns aren't new; they're in the open-source products we all use, but they're not widely enough understood, and so they need to be disseminated more. This is a slightly different thing: a book that Zhamak Dehghani is working on, on data mesh. This is more speculative than new, because we're in the middle of the first data mesh projects around the world in our Thoughtworks practice, so it's definitely bleeding-edge stuff. Although in a way it's not, because it takes principles we've been using for a couple of decades and says we need to apply them to analytic data. The basic idea of data mesh is this. When a lot of people look at analytic data, and try to take analytic data from across a large organization, their approach is: centralize it, get it all into one big data store, the data lake approach, use centralized tools to understand that data, and then disseminate it out to its users. What Zhamak, and many other people within Thoughtworks, surprise surprise, are saying is that the centralized approach just doesn't work in practice. What you need instead is a decentralized approach, where the domain that creates the data is responsible for publishing an analytic data feed, and thinks of that data feed as a product. So as well as the operational systems that work on that data, you also have an analytic data feed, which is a product you think about with product management. The organization then has to build a platform that lets people publish and consume these data feeds, and come up with a decentralized governance approach to handle it all. Now, if I want to pull together a new analysis of what's going on in the organization, instead of going to a centralized warehouse or data lake that has everything I could ever want, except that I'm not really sure what any of it is, I look at the particular data products from the different domains. I can see what they do; I can understand where the data comes from, because I'm looking at the actual data product teams themselves. If I need to change what data is available, say to look at some new data they currently aren't publishing, I can talk directly to the people who have that data and connect up with them. It's a much more decentralized approach, but one I feel is much more realistic for dealing with this kind of data, because the throw-it-all-in-one-big-centralized-place approach always makes me feel terribly uncomfortable. For a start, there's the modeling problem. What is a customer? Go into any large organization and you'll find they want a single view of what a customer is, a single definition, and you're smiling because you've seen it too, right? Different parts of the organization will look at customers differently, quite naturally; they'll have different things they class as a customer. You can either try to unify all of that together, which is a horrible mess, or you can say: we understand we're going to have these different views, we're going to have to live with them and manage the complexity of that. In many ways it's like agile, which is about realizing you can't predict the future. Our job would be wonderfully easier if we could predict the future, but since we can't, we have to manage the complexity. Similarly, I think of data mesh as managing the reality of the complexity of analytic data. The problem, of course, is that not many people are doing it; most are going with a centralized approach. So we're having to figure out the tools, the governance structures, how to make this work, and we're doing that live, with the clients who realized that the centralized approach wasn't going to work. What Zhamak is doing is taking the experiences we've learned so far and putting them into book form, so that other people can be at least slightly more ahead of the game than we've been in our first pioneering projects.

Dave Farley: That sounds interesting. There's another idea that's crossed my event horizon recently called data pipelines; I don't know whether the people talking about that mean something similar. At least, as I try to understand the way you've described it, the pigeonholes I'm sliding it into are these: one of the things we did at LMAX was that the whole system was event-based, so analytic data like you're talking about was generated as a stream of events like anything else, but we had a
number of application-specific points in the system that would consolidate the data to tell a particular story about it. Is that the kind of thing you're talking about? How do you synthesize a picture off these streams of events?

Martin Fowler: I suppose that's part of it. But really the challenge is when you're dealing with a really large enterprise. I mean, LMAX was a big project, but it wasn't that big. When you've got a hundred different teams scattered around a big organization, just dealing with that scale is the problem. You could effectively centralize all the data for LMAX because you had one team that understood it; you also had the core model of LMAX in your head, it fits in your head. If you're talking about an airline like Delta Air Lines, you can't fit all of that in your head; it's too big, too complex. And that's when you have to decentralize your operational systems, and also decentralize your analytics.

Dave Farley: The bit I'm trying to understand, and you're absolutely right about the difference in scale, is how you tell a story. If the output of a particular part of the system is streaming out, say, a bunch of orders, at some point you might need other information to synthesize the picture you're interested in, if you're talking about analytics. What I thought you were saying was that you have these streams in the architecture of the system generating this information, and then you build a focused application targeted at telling one particular picture you're interested in: it gathers the streams it cares about and paints the picture, doing whatever it needs to consolidate that picture from the different streams of events it's interested in. Am I missing the point?

Martin Fowler: The way I look at it is this. Let's say you want to do some analysis that requires you to grab data from different parts of an organization. Again, think of an airline: part of the information I need ties into customers, and some of it is perhaps about the operations of the airline. Where do I go to do that? In a centralized approach there would be, somewhere, a centralized data lake with all of this customer and operations information, in whatever form it happens to be in: a big one-stop shop where I go and grab it. But the trouble is, do I really know where that data has come from? How is it organized? Again, when we've got different views of what a customer or a flight is, it's difficult to tie these things together. In a data mesh view of the world, instead of having this one big place you go to, you go to the individual business units themselves. They publish the data about what their business unit does as a data product. They document that data product; they apply some degree of product thinking to how they deal with it. You can understand where the data is coming from, because it's coming from the part of the organization that created it. So you go out and use the platform to find where those data products are and pull them together. But the point is, you're not getting this centralized management approach.
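The decentralized discovery Martin describes can be sketched very roughly in code. This is an editorial illustration with invented names (there is no single "data mesh platform API"); it only shows the shape of the idea: owned, documented data products found through a shared registry rather than one central lake.

```python
# Illustrative data mesh sketch (all names invented for this example):
# each domain team publishes a described, owned data product, and
# consumers discover products through a platform registry.

from dataclasses import dataclass, field

@dataclass
class DataProduct:
    name: str
    domain: str           # owning business unit
    owner: str            # whom to contact about the data
    description: str
    schema: dict          # field -> type, the documented contract
    rows: list = field(default_factory=list)

    def read(self):
        return list(self.rows)

class MeshRegistry:
    def __init__(self):
        self.products = {}

    def publish(self, product):
        self.products[product.name] = product

    def find(self, domain=None):
        return [p for p in self.products.values()
                if domain is None or p.domain == domain]

registry = MeshRegistry()
registry.publish(DataProduct(
    name="flight-delays", domain="operations",
    owner="ops-analytics@example.com",
    description="Daily delay minutes per flight",
    schema={"flight": "str", "delay_min": "int"},
    rows=[{"flight": "DL42", "delay_min": 12}],
))
registry.publish(DataProduct(
    name="customer-profiles", domain="customer",
    owner="crm@example.com",
    description="Customer view as seen by the loyalty programme",
    schema={"id": "str", "tier": "str"},
))

# An analyst pulls data from the owning domains directly: the owner and
# schema say who to talk to and what the data means.
for product in registry.find(domain="operations"):
    print(product.name, "->", product.read())
```

The interesting part is what the registry entry carries: ownership and a documented contract, which is exactly what a row in an undifferentiated data lake lacks.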
Dave Farley: Yes, consolidation.

Martin Fowler: And how they publish that, whether as feeds of events or as tables of some kind of consolidated data, is up to the individual data products. So one of my data products might be a fairly continuous feed of events, but I might also publish something that consolidates some of that information together within my bounds and provides that consolidated data, and you can use whichever of those products you think is appropriate for the analytics you're doing.

Dave Farley: Cool, sounds interesting.

Martin Fowler: It is. As I said, it's definitely what we're working on at the moment. There's a long history of doing this the other way, but I think this is a necessary direction, because when you're talking about large enterprises, trying to do anything that involves centralizing data too often proves a fool's errand, and that's doubly true of analytic data, because you've got a lot of history involved. You have to start thinking a lot about the temporal aspects of the data, bitemporality for instance, so that you understand not just what the data was as of a time, but also the time at which you understood the information. Because one of the problems with historic data is that it's not just the data that changes; your understanding of that data changes.

Dave Farley: Yes, that's why I refer to time as not the fourth dimension but the fourth and fifth dimensions. It ends up being a lot more complicated than you'd think. Well, it sounds like an interesting book.

Martin Fowler: Yeah. And that one is not going out through my website; it's going through O'Reilly, so it's actually going to be an O'Reilly book, hopefully out early next year, and I believe it's beginning to appear in the O'Reilly preview system as well. Zhamak has written a couple of articles on martinfowler.com about data mesh, so if you want a broad overview of what data mesh is about, hunt those down.

Dave Farley: Hopefully we'll provide links in the show notes or something. Of course.

Martin Fowler: But I definitely think this is the kind of direction we want to go in when it comes to managing analytic data across any kind of large organization.

Dave Farley: One of my things, in terms of approach, is techniques to manage complexity, and wanting one data structure to rule them all is not a way to manage complexity. Your model either gets too big and bloated, because it has to hold every possible piece of data, or it gets horribly abstract and just becomes record related to record, and you get nowhere that way. So you're always trapped when you deal with these things. I think this was one of the big things that came out of Eric Evans's work in domain-driven design: the notion that you have to think of any large organization in terms of multiple bounded contexts, understand each context individually and how it relates to its neighbours, and build that up.

Martin Fowler: And in fact that's very much what data mesh builds on. It says you're going to have these bounded contexts, and you have to think in terms of bounded contexts. Another way of putting it is that it's basically applying the strategic design thinking of domain-driven design to analytic data.

Dave Farley: That was precisely what was going through my head as you were describing it. I had an interesting conversation with Eric a few weeks ago; we were talking about something else, microservice architectures, and he made an observation that clicked with me, which I liked: the protocol of exchange of information between the services is a distinct bounded context. It's not the same as the contexts the services themselves represent, and there's not one that's shared between all
the services that there are multiple ones in different conversations but but i i this is similar this this is a separate kind of context in which and has separate needs no doubt i would imagine yeah yes um very much so um it suddenly triggered a point uh something left to yeah there's actually it was the point that uh there's an article currently being worked on again may have been published by the time this is uh visible by uh brandon buyers who's again one of my colleagues here in in north america and he talks about the fact that when people do these architecture charts they put a lot of emphasis on the blobs but often the hard part is the lines connecting the blobs integration and how into a lot of companies think oh integration is some kind of simple thing um but it's actually not well as we know it isn't and also his point is it's not something you can buy you can buy no blobs but you have to build the lines yourself and building those lines is not easy and it is actually often a part of a critical part of a an organization's competitive advantage is if you're able to do integration better than your competitors um it can really give you a noticeable edge and so i i was very uh taken with uh what he's saying in that article um so hopefully that will be published in the next few weeks cool and the third book in your list the third book so the third book is sort of back to stuff that we know and have known for a long time um in a really interesting way so um this is a collaboration between um at least two of these folks you'll know james lewis and ian cartwright um and rob horn's the third guy who is newer to fort works so you probably haven't come across it and they're working on patterns of legacy displacement how do we displace legacy systems in a more sane way and of course a lot of this is about fighting against the oh well let's spend five years building a replacement system and then we'll switch it over because we know how well that works um and but the 
thing is, in all the years, and I've been in the software business for 30 years or so, during that whole time replacing legacy systems has been a major part of our work, and yet there's not been very much written about it, not much understanding of how to think about legacy replacement in an effective way. So what we're trying to do with this book project is capture that information, again in the form of patterns. These folks have done a lot of legacy replacement, because it's such a large part and parcel of our work, and they're trying to get that information down in place. I'm really keen on this project because I think it's going to help a great deal with our understanding of how we best go about this exercise, which is just not talked about enough in our industry. And yet it's such a central part; it's not as if it's going to stop. As I started to say, we're building tomorrow's legacy systems today, and if we can better understand how we displace them, then we can build better now. And of course the key to this is a gradual process: you don't displace a legacy system all in one go, you do it over time. It may involve creating transitional pieces of software that you know you're going to throw away in a year or two's time, but that will ease the process of the legacy change, because it's a constant process. It's early days; they dropped their first bunch of patterns onto the website a few months ago. At the moment we've got a second batch that's been sent out to our internal review list, the infamous Thoughtworks software dev mailing list, which everything has to go through, so again, probably by the time this goes out that second batch will be out there. I'm really hoping this can turn into an important book as well. I think they've got the knowledge,
they've got the desire to get this information out. The challenge, of course, as for any Thoughtworks consultant, is finding the time to sit and write, but we're working on that. I think this could be a really good book. Cool. And so you said there's not much written; the only one that I can think of in that space is Michael Feathers's book on working with legacy code. Is this a different kind of book to that? Well, I guess this is a slightly different kind of book, yeah. It operates at a higher level, because what we're talking about is when you've got a legacy system that is running a whole enterprise, how would you replace that? We're talking about components which are themselves sizable systems. I mean, LMAX would be one component, where you'd say, oh yeah, that's LMAX, that's a blob. How do you replace that blob, with all of these lines connecting to it, in a way that's not going to drive people insane? We're operating at that kind of level, so I think it's very much the enterprise-architecture complement to what's in the legacy code book, which is about how you take a particular individual system and replace bits of it. There's definitely going to be lots of overlap, I think, as I dig a bit deeper into it. In the first batch of patterns, the key central pattern was actually an anti-pattern, effectively, which is that of feature replication. What was the term they used? I'd have to look at it again to remind myself. Feature parity, right: where we say, oh, if we're going to replace the legacy system, let's build a new system that has feature parity with the old system. And I see the smile on your face; we know how that normally works out. So part of that pattern was to say don't do this, or at least
feature parity can work, but only in a very limited set of contexts, and that was one of the things we worked through in writing the pattern. I'm very wary of saying something is always wrong, but I am very conscious of saying that things are often used outside their context of applicability. Feature parity can work, but the context in which it does is so narrow that you've got to be aware of that, and realize that in most of the situations we run into, it's not the right context. So one of the first patterns they wanted was to say: this is where feature parity breaks down, and where you have to use some other kind of approach instead. Yeah, absolutely. Just one of the things that was going through my head when you were describing that: many, many years ago I worked on a system that was, I think, probably the third or fourth replacement of legacy systems in a car manufacturer. This was a car configuration system that described all of the different bits that came together to make up a car. We were writing this in Java, and the core model that we had to retain was fundamentally based on the 80 characters of a punched card. We couldn't get away from it, because everything we laid on top relied on this massively complicated, overblown, customized version of some kind of weird bitwise if-logic: if this bit and this bit are set it means this, but if this bit and this other bit are set it means this other thing entirely. It's just this overloading of information, and building and testing those sorts of things gets complicated quickly. But it's certainly an interesting problem, and definitely a place where there are lots of patterns to mine, I'm sure. Yeah. I'm going to again talk about one of the things that's currently in the review process. There's a pattern called divert the flow, and this is
a very interesting pattern, though the heart of it is actually in another pattern. There's a very common situation, and I've kind of implied it already, where your business management relies upon some system that aggregates information from all over the enterprise and pulls it together, and it's a critical system. In pattern terms we're calling it the critical aggregator. This is important because the key leadership in the organization are making decisions on a day-to-day basis based on this aggregation of data. Now, having a critical aggregator is itself not necessarily a bad thing; in fact it's usually a good thing, because you want something that pulls together critical information in order to make decisions. The problem is that in most legacy systems it's metastasized into this awful thing that reaches deep into the data structures of operational systems, and as a result you can't touch anything: I can't change these five tables over here, because I'm scared it's going to break the critical aggregator, and because it's the critical aggregator, it's critical, it has to keep running. So how do you deal with this? One pattern that they're writing up at the moment is called divert the flow, and divert the flow is kind of counterintuitive in a way, but it says the first thing you should replace in a legacy system is often that critical aggregator. Rebuild the critical aggregator and give it better interfaces, so that you can actually substitute the input points better. The alternative, when you're replacing the upstream systems, is basically creating a legacy mimic: something that looks like the old system so the aggregator can still work, because usually the problem with your legacy aggregator is that you can't replace its connections, they're so deeply entangled. So you have to pretend the old systems are still there with your new ones. Yeah. The
problem with a legacy mimic, though, is that it complicates the building of the new systems, because you've got this messier thing to deal with. It also means that if you've got opportunities to provide new information that would be really handy for the critical aggregator, you can't do it, because you can't go through that legacy connection. Whereas if you replace the critical aggregator first, it gives you greater safety in replacing other parts of the system, because you've got a much saner connection to the critical aggregator. And if you're now using some information that you didn't have before, you've got the opportunity to feed it in, because you can update the critical aggregator to use that new information. So even though it's counterintuitive, because you kind of feel, well, that's the critical aggregator, I don't want to replace it first because it's scary, often replacing it first can be the best route to go. That's the pattern they're referring to as divert the flow, and divert the flow is a metaphor that says if you want to replace a dam, the first thing you do is divert the flow of the river, so that you can work on the dam without it being affected by what's coming down from upstream. So, once again referring back to Eric Evans, you're going to use that and start to build anti-corruption layers to allow you to assemble the new critical aggregator. Yeah. So that's the kind of pattern work they're doing, which gives you a sense of the level they're operating at, and of how it relates to Michael Feathers's work. I'm very keen on seeing how this kind of stuff develops, because this is definitely stuff we've done many times with our clients over the years, and often the challenge is getting people to understand the trade-offs involved, because the trade-offs are often not straightforward, because
people come into this thinking, oh, let's just go for feature parity and move it to the cloud. And we go, no, that's probably not what you really want to do, and if it's really a legacy system you probably don't know what all the features are anyway, because all of the people have gone and it's poorly documented. Well, exactly, and of course often it's doing things that you don't want it to do. Indeed. So yeah, there's a lot of stuff around that, and I'm keen to see this one. So there you see the three books I'm mentoring; I'm probably giving you whiplash switching from one to another. I said in your intro that you had a broad perspective. But it's nice, because I don't have to be the expert on any of these topics; I'm relying on the others, who have actually got their fingers deep in the practicality of it. The reason they'd like progress to be faster on this legacy displacement book is, of course, that they're all billing to clients, doing exactly this stuff on a day-to-day basis. It's great to be able to tap into that, and my role is just, well, I've got enough knowledge to ask good questions and enough ignorance to have good questions, because you need a bit of both to be able to do that. And I find it a lot of fun, because I'm getting the chance to say, well, what do you mean by this, how is this working out? Really interesting conversations. I think that's one of my skills too: I need to dumb things down to understand them, so I can ask the dumb questions. Yeah. And what I also like is that these books are all very practical. You can really take away what they're saying and make use of it right away, which is something that's always been very important to me in my writing, whether it's the books or the website. I always want to say: what can people do with this on
Monday? They've read it, they've studied it, now they've got to take it away and use it. Absolutely. So yeah, those sound fascinating; I'm looking forward to all of them. I don't do much of the legacy stuff, the legacy integration and legacy refactoring stuff, these days, except in the context of trying to make things work with continuous delivery, but I think there might still be some useful patterns in there for me. Oh yeah. So in addition to this, I remember a year or two ago you also did an awful lot of work on assembling patterns around different branching strategies, and that's a topic I have some interest in, and get some argumentative feedback on, on this channel from time to time. One of the things I'd like to talk to you about, in a slightly different way rather than just going through the patterns themselves: I think I might be a bit more opinionated on this topic than you, and you've already touched on your approach to these things, that you don't like to say that people are wrong, you want to talk about these ideas in a broad context. Where do you see the balance between, for want of a better term, cataloging patterns and making recommendations? Because at some point you're going to be exercising your critical faculties in deciding what's a matter of advice, and you don't want to be advising people to do things that you think are really bad ideas. So how do you figure out where to draw that line? Is it that there's got to be some context in which you can picture that it works? How do you figure that out? Well, part of it for me is a sense of sympathy with where people are and have been. One of the big non-software things I do is read a lot of history,
and I've always maintained that if you want to understand why things are the way they are, it's good to understand the history. I remember when I wrote UML Distilled, I had a little bit at the beginning of the book which was: here's the history of object modeling. Because in order to understand the UML you had to understand the history that led to it; otherwise all sorts of decisions didn't make sense, but when you saw the history you'd go, oh, I see, now that makes sense. And it's like that with legacy systems. People did this for a reason; it wasn't necessarily because they were being stupid. They often had good reasons, but circumstances have changed since. So if we can understand and have sympathy for where they were... One of my colleagues, I can't remember who it was, said we've got to think of it in terms of compassionate coding: have compassion for where people were as they made their choices, and then we can get a better sense of where the trade-offs come. Part of my thinking here is that when we see patterns out there that look awful, like the critical aggregator, there's often something good there; it's just been taken out of context, or been implemented badly, or something of that kind. And this is particularly the case when it comes to branching. You and I both battle against the pervasive use of large-scale feature branching and pull requests and the like, but it actually comes from a good place. If you're running an open source project, the feature branch model, where you've got people contributing to the project who you don't know terribly well, who are operating on a very low duty cycle, in the sense that they're not working full time on your project, they're working
maybe half a day a week or something: in that kind of world, feature branching makes oodles of sense. It's an extremely effective strategy. But this is why patterns are interesting, because they talk about context. A pattern that's a good thing to use in one context becomes questionable when you move it to another. You take the pattern of feature branching and move it to the context of a team that's working full time, where there's half a dozen or a dozen of you, and suddenly your context is different. Suddenly feature branching becomes less appealing, as you begin to realize: oh, now I get slowed down by feature branching, because I can't refactor effectively. And particularly when you begin to employ other techniques, such as self-testing code, that allow you to be much more confident about detecting breakages than you would be otherwise, suddenly other patterns become appealing. A large part of me writing the branching patterns was because I understand the value of feature branching in the context it comes from, and I wanted to ask: why is it that there's this sadly minority, it seems, of people who are saying no, no, no, we've got this better tool called continuous integration, and we should be doing that instead in many of the contexts we find ourselves in? I wanted to really explore that without necessarily saying feature branching is evil and we should never use it, because it's not that situation; it's a situation of understanding your context and your trade-offs. At the heart of it, to my mind, is this issue of integration frequency. If you can increase the integration frequency you get a huge amount of benefit, because you're able to refactor more frequently and therefore keep your code in a healthier state. So how do you improve your integration frequency? Well, you've got to integrate more often, obviously.
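The trade-off Martin describes, that long-lived branches make integration riskier while frequent integration keeps divergence small, can be sketched with a toy simulation. Everything here is an illustrative assumption, not from the conversation: conflicts are modeled crudely as two branches having touched any common file.

```python
import random

def conflict_probability(commits_per_branch, n_files=50, trials=2000, seed=42):
    """Toy model: two developers branch from the same point and each make
    `commits_per_branch` commits, each touching one random file out of
    `n_files`. A merge 'conflicts' if both branches touched any common file.
    Returns the fraction of trials that conflicted."""
    rng = random.Random(seed)
    conflicts = 0
    for _ in range(trials):
        touched_a = {rng.randrange(n_files) for _ in range(commits_per_branch)}
        touched_b = {rng.randrange(n_files) for _ in range(commits_per_branch)}
        if touched_a & touched_b:  # any shared file counts as a conflict
            conflicts += 1
    return conflicts / trials

# Integrating after a couple of commits vs. after a long-lived feature branch:
print(conflict_probability(2))   # stays low
print(conflict_probability(20))  # approaches certainty
```

The numbers are made up, but the shape is the point: the chance of a painful merge grows rapidly with how long you let branches diverge, which is the arithmetic behind "integrate more often."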
Yeah. Feature branching has this notion of saying I can only integrate once my feature is complete, so the only way you can improve your frequency is by making smaller features. Well, actually, that's a good thing, we like small features, but you often can't get them small enough, and it takes real effort to get them down to the level you can achieve with continuous integration, where you can be integrating many times a day. That requires a mental shift that says: I don't integrate when my feature is complete, I integrate when I'm in a healthy enough state to integrate. And I can make progress keeping my code healthy pretty much all the time; when I learn how to do that, I can easily integrate half a dozen, a dozen times a day. Once I can do that, all sorts of benefits of high-frequency integration come to me. But I have to switch my mental model from saying I integrate when my feature is complete to saying I integrate when I've got a stable build that I can integrate with, and I want to have that stable build all the time. Yeah, the speed of feedback, as I know you would agree, is absolutely critical to being able to get that clarity of picture frequently, and I like the way you describe that. I don't think of myself as a dogmatic person, and I think I'd think of things in the same way: it's about the context. One of the things that sometimes frustrates me is when people apply any pattern, really, blindly, without reading the bit of the pattern where it says use this in this context, or this solves this problem, and it might not be the problem that you have. I see that misapplication a lot, because we just seem to work on fashion sometimes. Yeah. There's a lovely quote, which I include in the article, from a tweet by Camille
Fournier, which says: conflating open source software and private software development team needs is the original sin of current software development rituals. And I think that captures it perfectly. What works in an open source environment isn't the same as what works in a private software team; it's a different kind of culture, and you apply different kinds of rules. This goes right back to my first pattern writing, when I wrote the Patterns of Enterprise Application Architecture book. That was partly because I was ticked off by people saying there's one true architecture for all enterprise systems, and we were saying no, enterprise systems are different; not everybody has the same problem, so you have to pick a different set of patterns. The nice thing about patterns is they lend themselves to that line. Sometimes the trade-offs aren't completely clear, you've got a lot of grey space and blurry lines between things, and that's okay, but you have to understand: here are the contexts where different techniques apply; pick the ones that work well in your context and fit together with each other. Absolutely. That gets me on to another thing I wanted to ask you about. If I may quote you back at you from your website, it says: if there's a theme that runs through my work and writing on this site, it's the interplay between the shift towards agile thinking and the technical patterns and practices that make agile software development practical. While the specifics of technology change rapidly in our profession, fundamental practices and patterns are more stable. I'm really interested in those kinds of deeper insights too, in terms of what the durable ideas are. It seems to me that often we as software developers get a bit obsessed by things that are really more ephemeral than we believe them to be, and that the real value
is in some other things. Could you expand a little on that idea? What do you see as some of the underlying practices and patterns that might be more generally applicable? I perfectly accept they may not be globally applicable, there are always exceptions, but what are the principles? What are the things that are likely to end up with a higher chance of success, and less likely to end up as tomorrow's legacy system? Well, that's what my whole writing is about, right? It's trying to identify those. Some of them are quite broad and some quite narrow. Take self-testing code, for instance. I like to use the term self-testing code as opposed to test-driven development, because test-driven development is a technique I love, I really like using it a lot, but to me the core thing that I want is this self-testing code capability: I want to be able to throw a command at the system and say, test yourself, and if it comes back green, I know, okay, the change I just made didn't break anything. Now, if I build the system using test-driven development I'll get there, and test-driven development will also help me with the design process as well, so it's a great technique. But the key output to me is having that self-testing code. There are other ways to get it as well, though most of them are not as good as TDD; it's the self-testing code that's the key. For me that's a great technique that I can use almost anywhere, and I've used it in Smalltalk, in Java, in JavaScript, all sorts of different languages. You have to make sure you get the tools to help you do that, and fortunately, particularly after the rise of JUnit, people realized that it actually wasn't that difficult to build tools that would help you build self-testing code.
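Martin's "throw a command at the system and say, test yourself" can be sketched in miniature with Python's built-in unittest (the interview's own examples are JUnit and its Smalltalk ancestor, which work the same way). The `Money` class here is purely an illustrative stand-in, not anything from the conversation:

```python
import unittest

# An illustrative domain object; the point is the self-testing harness around it.
class Money:
    def __init__(self, amount, currency):
        self.amount = amount
        self.currency = currency

    def add(self, other):
        if self.currency != other.currency:
            raise ValueError("currency mismatch")
        return Money(self.amount + other.amount, self.currency)

class MoneyTest(unittest.TestCase):
    def test_add_same_currency(self):
        total = Money(2, "GBP").add(Money(3, "GBP"))
        self.assertEqual(5, total.amount)

    def test_add_rejects_mixed_currencies(self):
        with self.assertRaises(ValueError):
            Money(2, "GBP").add(Money(3, "USD"))

if __name__ == "__main__":
    # The single "test yourself" command: green means the change is safe.
    unittest.main(exit=False)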
And so you see JUnit ports and clones all over the place, and of course JUnit itself was a port of an original Smalltalk library. So that's an example of that kind of fundamental notion: if you understand the importance of self-testing code, whenever you go to a new environment the first thing you want to do is figure out how to get your self-testing setup going. How do I get to the situation where I get that lovely green bar that tells me, okay, the change I just made didn't break anything? That's an example of one of those fundamental principles that, to me, transcends the technology you're working in. And when you're in a technology where it's difficult, as UI technology often is, then another principle comes to mind, which is the humble object. That says: if I've got something that's difficult to test, let's get every piece of behavior I possibly can out of that object into a separate object that I can test easily, and then I can relax with my green bar again. So that's another technique. You use it with UI technology, but often also with distributed systems and remote interactions: you immediately say, okay, I want to make a really simple gateway object that doesn't do very much, so that I can test everything else and keep everything else under test. So again, the humble object is a great basic idea that, once you know it, you can use in a whole host of different places. Those are the things I'm after, whether they're very big in scope, like self-testing code, or very small, like the humble object, which is just a simple piece of how you get that kind of thing to work. Yeah, absolutely. I tend to use the term TDD, but self-testing code, I think, has deeper implications than people who don't practice it realize. As you say, you get that surety that, yes, my code does what I think it does, but it also
gives me the shortest route I know to getting feedback on the quality of my design. If your test is hard to write, your design is bad; it's not the test's fault, you've got a bad design, so change the design. Well, exactly; the beauty of TDD for me is that it forces you to think hard about interfaces, and we know that getting good interfaces is such a key part of getting a well-structured system, because if you can get your interfaces working well, that makes your code clear and understandable, and makes it easier to change. I can deal with a certain amount of mess in the implementation details, I'd rather not, but I can deal with it if it's contained and encapsulated behind good, clear interfaces. But it's hard to come up with a good interface; it's hard to learn how to do that as a developer, and TDD's great strength is that it forces you to think about the interface and test it. If you'll forgive me advertising my book for a minute: I went through an exercise where I wanted to demonstrate some unpleasant code, some problems in code, and so I started writing this example we were talking about earlier on. I started where I always start, doing test-driven development, and I couldn't write code as bad as I wanted it to be and do test-driven development, so I had to stop writing the tests and just write the code. I felt like I'd time-traveled back 20 or 30 years; it was just crazy. So I think that's a deeply undervalued practice. It's one of those things where, when the light bulb first went off over my head and I first practiced test-driven development, I thought, oh, this is going to take over the world. I was wrong twice: I thought that object orientation was going to take over the world
when the light bulb went on over my head when I learned that too, and neither of those has quite taken over the world in the way I expected. But I still don't want to ever again work in a team that doesn't do test-driven development day to day, because I think it's by far the most effective practice that's surfaced during my time as a software developer. Yeah, I'm with you. If I was doing real work again, having to write software on a team for a living, I would definitely use TDD. The software I do write is just the toolchain for my website, and I don't use as much TDD there as you might expect, because it's just producing output. Half the time I don't have an expected value, because I'm writing the code and looking at the screen saying, is that okay? And I also have the perfect regression test suite, because I just build the entire website and diff it against one that I know is good, which is a very crude regression test, but it works very well. But occasionally I'll come into situations where there's some more complicated behavior involved, and then yes, I do have a bunch of unit tests to handle those, and I sometimes use TDD to write them. Again, it's knowing when to apply it, in the right kind of circumstance; for most of the kind of commercial software that we work on at Thoughtworks, TDD is essential. One of the nice things is that I'm living in an organization where things like TDD and continuous integration are seen as the normal way of practice, and that's down not so much to me but to all of the leaders, people like Brandon, people like Erik Dörnenburg, people like Unmesh, who carry that leadership through across the organization, because they've come to the same conclusion that you and I have: that's how we find ourselves most effective. And we
just have to see how that spreads across the rest of the industry i mean it in some ways it's moved faster than i thought it would actually um because i know how long it takes for ideas to propagate so yes it's depressing because he'd like to think that he could have been a bit more widely used and particularly the way in which so many people are taking on agile and completely forgetting these technical practices actually are the underpinnings to make it work effectively um but um but as i said i mean these things do take a long time to to work through particularly when we're in a profession that can't really measure our output and productivity effectively and when you can't do that it makes it much harder to to to effectively use the scientific method on ourselves because you know if you can't measure your outputs then it's very hard to tell whether one thing's better than another well that that's that so so so i i i thought that for a long time um but i i was i was somewhat impressed by the dora metrics oh yeah and the use of stability and throughput as measures of efficiency and quality what what's your view on those oh i'm a big fan of the dora stuff i mean that's why i wrote look forward to uh nicole and jess's and jean's book um i i was i'm often very skeptical about these kind of about a lot of the kind of measurement studies that i see in academia um because i because again it comes down to this how do you measure your output and are you even measuring the right thing if someone says oh yes we get more function points done than somebody else i go well yes what does that mean um hey are you consistent able to consistently measure these function points of which you speak and b does it actually matter because actually i would rather write software with less function points that provides more value to the users and allows them to get their job done better that's what matters um so so many things have fallen apart based on that what i liked about what dora was 
doing was that it really looked to correlate software development activities with business outcomes. Yes. And that was one of the fundamental ties: if we can correlate organizations that do well at the business level with some of the software practices that they follow, then we feel we've got something that has real value. Yeah. When I very first came across the DORA reports, I remember reading one and thinking, this feels like complete bullshit; but this is Jez, and Jez doesn't do bullshit, so I need to talk to Jez and he needs to explain to me why this stuff isn't as bullshitty as it looks. And his basic answer was, "well, I could do, but you'd better talk to Nicole, because she's the real mind behind this." We got on a call, and she went through outlining the kinds of techniques she was using, and although I don't understand the details of the techniques, I got enough of an impression to convince me that this was actually kosher. At the heart of it, as I said, is this connection of business performance correlating to the technical techniques, and then looking at how that correlation operates, to begin to say, okay, there's actually some causation involved in this. Yeah. And then that led to many things. In fact, we kind of joke that I was having dinner with Nicole and said, "come on, if you don't write this stuff up, I'm going to write it up," and that kind of teased her into writing it up; if that story is true, I'm happy to take the credit, because it led to the Accelerate book, which I think is such an important book. Yes, because it really does demonstrate, very solidly, for the kinds of delivery practices that we argue for, really good solid data that says they are effective and people should be using them. Yeah, and profoundly so. I think one of the things that trips us up as an industry very often is that our discipline is one that
probably appeals to people with a technical mindset, probably a science background of some kind, and a bent towards mathematics; those seem to me the kinds of minds that enjoy solving problems in software. And the problem there is that I think we often look for too much precision, when what we're talking about here is sociology, and sociology is not the same as physics. It's not hard maths; you can't prove it; it's hard to carry out experiments with genuine controls; so you have to use different techniques. But Nicole has applied those techniques diligently. This is genuine science, at least at the level of sociology. And she has come up with this predictive model, this correlative model, of the ways in which certain behaviors lead to certain outcomes, certain outcomes are predicted by those behaviors, and so on. And that's a tool that we haven't had before, really, with the same level of rigor. Yeah. It's one of the few cases where I've looked at something and it hasn't fallen apart within five minutes of serious examination. I'm a big fan of that work, and very keen to see the further stuff that she continues to go out and do; really impressive stuff. I think the measures aren't perfect; you could argue the chosen measures suit the outcome a little bit, but they're the best we've got so far. And I agree with you, I think it's a deeply important book, and we certainly push our clients to pay attention to the four key metrics. Yeah. And they aren't perfect, partly because they focus on the delivery part, which is, once you've written the code, getting it into production; and of course there's a whole before part of that that we've got to look at. But the core idea of small cycles, rapid feedback, lots of small steps, is at the heart of this. Absolutely. That's always been something that
we've been big fans of, of course: do small things, set up feedback loops, and operate from there. I came up with a nice analogy recently; well, I actually wrote it in my book, but it's about the importance of feedback, really, and the way that it drives nearly everything. If we're tasked with balancing a broom, we could calculate the center of mass of the broom and of the little dome on the handle, and precisely position it so that the line from the center of mass passes through the point of contact on the table and there's no impulse on the broom; and there's essentially one answer to that problem. Or we can put the broom on our hand and move our hand around, and the rate at which we gather feedback and react means that we can make either really small moves or big moves. That's how space rockets work. This is not lower quality; this is the most effective strategy for dealing with change in problems, and optimizing for that fast feedback seems to me fundamental and essential. It's the tool that I use with my clients to guide them towards better practice, and it works. Yeah, this is the metaphor that the Pragmatic Programmers, Dave Thomas and Andy Hunt, came up with when they talked about tracer bullets: if you want to hit a target, you can carefully calculate absolutely everything to place your shot exactly, but often the best way is to just fire a lot of shots and have tracer bullets that show what's going on, and you'll get there quicker. Yeah, as long as your bullets are cheap, that works. And of course, there again, we talk about context; different contexts require different approaches. But in software, our bullets are cheap: it's very easy to come up with new versions of software, we can write software really quickly, so we can afford to go in that kind of direction. Now, not all software is in that
case; there are certain situations, in safety-critical software, or software that can't easily be updated, like software you put in a space probe that's going off to the outer planets of the solar system, where continuous delivery gets a bit tricky and you have to do something different. That's one of the areas where I slightly disagree, in that I think working so that your software is always releasable applies even if you can't release to your space probe. Oh yeah, absolutely: deployment. Yeah, but the point is, you'll understand that when I'm talking about tracer bullets in that context, I'm looking very broadly at getting into production, and getting the feedback loop from there, because with commercial systems that's what you want: you want to get your stuff out there, people using it, telling you why it's not effective, either directly or indirectly, and then modifying and improving. You can't do that with a space probe, because the feedback loops can't be set up to be fast enough. Yeah: deployment. But that doesn't stop you doing continuous delivery, as you point out, because as you're building something you can still set up a test environment and go through that same process, and do a lot in simulation. Yes. I'm working with several suppliers of medical devices of various kinds at the moment, which are safety-critical, and it's great when you can shorten the timescale into production, because that reduces the delta, and therefore the risk, of the change that's going into production. But you get that feedback however you can get it, and then pragmatically you just get as close to as frequent as possible. It's one of my criticisms of the DORA metrics that high release frequency is not always possible in all contexts, and so that kind of compromises them a little bit. But that's the importance of the simulator, right:
and people shy away from writing a simulator because, you know, it's not strictly necessary; it's not part of your final deliverable. But a simulator can be a hugely valuable thing to help you get to where you're going. I mean, if you're building some hardware, and the hardware is not going to be available until, say, a year's time, writing your code in the hope that it's going to fit the hardware is questionable. But if you build a simulator of that hardware, then when you actually have to fit the code to the hardware, what you're looking at is the variances between the simulator and that hardware; and if you've built the simulator to the same specs, hopefully that variance can be fairly small. When you get integration problems, you've got a better chance of figuring out what's going on, because you can say either my simulator assumptions were wrong, or I've got a bug that I can replicate in the simulator. Yeah. I saw a great example recently: Tesla apparently upped the maximum charging rate for their Model 3 cars, and they did that in, I think it was, three hours. The maximum charging rate of the Model 3 went up from 200 kilowatts to 250, and they did that in three hours, because the car's software is test-driven: they ran the change through the simulator, it passed, and so it went onto the production line, in the car, three hours later. Yep. So I think that when we're talking about continuous delivery, we're really talking about genuine software engineering principles, and I think that's why this stuff works. Yeah, absolutely. I think that our time is up, and I would like to thank you for a fascinating conversation and exploration of all of these topics, and thank you for agreeing to join us today. Let me also say thank you to our viewers: as I said, this is the first in a short series in the lead-up to
the end of the year; we'll be releasing these weekly. Martin is the first guest, and as you can see, he has deep, broad insights into the computer industry. I hope you've enjoyed listening as much as I've enjoyed the conversation. If you have any observations on any of the ideas that we've talked about, just add them to the comments below. Thank you. And also to the viewers: if you enjoyed this, do hit the thumbs-up button at the bottom and indicate your appreciation. I won't do my heavy cardboard routine; it's probably not appropriate for a professional situation. But hit the thumbs-up button, say you liked it, subscribe to the channel, and enjoy more of what Dave produces. Thank you, Martin.
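An aside for readers: the website regression check Fowler describes, building the entire site and diffing it against a copy known to be good, is a classic golden-master (approval) test. Here is a minimal sketch of that idea in Python; the directory layout and function names are invented for illustration, not taken from his actual toolchain.

```python
# Golden-master regression check: compare a freshly built output tree
# against a known-good tree, reporting any file that differs or that
# exists in only one of the two trees.
import os


def tree_diff(known_good: str, candidate: str) -> list:
    """Return sorted relative paths that differ between two directory trees."""

    def snapshot(root):
        # Map each file's path (relative to root) to its raw bytes.
        files = {}
        for dirpath, _dirs, names in os.walk(root):
            for name in names:
                full = os.path.join(dirpath, name)
                with open(full, "rb") as f:
                    files[os.path.relpath(full, root)] = f.read()
        return files

    good, cand = snapshot(known_good), snapshot(candidate)
    # Files present in both trees but with different contents...
    changed = {p for p in good.keys() & cand.keys() if good[p] != cand[p]}
    # ...plus files present in only one tree.
    return sorted(changed | (good.keys() ^ cand.keys()))
```

The point is exactly the crudeness he mentions: there are no per-case expected values, just one question, "did anything change that I didn't intend?"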
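For reference, the four key metrics from the DORA research discussed above are deployment frequency, lead time for changes, change failure rate, and time to restore service. A rough sketch of computing them from deployment records follows; the `Deploy` record shape is made up for illustration and is not any real tool's schema.

```python
# Sketch: derive the four key (DORA) metrics from a list of deployments.
from dataclasses import dataclass
from datetime import datetime, timedelta
from statistics import median
from typing import Optional


@dataclass
class Deploy:
    committed: datetime                  # when the change was committed
    deployed: datetime                   # when it reached production
    failed: bool = False                 # did it cause a production incident?
    restored: Optional[datetime] = None  # when service came back, if it failed


def four_key_metrics(deploys, days):
    """Summarize a window of `days` days containing the given deploys."""
    failures = [d for d in deploys if d.failed]
    return {
        "deploy_frequency_per_day": len(deploys) / days,
        "median_lead_time": median(d.deployed - d.committed for d in deploys),
        "change_failure_rate": len(failures) / len(deploys),
        "median_time_to_restore": (
            median(d.restored - d.deployed for d in failures)
            if failures else timedelta(0)
        ),
    }
```

Note that the first two measure throughput and the last two measure stability, which is the pairing referred to in the conversation.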
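The hardware-simulator idea discussed above amounts to writing application code against a small hardware interface, with a software stand-in until the real device exists. The heater/thermostat example below, including its crude thermal model and all its names, is invented purely to illustrate the shape of such a setup.

```python
# Sketch: application logic targets an abstract device; a simulator
# implements the same interface so the logic can be exercised long
# before real hardware is available.
from abc import ABC, abstractmethod


class Heater(ABC):
    @abstractmethod
    def set_power(self, watts: float) -> None: ...

    @abstractmethod
    def read_temperature(self) -> float: ...


class SimulatedHeater(Heater):
    """Crude thermal model: power raises temperature, which decays to ambient."""

    def __init__(self, ambient: float = 20.0):
        self.ambient = ambient
        self.temp = ambient
        self.watts = 0.0

    def set_power(self, watts: float) -> None:
        self.watts = watts

    def read_temperature(self) -> float:
        return self.temp

    def tick(self) -> None:
        # One simulation step: heating minus leakage toward ambient.
        self.temp += 0.01 * self.watts - 0.1 * (self.temp - self.ambient)


def thermostat_step(heater: Heater, target: float) -> None:
    """Bang-bang control: full power below the target, off at or above it."""
    heater.set_power(1000.0 if heater.read_temperature() < target else 0.0)
```

When the real device arrives, the same `thermostat_step` runs against a concrete `Heater`, and any surprises narrow down to the variance between simulator and hardware, which is the diagnostic benefit described in the conversation.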
Info
Channel: Continuous Delivery
Views: 102,110
Keywords: Martin Fowler, martin fowler interview, martin fowler refactoring, martin fowler agile, martin fowler microservices, distributed systems, patterns of distributed systems, software podcast, software developer podcast, software engineering, continuous delivery, devops, Dave Farley, software development, the engineering room, engineering room, computer science, thoughtworks
Id: 0TwoubGSXpc
Length: 79min 34sec (4774 seconds)
Published: Sun Dec 05 2021