TDD: The Bad Parts — Matt Parker

Captions
Thanks, everyone, for coming. I know there are lots of talks, so you could be seeing anything at any given moment. You definitely made the right choice; this is the talk you should have come to.

This is a talk about TDD, and it'll make the most sense if you've already been practicing TDD for a little while. So I'd like to get a show of hands: who here has attempted TDD at some point in their life? Great. Now, who here has done TDD for a year? Okay. Three years? Five years? Ten years? Okay, so Joe Moore is the only person with his hand still raised. He's the guy who stood on stage and bragged about how he's pair programmed for thirty thousand hours.

This talk is called "TDD: The Bad Parts," and I really do want to talk about some problems with TDD. But I also like this name because I'm pretty sure it's half the reason all of you came to this room: it's a provocative title. I work for Pivotal, so it may sound odd that somebody from Pivotal is talking about the bad things that go along with TDD. But TDD is a practice like any other practice: there are pros and cons, there are trade-offs, and there are things we've learned along the way about how to manage it.

So I want to tell you a cautionary tale. I started at Pivotal about four and a half years ago, having done TDD for a couple of years at that point; I'd learned it at a previous job, to some extent. Then I came to Pivotal, worked with all these amazing, passionate people who are really good at TDD, and learned a lot from them very quickly. I want to tell you a story about one of the first projects I was ever on.

There was a startup founder who had an idea, and the idea was beautiful, a really good idea. The founder didn't know anything about building software, and that's okay, because they had money, and they used that money to hire an agency: a little boutique shop to start building the software for them. Somebody they knew had told them, "Hey, if you're going to build software, make sure you're building it with people who do agile software development. That way you'll get to see demos, you'll be able to react to them, and you can change the product as you go." Great, let's do that. So they found this boutique agency, the agency assigned a developer to the project, and they built something very quickly. The startup founder saw a demo, was really impressed by how quickly it came, gave feedback, and the developers started tweaking it. Great: let's just sit back, let the demos roll in, we'll give feedback, and pretty soon we're going to be super successful.

But then the demos started happening less and less frequently. At first it was every week or so, with a lot of changes between each demo. Then it slowed down, and it seemed like it would be every few weeks before they finally got something, and the amount of change between demonstrations got smaller and smaller over time. What was going on? It was confusing. This went on for about a year, and eventually the startup founder said, "Okay, I have another idea. I still have money, and I'm going to go talk to this company, Pivotal Labs, because they might be able to help me"; they'd gotten a recommendation from somebody. And we came in and said, "Sure, let's help you figure out what's been going wrong and see if we can help you course-correct."
So we got access to the code base and started running the tests, because this agency had actually built tests for the software. Great, let's see what happens. So we started running the tests. Minutes went by. Several minutes. Then about an hour went by, and they were still running, and they just kept running and running. It got to six o'clock and we thought, okay, it's time to go home; what do we do? The tests were still running on this development workstation, so we just went home. What could go wrong?

We came in the next day, and the computer had caught on fire. It had not actually caught on fire, but it had run out of memory and totally crashed. Well, that's really bad. We were able, almost miraculously, to quickly figure out what the memory leak was; it was actually something very trivial. We fixed it, reran the test suite, and the same kind of thing happened: it took a long time to run, we got to the end of the day and it was still going, so we crossed our fingers and came in the next day. This time the test suite had finished, and it was mostly red. Very, very few passing tests. Not what you want. This had been going on for a year: they had built a test suite with thousands of failing tests that took over a day to run. What went wrong? This just blew my mind when I saw it. We'll come back to this story at the end of the talk.

Before we can talk about some of the things that went wrong, we have to make sure we're clear on why we do TDD in the first place, because that helps us understand which TDD practices are good and bad, and what we can do about them. At Pivotal Labs we've been doing TDD for a long time now. Our founder, Rob Mee, started doing it with Kent Beck back in the late '90s and turned the Pivotal Labs consulting company into one of the first extreme programming consultancies out there. So for at least twenty years now we've been actively practicing TDD and honing that craft. But we don't do it "just because", just because Rob said so; we do it because it's been proven over time. So what is it actually enabling us to do?

Our goal is not TDD. TDD is a means to an end, and the actual end we want is to go fast forever. We want software teams that can start building software rapidly on day one, put it in front of users, get feedback, do something with that feedback, adjust the software, put it back in front of users, over and over again, and do that indefinitely. We never want what happened to this client to happen to one of our teams.

The one thing you need if you're going to go fast forever is clean code, because bad code slows developers down. Who here has been slowed down by bad code before? Everyone, right? It doesn't matter how good you are. Put the best engineers in the world on a code base full of bad code and they will not go very fast. In fact, if they're really good engineers, they'll go even slower than others, because they'll spend a lot of time cleaning it up, at least at first. But that's okay. So if you want to go fast forever, you have to have clean code. How do we get that? Cleaning your code is like taking a bath: it's not something you do just once; it's something you do continuously, like daily hygiene. You do it through a process called refactoring.

Who here practices refactoring? Cool, this is a good audience. Refactoring is the process of keeping your code clean at any given time: I'm going to hold the behavior constant, make sure the code keeps doing what it currently does, but fix the code itself, because it's messy in certain places. So the real question is: why don't more teams refactor their code? Why don't they always keep it clean? It's because a lot of teams become afraid: afraid of what might happen if they change the code in certain ways, not quite sure whether everything will still work. If you're going to refactor your code continuously, you need confidence, a lot of confidence actually, that all the changes you're making still result in working software you can put back in front of users. Sadly, there's no magic button you can press to get that confidence. And shipping your software to a QA team to manually run a bunch of QA tests is not a great way of getting it, because it takes so long to get that feedback; it's a super slow feedback loop, and we want a super fast one. The way you get that is by writing tests. This is the real reason you write tests: you want the confidence to refactor your code, keep it clean, and go fast forever. Everything eventually depends on tests, and the fact that everything depends on them tells you they're the most important thing, which is why you write them first. That's TDD: test-driven development is the process of first writing the test and then writing the production code. Anyway, that's why we do TDD at Pivotal, and that's what we help our clients understand.
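To make that test-first loop concrete, here is a minimal sketch of one red-green-refactor pass in Java with JUnit 4. This is illustrative code, not code from the talk; the pig latin Translator is a hypothetical running example.

```java
import static org.junit.Assert.assertEquals;

import org.junit.Test;

// Red: this test is written first, and it fails -- Translator doesn't exist yet.
public class TranslatorTest {
    @Test
    public void translatesAWordIntoPigLatin() {
        assertEquals("ellohay", new Translator().translate("hello"));
    }
}

// Green: the simplest production code that makes the test pass.
class Translator {
    String translate(String word) {
        return word.substring(1) + word.charAt(0) + "ay";
    }
}

// Refactor: with the test green, rename and restructure freely, rerunning
// the test after every small change; the test itself never changes.
```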
All right, so let's start talking about some problems we've run into and seen in the wild.

Problem one: outside-in BDD. Who here practices outside-in BDD? Cool. It's not actually a problem in itself, but if misapplied, it's pretty problematic. Let me show you an example of what it might look like. It often starts with writing a failing feature test: an acceptance test that treats the system as a black box, from the outside looking in, pokes and prods it in certain ways, and then checks whether it spits certain things out or puts the application into some new state. From there you start diving lower and writing lower-level tests. Those start failing, and maybe they lead you to write an even lower-level test, until eventually you reach the bottom of the stack, where you finally get a test to pass, and then you start working your way back up. So even though it looks like just two circles, an outer loop and an inner loop, it's actually a lot of circles.
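As a sketch of that outer loop (my illustration, not a slide from the talk): a black-box feature test that stays red while the inner red-green-refactor cycle, like the TranslatorTest above, drives out the pieces. PigLatinApp and handleRequest are hypothetical names; in a web application this outer test might drive HTTP instead.

```java
import static org.junit.Assert.assertEquals;

import org.junit.Test;

// Outer loop: exercises the assembled system as a black box. This test is
// written first and stays red until a whole slice of behavior works end to
// end; the inner unit-test loop turns many times underneath it.
public class TranslateWordFeatureTest {
    @Test
    public void aUserCanTranslateAWordIntoPigLatin() {
        PigLatinApp app = new PigLatinApp();
        assertEquals("ellohay", app.handleRequest("translate hello"));
    }
}

// A trivial assembled application for the sketch; a real project would wire
// up controllers, services, and so on. Delegates to the Translator from the
// earlier sketch.
class PigLatinApp {
    String handleRequest(String request) {
        String word = request.substring("translate ".length());
        return new Translator().translate(word);
    }
}
```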
Sometimes, though, when you see teams try to apply BDD, they do something like this. First, write the acceptance test we just talked about; great, it doesn't pass. Now let's go write our controller test. Great, that doesn't pass. Now let's go write our model tests against the database models we're interacting with. Okay, now those are working; now let's go back out and get the controller test working; now let's go back out and make sure the acceptance test works. And they do this over and over and over again.

When you look at the test suites they've created, one thing you can almost always observe is that these test suites are slow, and they get slower and slower over time. They're often very brittle, such that if you make one change, lots of different things start breaking, and you feel like you have to massage a bunch of stuff together just to keep it all working at the same time. When you look at the code bases, the actual production code, not the test suite, they're often very tightly coupled: the high-level policy and the low-level detail are mushed together, everything is hyper-integrated, and that makes the code base hard to maintain over time. The test suites are often flaky, too, when they over-rely on this kind of acceptance testing. A flaky test, if you don't know the term, is a test that passes one second, fails the next, and passes again the second after that: it's intermittently red for bad reasons, not for real ones.

There's this thing called the testing pyramid, a classic ideal that many teams strive for. The idea is that you have very few high-level feature tests at the top of the pyramid, a somewhat larger number of service-level or integration tests in between, where you integrate a few things together, and then a large number of unit tests that comprise the bulk of your test suite and the bulk of where your confidence comes from. But when you see teams misapply BDD, they often end up with testing pyramids that are upside down: almost everything is tested through the browser, or the simulator, or whatever kind of system they're building, and they have very few actual unit tests.

So the real question becomes: what does "outside" mean? Who here has heard of Cucumber? A lot of people. One of the early and influential builders of Cucumber, a guy named Joseph Wilk, noticed this problem at some point, and he said it's a shame that people have conflated "outside" with GUI, or "outside" with user interface, because what was actually meant by outside-in behavior-driven development was: start at the outside of what you want to discover. And it's not often the user interface that needs discovery; that's usually much better understood. It's the business rules that you need to tease out and work through. Anyone who's done software development for very long knows that it is in the doing of the work that you discover the work that must be done. As you get in there and start trying to express the logic in those business rules, and you have this back and forth with your product managers and your business analysts and people outside your team and your users, that's when you discover all the interesting stuff, all the stuff that's unique to your domain. So one important takeaway: don't conflate "outside" with GUI. That doesn't mean it's never the GUI you want to start at, but it shouldn't be the default.
There's an almost opposite problem that we've seen sometimes. Some teams exhibit the first problem, where they've misapplied outside-in BDD; other teams exhibit another problem, where they've misapplied mocking. Who here knows what mocking means? Perfect.

Let's look at a bit of code. If you've done some Spring, this should be really obvious: we're inside some kind of controller, we have a show method, and imagine we have some translation service that translates incoming words into pig latin. So we have a very simple method that basically says: whatever string the service gives back, make it the response body. Teams that have misapplied mocking might create tests that look like this: they start by creating a mock translation service, then they mock methods on that translation service, then they spin the controller up inside MockMvc, run the request against MockMvc, and expect that the response ends up being the very thing they mocked out. Teams that have misapplied mocking, that have cranked that knob up too far, do this at all levels of their testing: wherever we are, let's just mock everything out. On the one hand, their tests are incredibly fast, and that's really cool. On the other hand, their tests are almost entirely meaningless: they don't actually test behavior, they test implementation. And if the implementation needs to change, even in ways that wouldn't change the behavior, the tests start breaking. Any time your tests break even though the behavior hasn't changed, when all you've done is refactor the implementation, you're witnessing coupling.
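The slide itself isn't in the transcript, so here is a hedged reconstruction of the kind of controller and over-mocked test being described, assuming Spring MVC with MockMvc and Mockito; the route, HTTP verb, and names are guesses.

```java
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.when;
import static org.springframework.test.web.servlet.request.MockMvcRequestBuilders.get;
import static org.springframework.test.web.servlet.result.MockMvcResultMatchers.content;
import static org.springframework.test.web.servlet.result.MockMvcResultMatchers.status;

import org.junit.Test;
import org.springframework.test.web.servlet.MockMvc;
import org.springframework.test.web.servlet.setup.MockMvcBuilders;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.PathVariable;
import org.springframework.web.bind.annotation.RestController;

interface TranslationService {
    String translate(String word);
}

@RestController
class TranslationController {
    private final TranslationService translationService;

    TranslationController(TranslationService translationService) {
        this.translationService = translationService;
    }

    // "Whatever the service gives back becomes the response body."
    @GetMapping("/translations/{word}")
    String show(@PathVariable String word) {
        return translationService.translate(word);
    }
}

public class TranslationControllerTest {
    @Test
    public void show_echoesWhateverTheMockReturns() throws Exception {
        // The collaborator is mocked out entirely...
        TranslationService service = mock(TranslationService.class);
        when(service.translate("hello")).thenReturn("ellohay");

        MockMvc mockMvc = MockMvcBuilders
                .standaloneSetup(new TranslationController(service))
                .build();

        // ...so the assertion can only ever check that the controller echoes
        // the canned value back. It pins the implementation detail "there is
        // a collaborator with a translate method", not the behavior "words
        // come out in pig latin".
        mockMvc.perform(get("/translations/hello"))
                .andExpect(status().isOk())
                .andExpect(content().string("ellohay"));
    }
}
```

Notice that this test would keep passing even if the real translation logic were completely broken, and it would start failing the moment the controller's collaborator seam is refactored, even though users would see identical behavior.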
Now, I'm not trying to say that mocking is wrong. Mocking is something we need to do at various times, and there are actually all kinds of different mocks we have to educate ourselves about. "Mock" is actually a slang word, by the way; the real name for all of these is test doubles, and there are five types: dummies, stubs, spies, mocks, and fakes. Stubs are a type of dummy, spies are a type of stub, mocks are a type of spy, and fakes are something completely different. You have to know what these are before you can even start to figure out when you would use each of them. If you don't know what they are, start with the blog post Uncle Bob wrote on this; it's super concise, it will very quickly give you the information you need to understand what all of these different test doubles are, and it will start to give you some pointers on when you might use them. But if you really want to dive into when you need them, you have to understand the dependency inversion principle, and the rest of the SOLID principles too. If you don't know what SOLID is, it's a set of design principles we can adhere to when building code to help us create maintainable code, and they apply regardless of whether you're building object-oriented software or using a functional language; these design principles are paradigm-agnostic. If you don't know them, feel free to check out cleancoders.com; there's a whole video series about these principles. Or, if you like to read books, there's a great book called Agile Software Development: Principles, Patterns, and Practices. Or you can work with Pivotal Labs, because we know this stuff, and we'll pair with you and just do it.

Another thing you could check out is a screencast series that I created; this is a shameless plug. It's called Hexagonal TDD; there are only three screencasts, and they're all pretty short, so you can watch those too. It will introduce you to some of these concepts and talk about when you would use these test doubles: how you use them at the boundaries of your system, at the boundaries of a component within your system, and how you start to think about the components of your system. Anyway, misapplied mocking often happens because there's not only a misunderstanding of what mocks are and what the different types of mocks are, but also a misunderstanding of when they're appropriate and when they're not.
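As a quick illustration of that taxonomy (my sketch, hand-rolling each double for the translation service from the controller example; the layering follows the definitions in Uncle Bob's post), note how each type adds one capability to the previous one:

```java
// The collaborator being doubled (repeating the interface from the
// controller sketch).
interface TranslationService {
    String translate(String word);
}

// Dummy: exists only to satisfy a signature; it should never be called.
class DummyTranslationService implements TranslationService {
    public String translate(String word) {
        throw new AssertionError("dummy was not expected to be called");
    }
}

// Stub: a dummy that returns a canned answer so the test can proceed.
class StubTranslationService implements TranslationService {
    public String translate(String word) {
        return "ellohay";
    }
}

// Spy: a stub that also remembers how it was used, for later assertions.
class SpyTranslationService extends StubTranslationService {
    String lastWord;

    public String translate(String word) {
        lastWord = word;
        return super.translate(word);
    }
}

// Mock: a spy that verifies its own expectations; this is roughly what
// mocking frameworks generate for you.
class MockTranslationService extends SpyTranslationService {
    void verifyTranslated(String expectedWord) {
        if (!expectedWord.equals(lastWord)) {
            throw new AssertionError("expected translate(\"" + expectedWord + "\")");
        }
    }
}

// Fake: something completely different -- a genuinely working implementation
// that takes shortcuts (naive rules, in-memory storage) instead of returning
// canned answers.
class FakeTranslationService implements TranslationService {
    public String translate(String word) {
        return word.substring(1) + word.charAt(0) + "ay";
    }
}
```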
Okay, problem number three: unit testing. Who here does unit testing? Unit testing is my favorite. Unit testing in and of itself is not a problem, but the misapplication of it is. So let's talk about the three laws of TDD. Who knows these laws already? Okay, this will be good. The first law: you are not allowed to write any production code unless it is to make a failing unit test pass. The second law: you are not allowed to write any more of a unit test than is sufficient to fail, and not compiling counts as failing. The third law: you are not allowed to write any more production code than is sufficient to make the one failing unit test pass. That locks you into a very fast TDD cycle, maybe thirty seconds to a minute long: constantly writing a little bit of a test, watching it fail, writing a little bit of production code, making it pass, refactoring to clean things up, and lather, rinse, repeat, around and around you go. It's a really good thing. These three laws of TDD are fantastic.

However, over time they have been extrapolated into something less useful. There are a lot of people out there practicing TDD who would believe this statement to be true, and would tell other people that this is TDD: every class should be paired with a well-designed unit test. Some take it even further and say every public method of every class should be paired with a well-designed unit test. And this is really the problem; it leads to a very problematic misapplication of unit testing. What that might look like is that literally every single public method of every single class gets a test: Class1.java has a Class1Test, Class2.java has a Class2Test. Then maybe you decide you need to refactor some of Class1, and you're going to use a design pattern underneath in the implementation: you're going to hold the behavior constant but refactor the implementation. People who conform to this misunderstanding of TDD will not only refactor the implementation, they'll change the tests. They'll go back and start moving the tests around: "Well, I guess I need to make a new test class over here for this new class I created," even though the behavior isn't really changing, because "I'm supposed to make sure that every single class, and every single public method of every single class, has a well-designed unit test."

The problem you run into with this is coupling. It's not the kind of coupling we were seeing with mocking; it's a different kind: a coupling in which your tests become aware of the design patterns you're using underneath the hood in your code base. You didn't decide on those design patterns up front; you refactored into them. But now, by coupling your tests to those design patterns, you have made it harder to refactor away from them when the time comes, and there will be a time when those design patterns no longer apply and you need a different design. When your tests step away from "I'm just going to test behavior" to "I'm going to know about every single class and every single public method of every single class," you put yourself in a position where it's harder to refactor your code and keep it clean. And what happens then? Refactoring becomes harder, your code does not stay clean, and you stop going fast forever.

So instead of saying every class should be paired with a well-designed unit test, what if we said every behavior should be paired with a well-designed unit test? That allows us to minimize the surface area of our test suite. We would be very intentional about what our test suite interacts with: what are the boundaries of the component we're testing, what are the entry points our test suite will exercise, and how do we allow the stuff inside those boundaries to evolve over time? How do we allow ourselves to refactor it with impunity, without having to change the tests? That's really the goal, and that's how you go fast forever.
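A hedged sketch of what behavior-scoped tests might look like, continuing the pig latin example; the vowel rule and the PigLatinTranslator component are my invention. The test names describe behaviors at the component's boundary, not classes.

```java
import static org.junit.Assert.assertEquals;

import org.junit.Test;

// Each test pins one behavior at the component's public boundary.
public class PigLatinTranslationTest {
    private final PigLatinTranslator translator = new PigLatinTranslator();

    @Test
    public void movesALeadingConsonantToTheEndAndAppendsAy() {
        assertEquals("ellohay", translator.translate("hello"));
    }

    @Test
    public void appendsWayToWordsThatStartWithAVowel() {
        assertEquals("appleway", translator.translate("apple"));
    }
}

// A hypothetical component that has grown beyond the earlier Translator
// sketch. If translate() is later refactored so that ConsonantRule and
// VowelRule strategy classes appear under the hood, those classes get no
// test files of their own: the behaviors above still pin the component
// down, and the tests never change during the refactor.
class PigLatinTranslator {
    String translate(String word) {
        if ("aeiou".indexOf(word.charAt(0)) >= 0) {
            return word + "way";
        }
        return word.substring(1) + word.charAt(0) + "ay";
    }
}
```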
So, this company still exists today. They weren't destroyed, but they came very close to being destroyed by code; in fact, they came very close to being killed by TDD. All I'm really asking you to do is: don't kill your companies with TDD. Instead, deliver on the promise of TDD by actively honing the craft, actively reflecting on what's working and what's not, talking with other people in your community and in your company, constantly adjusting, and thinking about some of these misunderstandings we've built up over time in some of our communities. So, thank you. I think I still have eight minutes for questions, so if anybody wants to ask a question now, that'd be great; otherwise we can talk outside.

So the question is: a lot of organizations build test coverage into their KPIs and measure teams on how much test coverage there is. I've seen that work well, and I've seen it be disastrous. It is easy to get test coverage to 100% without actually testing anything useful, and depending on what kind of organization you have and how the incentives work out, you may find that developers do the wrong thing to game that system. That being said, I've also seen it applied quite well and effectively. There are organizations where there was belief and buy-in: yes, we should be practicing TDD, and if we just start practicing TDD, this number will change over time, and we're going to measure it and keep track of it. I know that's not really an answer, but it is possible to use test coverage effectively, and teams that do TDD will end up with very high test coverage. It may not be a hundred percent; it's great if it can be, but maybe there are things where the cost of testing them versus the actual benefit of getting a test around them doesn't pay off. When you put teams in a position where they're effective enough and mature enough to understand and make that kind of trade-off, that's when you know you're winning.

Yeah, totally. In fact, in one of those links, the Hexagonal TDD one, there's not only a screencast series in which we actively build code to, I think, more effectively practice TDD and design the code base, but there's also accompanying code you can access on GitHub. We can talk afterwards, too; there's another open-source code base I could point you to that might be helpful.

Yes, the answer is yes. Sorry, I'm supposed to repeat the questions for the video: the question is, do we have standards on when to implement integration tests or contract tests? Let me be clear on what I mean by those terms. "Contract tests" I typically use for a collaborator that I'm going to inject at a boundary, where I'm dictating a very well-defined set of expectations about the behavior of anything pretending to be that collaborator. Maybe that collaborator is a fake in-memory repository that I happen to use in tests to test some other component, or maybe it's a real persistence layer that actually integrates with a database or sends data across the wire to a microservice. There could be all kinds of different implementations of that collaborator, and yet they all have the same behavior, or contract. In fact, if you watch the screencast series I mentioned, you can see the process of making a contract like that and then implementing it both for a fake and for a real implementation, and I think that will do a better job of explaining how I think about it than I can right now.
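Here is a minimal sketch of that idea, assuming JUnit 4; the repository interface and all names are hypothetical. The abstract class states the contract once, and each implementation, fake and real, gets a concrete subclass that runs the same tests.

```java
import static org.junit.Assert.assertEquals;
import static org.junit.Assert.assertNull;

import java.util.HashMap;
import java.util.Map;

import org.junit.Test;

// The boundary collaborator whose behavior the contract pins down.
interface SavedTranslations {
    void save(String word, String translation);
    String find(String word);
}

// The contract: every implementation must pass these inherited tests.
abstract class SavedTranslationsContract {
    protected abstract SavedTranslations newRepository();

    @Test
    public void findsWhatWasSaved() {
        SavedTranslations repo = newRepository();
        repo.save("hello", "ellohay");
        assertEquals("ellohay", repo.find("hello"));
    }

    @Test
    public void returnsNullForWordsNeverSaved() {
        assertNull(newRepository().find("goodbye"));
    }
}

// The fast in-memory fake that most unit tests will use...
class InMemorySavedTranslations implements SavedTranslations {
    private final Map<String, String> store = new HashMap<>();
    public void save(String word, String translation) { store.put(word, translation); }
    public String find(String word) { return store.get(word); }
}

public class InMemorySavedTranslationsTest extends SavedTranslationsContract {
    protected SavedTranslations newRepository() { return new InMemorySavedTranslations(); }
}

// ...and the real persistence layer runs the very same contract, so the fake
// can't silently drift away from reality. (A JdbcSavedTranslations backed by
// a test database is assumed to exist elsewhere.)
// public class JdbcSavedTranslationsTest extends SavedTranslationsContract {
//     protected SavedTranslations newRepository() {
//         return new JdbcSavedTranslations(testDataSource());
//     }
// }
```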
The other interesting thing, and this is an observation from Dan North (Dan North is the creator of BDD, or at least he coined that term many years ago, and he's a fantastic speaker): he gave a really great talk at Craft Conf called "Microservices: Software That Fits in Your Head." I highly recommend watching it. It's slightly sneaky in that he's not actually talking about microservices, but he makes this fantastic observation that when you start building loosely coupled components like these and designing modular systems, in the way I've described and the way you'll see in some other places, all the different types of testing we talk about, contract testing, integration testing, acceptance testing, feature testing, unit testing, all disappear and collapse into one type of testing: component testing. It's a really amazing observation: as we mature the way we think about designing our software and our systems, it simplifies our process of testing software, along with the cognitive burden we built up over time trying to figure out how to test poorly designed software. Any other questions?

Are you talking about being in a class in an IDE, typing "make test," and having it generate a test for that class, so you can switch back and forth? Yeah, totally, that's fantastic; it's super convenient. IDEs like IntelliJ, pretty much all the JetBrains IDEs, do this for most languages. When you don't have that, when you have a set of behavioral tests that don't necessarily map one-to-one to a specific object, but maybe map to a method that underneath the hood uses a series of objects and a design pattern to satisfy the behavior, you do run into the situation where you ask, "Hmm, what is this testing?" Or you may be in the production code and wonder, "Which test covers this?" That's a great question. The easiest way to figure it out is to break one of the lines of production code, run the tests, and see which test breaks. And that totally works: if somebody has actually been doing TDD properly, you will find that broken test really quickly. If they've been doing it well, it won't take long to figure out, because they'll have a really fast, effective test suite. And if they've been doing it well, you're only inside a single component, so you only need to run the tests for that component. Any other questions? Okay, cool. We're out of time anyway, so thank you.
Info
Channel: VMware Tanzu
Views: 42,404
Rating: 4.7429304 out of 5
Keywords: pivotal software, agile development, pivotal labs, cloud native, pivotal cloud foundry, TDD, software testing, software development, test driven development
Id: xPL84vvLwXA
Length: 30min 16sec (1816 seconds)
Published: Fri Aug 19 2016