When Test Driven Development Goes Wrong

Video Statistics and Information

Captions
How come there are people who still don't practice test-driven development, when the people who do say it's the best way to write high-quality code? Maybe people are wary of test-driven development because there are some common traps that are easy to fall into when they first start. What we soon learn is that these traps are actually signposts towards better design. So what does test-driven development look like when it goes wrong, and how does it help us to write better software?

Hi, I'm Dave Farley of Continuous Delivery. Welcome to my YouTube channel. If you haven't already, please hit subscribe, and if you enjoy the video, hit like at the end.

In this episode I want to describe five common anti-patterns in test-driven development. I think that test-driven development is one of the more important advances in software development that has happened during the course of my career, but it hasn't taken off in the way that I expected when I first learned about it. It is widely practiced in all sorts of industries with all sorts of different technologies, but it's not the starting point for every team, which I think it should be. I wish that more teams practiced it and saw its benefits.

Perhaps one of the reasons that it hasn't become ubiquitous is that it's easy to make some common mistakes that mean it doesn't work too well, so I thought it might be interesting to look at some of these common mistakes. This isn't poking fun at people for getting it wrong; we all get it wrong sometimes. I would argue that being able to spot early in the development process when we're heading off course, when we're beginning to get things wrong, is one of the main benefits of test-driven development, and of good software engineering in general.

If you'd like to learn more about engineering for software, I have some training courses that may be of some help. I also have a new book out called Continuous Delivery Pipelines, which is a practical guide to how to get started, or how to
improve your deployment pipelines. If you want great feedback from your test-driven development, there is no better way to get it than from an effective deployment pipeline. Links to all of these things are in the description below.

Test-driven development gives, or at least should give, us feedback on the quality of our design, more than any other practice that I know, and this is the reason that I value it so highly. When test-driven development works, we establish a fast, efficient feedback loop that gives us valuable insight into our design. If our tests are difficult to write, that isn't a problem with the tests; it's telling us something important about our design. If we listen for these signals, we can design better-quality code as a result: not just in the simplistic sense that we've tested it and so know that it works, but in the deeper, more durable technical sense of more modular, more cohesive code, with better separation of concerns and appropriate levels of coupling, that allows us to maintain and evolve the code over long periods of time.

So what are some of the signals to watch out for? What does it look like when test-driven development is starting to go wrong? Here are five common anti-patterns. Before I begin, let me give a shout-out to James Carr's blog post on test-driven development anti-patterns, which I used as a jumping-off point for this video.

The first of these anti-patterns is "the liar". The liar passes all tests but doesn't actually assert anything useful. This common anti-pattern is often seen in teams that are chasing test coverage, where somebody higher up in the hierarchy has mandated some level of coverage, and so we build tests that don't actually test anything. I once worked for an organization that set a target of 80% test coverage, and incentivized people with bonuses if they hit the target. Guess what: they hit the target, and their tests didn't assert anything at all. So the organization was paying people to write tests
that didn't provide any value. This is commonly seen in teams that don't practice test-first, red-green-refactor, test-driven development. The problem is that it gives you the illusion of coverage: you think that you are safe when you are not. It's also obviously waste; we are paying people to create tests that have no value whatsoever. The correction here is to delete the tests with no assertions, or possibly add some assertions, and to practice test first. It's hard to fall into this trap if we're doing test-driven development properly: we write a test, run it and see it fail. For it to fail, we have to have an assertion of some kind, and because we write the test before we have any code, we predict the way in which it's going to fail. It needs to fail in a specific way that meets our goals, so it's very, very hard to fall into this trap if we follow test-first test-driven development.

The next in the list is "excessive setup". This is an incredibly common pattern in automated tests in general, and in unit tests nominally referred to as test-driven development in particular. This is a test that requires a lot of work to get the code ready to test. It is often the result of writing the code before you wrote the test: you weren't thinking about the design of your code to make it testable; you were just thinking about how to solve the problem that the code is focused on. This makes the code hard to test, and it indicates a poor separation of concerns in the code under test. If we see this signal, we should be starting to think about our design. The problem is that the test and the code become very highly coupled, and so very fragile. Over time, projects that apply this kind of approach gradually, slowly grind to a halt: it's almost impossible to make any move without breaking a test every time you change the code, and that's clearly not a pleasant way to
work. It's inflexible, because the tests are so tightly coupled to the code under test, and it's difficult to understand what the test is trying to achieve, because it is really just testing the implementation rather than the desirable behavior of the system. So it's hard to understand, and hard to maintain as a result.

The correction, when you see this signal, is to start improving the abstraction in the system under test. Start carving the system up into smaller, more modular pieces that are more focused on achieving some desirable, testable outcome. Improve the separation of concerns; think about teasing these things apart: one class, one thing; one method, one thing. Test-driven development strongly assists in this kind of thinking. Practicing test first, writing the test before you write the code, forces you to simplify the setup, because you'd have to be kind of insane to write a test, before you'd written any code, that required you to do complex setup. You're going to try to make your life easy by writing simpler code.

Here's an example of some excessive setup: a test from a real-world project. It's evaluating a behavior of a build-management system, in this case Jenkins. For this test, it's starting up Jenkins, all of it: it's configuring an environment in which it can run Jenkins, starting it off, and then the test asserts that we don't have the word "localhost" in the URL. Think about the cost of that test, creating it, writing it, running it, just to be able to assert that the word "localhost" doesn't occur in the URL. This is waste. You would never in a million years write a test like this before you'd written the code; this is clearly a test that was written after the fact. If I was approaching this with test-driven development, I'd probably write something more like this: I'd write a test that asserted that my URL didn't
have "localhost" in it, and then I would write some code that fulfilled the needs of that test and gave me the result I was looking for. This is simple: I can run it as a unit test, rather than having to start up the whole application to test whether a string contains a word.

Third in my list of anti-patterns is "the giant": a test that comprises many, many lines of code and makes multiple assertions. This is almost inevitably caused by writing the test after you wrote the code; once again, it's not the outcome of test-driven development. Very often these are implemented consciously as a kind of component-level test, intended to test some bigger chunk of software, but it's still problematic. The problem is that the intent of these tests is very hard to determine: they don't really tell you what's going on. We tend to have grown these tests organically, iteratively, as a kind of convenience, rather than as a focused evaluation of some specific behavior that we're interested in. These sorts of tests don't document the behavior of the system they're testing in any sensible, useful way. They tend, once again, to be highly coupled to the system that they are testing, because they're written after the fact, and so they tend to be very fragile. The outcome, over time, is that projects that apply this kind of approach to automated testing tend to grind to a halt. They get to a point where they can't change the code without invalidating the tests, and they can't change the tests without invalidating the code, and they're just stuck.

The correction is to decompose the test into multiple test cases that are focused on testing and asserting one thing: make one assertion per test. Here's an example, again from the wild. This is a real test in a real project, and you can see this doesn't
really look like the focused, one-assertion pattern that I recommend. It's big and complicated: in this single test case there are nine different assertions. If this test fails on the seventh or eighth assertion, what does that mean? Does that mean that's what went wrong, or did something go wrong earlier on, and this is just a side effect of that earlier failure? We don't know. Figuring out how that test went wrong is going to be difficult and complex, and will involve quite a lot of work to uncover what's really happening. If we stick to the really simple picture of one assertion per test, then when the test fails it is effectively shouting at us, very clearly and very precisely, what went wrong.

The fourth in my list is "the mockery", and the name is mildly amusing, as is the anti-pattern itself. This is a test case that uses so many mocks or stubs that the system under test is not tested at all. In fact, in some cases there is no system under test; there are only mocks. The causes are again down to the problems of excessive setup (see the excessive-setup anti-pattern earlier). If we are struggling to establish our system in order to test it, we become so focused on achieving that, that we tend to lose sight of what it is we're really trying to achieve. This is, once again, down to poor coupling in the system under test, and to testing by rote: developers building tests just because somebody else told them to, rather than seeing them as a valuable tool to assist their own work. The problem here is that the test doesn't actually do anything at all. We could also say that it's not readable, that it's poor as documentation of the intent, and that it's highly coupled and so very fragile. Over time, again, projects tend to grind to a halt as a result of this kind of over-coupling. It's difficult to
understand and maintain what's going on, but, I repeat, it doesn't do anything at all.

Here's a mockery example. Here's a test case; it looks reasonable. But here's the setup for this test case, and if we look carefully, both the car and the engine are mocked out, and those are the only things that we're testing. Basically, all we have here in terms of production code are two interfaces with no implementation, and that's what the test then evaluates. This is just waste, just coupling for no reason: a waste of time, a waste of effort, a waste of CPU cycles when we run it.

The correction for these kinds of problems is to review the design of the system under test. Start thinking about testing as a tool to achieve some outcome, rather than a chore that you're forced to implement because somebody else told you to do it. Improving the abstraction and the separation of concerns in your design is going to reduce that horrible setup overhead that tends to lure you into these kinds of ridiculous situations. Practice test-first development, because that's going to tend to keep you sane: you're not going to write those complex, excessive setups if you're writing the test first, because you're going to very quickly ask, "why am I doing all of this work just to test this thing?" Maybe think a bit more carefully about what it is that you are trying to test: focus on the behavior that you desire from your software rather than its technical implementation.

The last in my list for today is "the inspector". This is a test case that violates encapsulation in order to make assertions. It's nearly always the result of writing the code before you wrote the test. Striving for 100% test coverage is also a common cause of this kind of failure. I think that test coverage is a very poor metric for success: teams that practice effective test-driven development do get great test coverage, but using test coverage as a target tends to
bring out the wrong behaviors in teams, if they're not focused on what the real value to them is. The real value is that test-driven development makes their designs better and their work easier. It's not about coverage; it's about test-driven development as a tool to evolve better designs. The other cause of this kind of failure is no, or improper, use of dependency injection. Tight coupling between the tests and the system under test is another of the problems that comes with this: these systems tend to be so highly coupled, and so very fragile, that over time, once again, projects tend to slow down and grind to a halt. I've repeated that phrase several times in this video, because that's one of the common objections to test-driven development, and from my perspective, from what I've seen in my career and my practice of test-driven development, it's a symptom of many of these cases, and often not really a failure of test-driven development at all: it's a problem of after-the-fact unit testing, which is a very different thing.

The correction is to never compromise the encapsulation of your system to support testing; instead, design code that is testable from the outset. Here's another example, again taken from a real, open-source code base. There's a bunch of stuff here: we're looking at various attributes of this code, and it's a fairly unpleasant test. As we can see, there's a whole bunch of complicated setup to get things into a state where we can do some work, and then one of the assertions is this. As far as I could find by searching the code base, this method does not exist anywhere in the production code; the only place where it is ever used is here, in this test. The method was added to the code in order to support this test. Again, I think that you wouldn't do that if you were writing the test first, so I think
this is clearly a test that was written afterwards. I think that you wouldn't do that if you were trying to avoid coupling with the code under test, and you certainly wouldn't do it if you were focusing on asserting the behavior of the code that you are testing. To cure this kind of problem, we need to improve the abstraction in the system under test to make it more testable, and improve the separation of concerns so that the components of the system are more independently testable of one another. In the context of development, and test-driven development, we use mocks or stubs to provide measurement points: points where we can gather information that tells us what's happening in the code. That is much preferred to breaking encapsulation and digging in to look at internal representations or internal private state. Try to write tests that test the desirable behavior of the code rather than its implementation.

As I said, these anti-patterns aren't just failures; they are tools that we can use to highlight the direction that our design is taking. There are lots more of these patterns. If you recognize any of them, or if you've got some other alternatives, please let me know in the comments section below. Thank you very much for watching.
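To make the "liar" anti-pattern from the transcript concrete, here is a minimal Python sketch. The function and test names are my own hypothetical examples, not from the video:

```python
# A hypothetical function under test.
def apply_discount(price: float, percent: float) -> float:
    """Return the price after deducting a percentage discount."""
    return price * (1 - percent / 100)

# ANTI-PATTERN, "the liar": this test executes the code, so it counts
# towards a coverage target, but it asserts nothing and can never fail.
def test_discount_liar():
    apply_discount(200.0, 25.0)  # no assertion at all

# CORRECTION: written test-first, the test states the expected outcome,
# so before the code exists it fails in a specific, predicted way.
def test_discount():
    assert apply_discount(200.0, 25.0) == 150.0
```

Run test-first, `test_discount` fails until `apply_discount` is implemented, which is exactly the red step the transcript describes; the liar version is green from the start and tells you nothing.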
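The "excessive setup" correction, checking the URL logic without booting the whole application, might look something like this sketch. The real example in the video is a Jenkins test; the `external_url` function here is a hypothetical stand-in of my own:

```python
# Hypothetical: the logic we actually care about, extracted so it can be
# tested without starting the whole build server.
def external_url(host: str, port: int, path: str = "/") -> str:
    """Build the externally visible URL for the server."""
    return f"http://{host}:{port}{path}"

# The anti-pattern would boot the entire server and configure an
# environment just to assert on one string. The test-first version
# needs none of that setup:
def test_external_url_is_not_localhost():
    url = external_url("build.example.com", 8080, "/job/42/")
    assert "localhost" not in url

def test_external_url_contains_host_and_port():
    assert external_url("build.example.com", 8080) == "http://build.example.com:8080/"
```

The design point is the one the transcript makes: pulling the string-building concern out of the server makes it testable in microseconds as a unit test.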
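The "giant" correction, one focused assertion per test, can be sketched as follows. The `Order` class is a hypothetical invention standing in for the real nine-assertion test from the video:

```python
from dataclasses import dataclass, field

# A small hypothetical class to test.
@dataclass
class Order:
    items: list = field(default_factory=list)

    def add(self, name: str, price: float) -> None:
        self.items.append((name, price))

    def total(self) -> float:
        return sum(price for _, price in self.items)

# ANTI-PATTERN, "the giant": several assertions in one test. If the
# third one fails, is that the fault or a side effect of an earlier one?
def test_order_giant():
    order = Order()
    order.add("book", 10.0)
    assert len(order.items) == 1
    assert order.items[0][0] == "book"
    assert order.total() == 10.0

# CORRECTION: one behavior, one test, one assertion. The failing test's
# name tells you exactly which behavior broke.
def test_adding_an_item_increases_the_count():
    order = Order()
    order.add("book", 10.0)
    assert len(order.items) == 1

def test_total_sums_item_prices():
    order = Order()
    order.add("book", 10.0)
    order.add("pen", 2.5)
    assert order.total() == 12.5
```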
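Echoing the car-and-engine example used for "the mockery", here is a hedged Python sketch using `unittest.mock`; the classes are hypothetical, since the video's actual code isn't shown in the transcript:

```python
from unittest import mock

# Hypothetical production code.
class Engine:
    def start(self) -> str:
        return "running"

class Car:
    def __init__(self, engine: Engine):
        self.engine = engine

    def start(self) -> str:
        return self.engine.start()

# ANTI-PATTERN, "the mockery": the Car itself is mocked, so the only
# thing exercised is the mock. No production code runs at all.
def test_car_mockery():
    car = mock.Mock()
    car.start.return_value = "running"
    assert car.start() == "running"  # only proves the mock works

# CORRECTION: mock only the collaborator, as a measurement point,
# and let the real Car run.
def test_car_starts_its_engine():
    engine = mock.Mock(spec=Engine)
    engine.start.return_value = "running"
    assert Car(engine).start() == "running"
    engine.start.assert_called_once()
```

The corrected test matches the transcript's advice: the mock is a measurement point at the system's boundary, while the system under test stays real.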
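Finally, a minimal sketch of "the inspector", again with a hypothetical class of my own, contrasting a test that reaches into private state with one that asserts on public behavior:

```python
class Counter:
    """Hypothetical class with encapsulated state."""
    def __init__(self):
        self._count = 0  # internal representation, not part of the API

    def increment(self) -> None:
        self._count += 1

    def value(self) -> int:
        return self._count

# ANTI-PATTERN, "the inspector": the test reaches into private state,
# coupling itself to the internal representation, which can then never
# change without breaking the test.
def test_counter_inspector():
    counter = Counter()
    counter.increment()
    assert counter._count == 1

# CORRECTION: assert only on the public, desirable behavior.
def test_incrementing_raises_the_value_by_one():
    counter = Counter()
    counter.increment()
    assert counter.value() == 1
```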
Info
Channel: Continuous Delivery
Views: 64,559
Keywords: test driven development, tdd, test driven development tutorial, test driven design, tdd vs bdd, what is test driven development, test-driven development, test driven development java, devops, Continuous Delivery, software engineering, software development, Dave Farley, software testing
Id: UWtEVKVPBQ0
Length: 21min 10sec (1270 seconds)
Published: Wed Feb 03 2021