JS Unit Testing Good Practices & Horrible Mistakes • Roy Osherove • GOTO 2013

Captions
Welcome to the talk about unit testing best practices and horrible mistakes, or, as my American friends like to say, good practices and possible mistakes, because they don't like to hurt other people's feelings. And I agree: most of the things we'll talk about are possible mistakes; they're only a single, specific point of view, but I'll try to explain as much as possible why I think these things are possible mistakes. A little bit about me: I wrote a book, started in 2006, called The Art of Unit Testing, and it's about to come out in its second edition. When I started that book I had zero kids, and by the time I finished it, three years later, I had two kids, and I realized just how wrong I was and how some of the things I wrote in the book were already wrong, so I had to rewrite parts of it for the second edition. I actually went to Goodreads and marked my own book as three stars, because I think some of it is wrong; I hope the second edition will fix some of that. I'm also in the process of writing another book called Beautiful Builds, and I just had a book called Notes to a Software Team Leader published. Most of all, if you have any questions after this talk, feel free to ask me via Twitter or contact me in some other way after the presentation. Everything you're going to hear is based on my actual failures. I failed quite a bit; I think Chad Fowler would have been proud. A lot of those failures taught me so much that today I can usually get unit testing to actually succeed in a project. I'm quite nervous, this is quite a big crowd. That was a joke.

Now, the first mistake is that people believe that just by doing it, unit testing will make your life easier. You hear it from everyone who talks about unit testing: unit testing is amazing, unit testing will make it easier to find bugs. So imagine that when you have 4,000 unit tests in your code, it's easier to find bugs. Well, that's absolutely wrong most of the time. What happens when I come to projects and do consulting is that the tests themselves have bugs: the tests pass when they shouldn't be passing, they fail when they shouldn't be failing, and so there is no trust in the tests. How do you make sure that your tests don't have bugs? You write tests for your tests? Then how do you know those tests don't have bugs? Writing a test for the test for a test doesn't work, I can tell you that. One way to solve it is test-driven development, in which we see the test fail and then we see it pass without touching the test, so we're actually testing the test that way. But in most projects it's actually not easier to find bugs, unless you pay close attention to what the hell is going on in the code and the tests, and we'll see how to fix that.

Another thing that everyone says is that it's easier to maintain your code when you have something like a thousand or four thousand unit tests. Again, that's absolutely not true, because if the tests test a lot of internal implementation, what you end up with is that you refactor your code and suddenly you have tests breaking even though they should actually be passing, because the code still works. So you have to go and start changing the tests, and at some point you start saying: you know what, this isn't worth my time. And then you start removing tests, or commenting them out, or putting ignore attributes on them.
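A minimal sketch of that "see it fail, then see it pass" idea, assuming a recent QUnit-style API and hypothetical names (ShoppingCart, calculateTotal):

    // Written before the production code exists, so the first run fails (red).
    QUnit.test('calculateTotal with two items returns their sum', function (assert) {
      var cart = new ShoppingCart();   // hypothetical class under test
      cart.add(10);
      cart.add(5);
      assert.equal(cart.calculateTotal(), 15);
    });
    // Then implement calculateTotal() and run again: the test passes (green)
    // without the test itself being touched, which is what "tests the test".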
So that's not necessarily true, and I would say that most projects that fail with unit tests fail because of maintainability issues. In one of the first projects I had, we had to delete some of our tests after just a few months of coding. Is it easier to understand your code when you have a thousand to four thousand tests? Well, I know that you know that's not true for most of the tests you've seen. Most people, especially when they start out with tests, think the test code is not as important, so they give the tests names like "test one" or "test add" or "test whatever", and the test body just asserts on the thing and is done with it. Then the tests end up just like you treated them: they don't help in terms of readability. We'll talk about readability issues today as well. Only when these three things are true is it actually easier to develop your software when you have unit tests, and most of the reasons companies do not adopt unit testing into the organization come down to failing with at least one of them, which then affects the other two and creates a very big chain reaction.

The second big mistake is that once you've started doing unit tests, you do not do any test reviews or code reviews. Some companies do code reviews, but most developers I know do not do test reviews; they review the code, but only some of it, and only at specific times. The point of a code review and a test review is that you teach and you learn about practices. You might have one person on the team who really knows how to write good unit tests, and that person is just sitting there coding awesome tests, but everyone else around them is creating a pile of manure, so when you come and look you see a good test and then, right below it, a test that has no business being there, and there is very little communication about the quality of the tests. Test reviews are one of the best ways to actually institute code reviews, because if you think about it, tests are really mini use cases of your code that you can drill into, and you can understand very quickly whether the developer actually understood the requirement or not. Sometimes just by looking at the name of a test you can tell that the developer understood the requirement; you don't have to drill into the code or read 50 lines to say "oh, now you're doing it all wrong". If the name is good enough, you've saved all that time. And when you do code reviews and test reviews, it has to be in person; you cannot use a tool. If you use GitHub to do reviews and all you do is leave comments, that's good, but it's nowhere near as powerful as pair programming with that person, or even doing remote desktop or anything like that, where they can see the expression on your face and you can hear the tone of their voice. A lot of information is lost otherwise: if you ask the person something and they hesitate, that doesn't come through in comments; if they keep correcting their answer while saying it, that doesn't come through when they write text, where you only see the end result. So you can tell where their knowledge is weak and help them learn something, or even better, you learn something from them: why did you do that?
That's something you maybe wouldn't have written in a comment on GitHub, because it was just a little thing that has nothing to do with the current code review, but you saw it and you would have liked to say, "hey, what was that little thing you just did there?" Because it's not about what you're reviewing, you don't write it in the comment, and maybe it's too large to start writing text about in the comments, and so a whole opportunity to learn a new technique is lost. If your team is learning and not doing code reviews and test reviews, I would highly suggest that you start. And if you decide to do it, do it on all the code, everything, no broken windows: no line of code should go unreviewed, even a single line of XML somewhere should get reviewed. No broken windows, because that way you raise the bar; otherwise tomorrow someone says "oh, it's just one method, it doesn't really need to get reviewed". If you're trying to teach your team how to do unit testing, that's one of the best ways.

Now, the things I want to talk about have to do with the three pillars of good unit tests: they are trustworthy, they are maintainable, and they are readable. If you don't trust your tests, then when they pass you might still want to go and debug your code, and when they fail you might still say "no, it's okay". One of the points of trusting your tests is that you should be worried when they fail. Someone pointed out that readable, trustworthy and maintainable makes them RTM tests, and if you also make them fast they become RTFM tests, and then everything is great; maybe I should add the "fast" part later.

One big mistake is that people use mock objects and stubs for everything, when instead they should be using them very, very little. I would go as far as saying that maybe 5% of your tests should have any kind of mock object, and when I say mock object I don't mean a stub. The difference between a mock object and a stub is that a mock object is something you will verify against at the end of the test, and a stub is just something that makes happy noises. You might have multiple stubs in a test, but you should really have only a single mock object, because a test should test a single end result. Let me be a little more clear about that. When I say the word "unit" in "unit test", I mean a unit of work. I don't mean a method, and I don't mean a class, or anything like that. There could be multiple methods or even multiple classes running in memory, and they get invoked through some public API somewhere. The test invokes that public API, many things happen in memory, they do not touch the file system or a database or anything like that, and at the end there is one of three possible end results. You might get back a value: methods that return something. You might have a void method, and a void method has to do something as well; void methods don't return values, but why would you call them? They must be doing something, and that is usually a noticeable state change in the system. And the third type of end result is when they don't change the state and they don't return a value. Why would you have such methods? Because they forward calls to a third party, like some kind of logger or web API, and only in that third type of end result do mock objects actually make any sense.
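A minimal sketch of that third kind of end result, with hypothetical names (LogAnalyzer as the unit of work, a hand-rolled fake standing in for a logger facade); because the outgoing call is the end result, the fake is used as a mock and asserted against:

    // Hand-rolled fake of a third-party facade: it just records what it was told.
    function FakeLogger() {
      var self = this;
      this.lastError = null;
      this.logError = function (message) { self.lastError = message; };
    }

    QUnit.test('analyze with a too-short file name logs an error', function (assert) {
      var fakeLogger = new FakeLogger();
      var analyzer = new LogAnalyzer(fakeLogger);  // hypothetical unit of work

      analyzer.analyze('a.txt');                   // fire-and-forget: no return value, no visible state change

      // The fake is a mock here because the test's single assert is against it.
      assert.ok(fakeLogger.lastError !== null);
    });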
That's the only time you have a fire-and-forget call into an external framework. When you try to test against a third party without a mock object, there is very little way to check that you called, say, Twitter the right way; asserting against it directly would be really difficult. So instead we have a mock object that looks like the Twitter API, or rather a facade that abstracts it away, and if that facade was called, everything is good. That's the only time mock objects actually make sense. But since the beginning of mock objects in unit testing, people have been abusing them and using them much more, just because they could, and that has created a lot of tests that are very brittle, very hard to maintain, and that break on every little change to internal implementation in the code. So 5% is maybe the number of calls to external APIs in your code; that's roughly how many mock objects you should have. And remember, this is only the fire-and-forget scenario: if you call something and you get something back, that doesn't automatically mean it's a mock object, it could still be a stub; it depends on the flow of information. Only if the absolute end result is calling a third party does it make sense to use a mock object.

Another very common mistake I see is that people say "mock this, mock that"; the word "mock" is used for everything, but there should be a differentiation. If you've read the book xUnit Test Patterns, you've seen all these terms: test spy, test double, fake, mock, stub, and all of these are some things that look like other somethings. That's a very complicated taxonomy, especially for someone who's relatively new to this field. What I find is that a simpler terminology works better: I have just three things that I talk about. I have fakes: a fake is anything that looks like something else. Then, if I assert against it, it's a mock; if I don't assert against it, it's a stub. That's the only differentiation. With this coarse-grained differentiation you can start talking about the difference between mocks and stubs and stop calling everything a mock. Instead of saying "I'm going to mock this, I'm going to mock that", usually what you mean is "I'm going to stub this, I'm going to stub that". The only time you should say "I'm going to mock this" is when you are expecting that thing to be called. Even frameworks use the word "mock" too much; it's very overloaded. And again, you only want one mock per test, because you're testing one thing, but you could have multiple stubs.

Third: state-based testing. One of the nice things that happens when you don't use mock objects is that you get into very simple test territory; sometimes it's just invoking a method and then checking that some state has changed. But even there you can get into problems. For example, you might be checking that internal state has been changed. Imagine a white line marking the boundary of your API: a test that invokes the public API should only be asserting at that same level, no level deeper, because if you go deeper you're bound to get weird results. For example, look at this test: it adds a user, which creates a user in memory, and then it checks that there is an internal field somewhere that holds the user. Now, this test will pass, but it doesn't mean that the code works, because you're checking internal implementation.
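A sketch of both styles with a hypothetical UserManager: the first test reaches into an internal field, the second (the better version described next) asserts only on behavior visible through the public API:

    // Brittle: asserts on internal implementation details.
    QUnit.test('addUser stores the user in the internal list', function (assert) {
      var manager = new UserManager();               // hypothetical class
      manager.addUser('roy', 'secret');
      assert.equal(manager._users[0].name, 'roy');   // reaches below the public API
    });

    // Better: asserts on a noticeable state change at the same public API level.
    QUnit.test('addUser with a new user allows that user to log in', function (assert) {
      var manager = new UserManager();
      manager.addUser('roy', 'secret');
      assert.ok(manager.login('roy', 'secret'));     // behavior changed: login now succeeds
    });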
What ends up happening is that maybe the other methods that have to use that user are using it wrong, or they're using a different field; so even though the private implementation works, it doesn't mean that the unit of work actually works from the level of the public API. This is a better implementation of the same test: create a user manager, add a user, and then check that you can log in with the new user. The noticeable state difference in this case is that the behavior of the object, the system under test, is different than it was before you changed the state: either a method returns a different value, or a different method returns a different value, but it's still at the same level of the API; it's not checking internal implementation.

One of the biggest problems I see, and I talked about this, is that people do not trust their tests, and one of the biggest reasons is that people mix unit and integration tests together in the same suite and run them all together. Integration tests are usually much more coupled to the configuration of the current machine; they need some kind of setup. What do we want? Unit tests, in this sense, are lightweight tests: they're repeatable, you can easily rerun them with no reconfiguration; they're fast, because they don't touch any kind of I/O; and they're consistent, they're always the same test. If they use the current date and time, then every time you run them they're actually a different test and might return different results; if they use random numbers, every time you run them they're a different test. The point of these lightweight tests is not to make sure that everything everywhere works, but to make sure that you have a baseline of working software. You can then fill in all the rest with integration tests, and those should also be easy to write and read, but usually integration tests need a lot of configuration and you see a lot of magic: where did that come from? How did you know that that thing actually exists somewhere? To accomplish all of this, tests need to have full control over their dependencies; that means that if you have some kind of third party, you create either a stub or a mock object for it, and they should really be in memory so that they can be fully consistent.

I'm going to give you an example from an open source project called OpenLayers. When I got the latest version of OpenLayers and just ran all the tests, this is what happened: you see a lot of green, but then you see some reds. The problem is that you would expect a project that has tests to have all of them passing. Imagine a programmer on your team getting the latest version of your source code, running all the tests, and seeing some of them fail. I'm sure that has never happened to anyone in this room, but it has happened to me on some teams, and then what's the worst thing you can hear when you ask why those tests are failing? "Don't worry about it, it's okay, it's just a configuration issue." And then those red tests become just an annoyance instead of something to worry about. The big problem here is that you don't know which tests are integration tests and which tests are unit tests, or lightweight tests. If we had a separation into a different project, we could easily say that we have a safe green zone of tests that should always be green, no matter what, and then we have some confidence, and if one of them fails we should actually be worried.
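One way to make that separation physical, sketched here with hypothetical folder names; the exact names matter less than having a bucket that must always be green:

    // test/
    //   unit/          safe green zone: fast, in-memory, no configuration,
    //                  expected to pass on a fresh checkout, every time
    //   integration/   may need a local server, a database, the file system
    //                  or real browser quirks; allowed to need setup
    //   bugs/          regression tests that recreate specific reported bugs,
    //                  even if they need long strings, loops or odd data

A build script can then run the unit folder on every commit and the heavier folders less often.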
But because they're mixed, we tend to stop doing whatever configuration we need to just get those things to pass, because we have better things to do with our time. By separating, we at least get some confidence, some safety, and more trust, at least in some of the tests as a group. jQuery has the same problem: when you get the latest version of jQuery and run all the tests by default, you get this; well, actually you get this, and if you run it again (it takes seven minutes, by the way) you get this. It was 137 failed, now it's 136 failed. What does that mean? What am I supposed to think? Does it mean jQuery has bugs? Probably not. What it does mean is that if you start digging around and look at all the tests that are failing, you're going to find things like this: oh, you require PHP and a server running locally to be able to run the Ajax-related tests. Makes sense, but then these are not unit tests, yet all those tests that require PHP are in the "unit" folder of the tests, and there is no way to easily run just the unit tests of jQuery. So I had to start looking around, and of course I wondered: does it make sense that you have to have PHP running? And of course Google will tell you: yes, you need PHP installed, etc., to get the jQuery tests working; and not just installed, you need a server running with PHP and those files being served up. Again you have the trust problem. This is an open source project, so I'm comfortable with the fact that I have to set everything up and make sure everything is working before I do the first thing, but if this were a project in a team in a company, I would not expect that kind of non-separation; I would expect at least a safe green zone that says these are all the unit tests and they should be passing, no matter what.

If we want to trust our tests, we have to remove logic from our tests. Here is an example of logic, again from the jQuery tests: a test that generates a very long string, and it has a loop. That seems very innocent, but the point is that this is a broken window: someone else looks at it and says, okay, it's okay to write loops, and then later the same thing spreads. The point about logic is that if your tests have bugs, it's very hard to find them, and any piece of logic in your tests is destined, at some point, to possibly have a bug in it, so I want to avoid that. Unit tests should just be very clear, simple statements. I save all the loops and the random numbers and all that stuff and put them in a special folder called integration tests: all the things that require logic, or anything else like it. By the way, there is something else interesting here: if you look at the top of the test, there is a number, and that's the number of a bug that was fixed. When I have those bug-fix tests, especially when the bug is hard to recreate or requires special attention, what I like to do is separate the tests that recreate bugs into another special folder called "bug fixes" or something like that, and in that special folder you can have all these crazy animals, because that's one of the only ways you can recreate the bug: you have to have a really long string, and you might read it from a file, you might generate it, you might have it in memory.
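A sketch of the kind of logic being discussed, with a hypothetical padLeft() function: the first version computes its own expected value with a loop (which could itself have a bug), the second states the expectation plainly:

    // Logic in the test: the loop that builds the expected string could be wrong too.
    QUnit.test('padLeft pads the value to the requested width', function (assert) {
      var expected = '';
      for (var i = 0; i < 5; i++) { expected += ' '; }
      assert.equal(padLeft('x', 6), expected + 'x');
    });

    // No logic: a clear, simple statement that is hard to get wrong.
    QUnit.test('padLeft pads the value to the requested width', function (assert) {
      assert.equal(padLeft('x', 6), '     x');
    });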
You still need that long string, and it's still going to hurt either the readability, the maintainability, or the trust of the tests, so I put those tests in a special folder, and in that folder they're allowed, in a way, to have these broken windows, because that's a different building with different windows; but the safe green zone, the lightweight tests, should not have any broken windows. Here's another example from a different test; this one checks percentage values for CSS in different browsers. They even put in a comment: "have to verify this as the result depends upon the support that the browser has for font size percentages". If you have to put a comment in, it probably should be a different test with a good name; but second, that if could also have a bug. It looks really innocent, but if you've had more than three days of experience as a developer, you know that even three lines of code that look very innocent can hide a not-so-innocent bug.

One of the things I saw in a lot of the open source frameworks' tests is a lot of forced assertion counts. A lot of the frameworks, QUnit in this case, support the idea that you can declare how many asserts you're going to have in the following test, and then if some of the asserts don't execute, the test will break and say: no, you told me there would be five asserts, but there were more, or fewer. To me that's not a huge problem, but it's definitely a bit of a maintainability issue. When I asked John Resig why it's there, he said it's a bit of extra work, but it's good to know if the suite ever stops running for some weird reason. I think that's not a bad reason to have it, especially if you're managing an open source project and you don't know all the people who contribute to your code, and you don't really trust that they will remember to write tests in a way that doesn't break, and maybe an async test will silently never run, or something; so in that context it makes some kind of sense. But then you look at tests that have 78 asserts expected, and you say, okay, this is getting a bit out of hand. Can you imagine adding another assert to that test and then having to go to the beginning and change the number to 79? That's like working at a very high-bureaucracy company: no, this is the number of expected asserts. And of course what people actually do is run the test, and when expect says "no, you have a different number", they just change the number at the beginning and keep adding: one more, and two, and three. Then you have 119. That's interesting, and this is just the beginning of the test; now we're seeing a different pattern. The reason there are 119 asserts in this test is that this isn't really a test, it's a test suite disguised as a test method, and in that test method they're testing all of the possibilities relating to jQuery.when. What happens is that developers are then encouraged, because they see the other tests, to put new tests at the bottom of this method and change the expect. It's really a test suite in disguise, and each assert is really a different test, and of course you cannot really give it a good name; you just have to give really good error messages. We'll talk about the error messages in a second.
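A sketch of the assertion-count pattern in QUnit-style syntax (assert.expect() in current QUnit; the older versions shown in the talk used a global expect()); the count has to be kept in sync by hand every time an assert is added or removed:

    QUnit.test('when() resolves with the given value', function (assert) {
      assert.expect(2);                       // edit this number whenever asserts change
      var done = assert.async();

      jQuery.when('hello').done(function (value) {
        assert.equal(value, 'hello');
        assert.equal(arguments.length, 1);
        done();
      });
    });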
And then you have this, right? This poor guy just wanted to write a simple test and still has to say expect(1).

Now, in terms of readability, one of the things that is hard to read: when I was looking at the jQuery tests, I was looking at this test and I said, well, you're expecting the word "blog" from this thing to appear inside that other thing, but where does that text actually exist? I had to start thinking about where it was coming from, and nowhere does the test mention where the text lives. Once you realize it, you understand that the test runner, the index.html page, actually contains all the test data; it's basically a shared resource, an HTML page all the tests run against. You go and look at that page and see: oh, there it is. But then you realize that all the developers who write tests for this each probably add a line to that page; or, if they don't, they manipulate it in memory instead. So imagine you have six thousand tests all manipulating the same piece of HTML, all having to coincide with each other, all having to make sure the other tests see the same state, and that can also become a maintainability problem. Here's another example: somewhere up at the beginning of a very long test suite there is a declaration of a variable, and somewhere far down below is the usage of that variable. Very annoying to try to read and understand that test. And then developers look at the original broken windows and say: okay, all the variables are defined at the top. This did start out as a single test method, but more and more things were added, and at some point someone said, well, it's a hundred lines of code, but all the vars are in one place, so obviously I'm not going to do anything different; I'm going to put my var in the same place. Human psychology works that way. You know the experiment where you fill an elevator with actors who all turn around to face away from the door, and then the one innocent person who walks in looks at everyone and turns the other way as well? Why? He doesn't know; he does what everyone else does. That's the point: instead of aiming for the pit of success, we're giving people a template for making the tests less and less maintainable.

Comments. I'm not talking about regular comments; I'm talking about commented-out tests. What happens when you have these things in your tests? You can imagine: no developer will ever delete them. In five years these comments will still exist, and nobody knows why they were commented out, nobody knows if they will someday be commented back in, nobody knows what the point is; it must be important, so it will stay there forever, not doing anything except making everyone ask unnecessary questions. Something I also tell people when I teach unit testing: imagine that the person who is going to read your code is a serial killer who knows where you live. You don't want them to start asking unnecessary questions. In this case, that serial killer is going to take an axe, come to your house, and start asking: hey, what's that comment about, really? What about naming? Naming is one of the things I think people really miss the most.
In this case we have a test for text() with undefined, and at the end, notice that there is a message: the last parameter of equal is what to print if the test fails, and the message is "text undefined is chainable". Of course this is a bug fix, but my question is: is "chainable" the bug, or is "chainable" the fix? Which one is the important one? Should it be chainable or should it not? The way I like to name my tests is with three pieces of information somewhere, so the reader can understand what's going on, and I find that if you leave out even one of them, the reader is going to have questions and will have to read the code of your test. First there is the unit of work, or the entry point to the unit of work. Then there is the scenario: under which conditions are we testing? In this case the scenario is "undefined". And then the expected behavior, and it should be in the name. So in this case it would be "text with undefined", and then I would write either "can be chained" or "should not be chained", either one, but I would write it in the name of the test, not in the message of the equal or the assert. In fact, I would argue that you shouldn't be writing any messages as the last parameter of the asserts, because if the test name is good enough, you don't have to. When a test fails you don't see all those messages, you see some of them and then a stack trace; the test name is the first thing you see when you have a failure, and if the test name is good enough, you might not need to look at the code of your test before going to the production code and starting to fix it.

Here is an example that I call the test of doom. Lights, please. This is the test of doom right there, and when I looked at it, the first thing I saw was this: 78, we talked about that. But then, remember, this is a very long test; at some point you understand that this test is a big suite of tests, and then you start seeing this, all this stuff inside the long test, and you have to start reading it, and time is wasted, and then you go: ah, I understand, this is a separate test, and they're doing a try-catch because if they don't, all the code below will never execute. Lights, please. So we have multiple tests in a single test, and that is a readability nightmare and a maintainability nightmare. One way to fix it is by having separate tests for everything you test; I think that's an important thing we can do. But it also leads to things like this: at the beginning, when QUnit was just starting out, people were abusing it and calling QUnit.reset at the beginning, and in the middle, and after each and every thing that has to be reset, for example the state. This is really just an anti-pattern: whenever you see it, what the code is telling you is that you really want different tests to be extracted from here. By the way, did you notice there is an equal() inside an equal()? That's got to be fun to debug.

Okay, let's look at Backbone now. Backbone is also using kind of the same ideas, but here there are magic numbers. It's not the one, two and three; but what the hell is minus five? Remember the serial killer: they look at this and say, minus five, that looks interesting, because it doesn't follow the pattern. And by the way, at the top we again have the expected number of assertions. Our brain is designed to see patterns: 1, 2, 3, 5, -5; what the hell? If it was 4 I could understand it, if it was -4 I might have understood, but why 5? Why did they skip one? This is a mystery I still have not solved, and it just wastes the reader's time.
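A sketch of that extraction, using a hypothetical ShoppingCart instead of the code on the slide; the mid-test reset (QUnit.reset() in old QUnit) disappears because each buried case becomes its own named test and the framework starts it fresh:

    // Before (anti-pattern): one long test, state reset in the middle to start a "new" case.
    QUnit.test('cart', function (assert) {
      var cart = new ShoppingCart();
      cart.add(10);
      assert.equal(cart.total(), 10);

      cart = new ShoppingCart();                 // manual reset so the next asserts make sense
      assert.equal(cart.total(), 0);
    });

    // After: each case extracted into its own, well-named test.
    QUnit.test('total with one item returns that item price', function (assert) {
      var cart = new ShoppingCart();
      cart.add(10);
      assert.equal(cart.total(), 10);
    });

    QUnit.test('total on a new cart returns zero', function (assert) {
      assert.equal(new ShoppingCart().total(), 0);
    });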
So, in terms of naming, let's look at Handlebars. Handlebars is using Jasmine, and Jasmine allows you to have nested blocks of things, much like RSpec from Ruby. But this is an example of how you can have the structure and not have a fitting style for the structure. If we look at the names of the tests here, we have describe("blocks") and then it("array"), it("array"), and what should happen is only in the messages: "arrays ignore the contents when empty", "arrays iterate over the contents when not empty". That's the wrong way to use the structure, because now the person has to read your test code to understand what's supposed to be going on. If you want your tests to read like documentation, we can structure them differently. Here's how I would structure the same tests: describe("blocks"), describe("array"), then a nested describe inside it, "that is non-empty", and then an it inside that tells you the expected behavior: "iterates over the contents". Just the describe blocks plus the its give you everything you need to know without looking at the code of the test. The same with "that is empty": "ignores the contents"; "with index": "uses the index variable". All I did was take the error messages from here, and from further down the file, and use them in different places, and that gave me a much easier way to understand what's going on.

Knockout has the same problem. In Knockout you have describe("binding dependencies"), then an it, and then: "if the binding handler depends on an observable, invokes the init handler once and the update handler whenever a new value is available", and so on. It's the "if" that bothers me here. These are amazing frameworks, but I think the tests could be more readable: the "if" is basically the scenario. You have the thing you're calling, the unit of work; you have the "if", which is the scenario; and then the it also contains the expected behavior, so it's combining these pieces of information. Maybe this would be a better way: describe("binding dependencies"), describe("when handler depends on an observable and a new value is available"), it("invokes the init handler"). It also makes sense in English; you can read it as a sentence. I think the authors did mean to do this, but I don't think they knew about the nesting structure they could use to make it even clearer.

There is another common problem, which is with setups, if you've ever used setup methods. This is Batman.js; one of the files looks like this (they're using CoffeeScript for some of the tests). The setup looks like this: they have a spy, and in this case the spy is a mock object, and somewhere down the file they start using it. The problem is that the spy created in the setup method is only used in some of the tests. We're basically using it as a mock object in all these places, asserting against it, but there are a couple of tests that don't use it, so the test file becomes less readable, because when you look at it you have to ask: do all these things matter to all of the tests, or not?
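A sketch of that nesting in Jasmine-style syntax, reusing the Handlebars example from the slide (the expectations themselves are illustrative): the outer describes name the unit of work, the nested describe names the scenario, and the it carries the expected behavior, so the tree reads like documentation:

    describe('blocks', function () {
      describe('array', function () {

        describe('that is empty', function () {
          it('ignores the contents', function () {
            var template = Handlebars.compile('{{#each items}}x{{/each}}');
            expect(template({ items: [] })).toEqual('');
          });
        });

        describe('that is non-empty', function () {
          it('iterates over the contents', function () {
            var template = Handlebars.compile('{{#each items}}x{{/each}}');
            expect(template({ items: [1, 2] })).toEqual('xx');
          });
        });
      });
    });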
And then you have to start dissecting the code again. If you ever want to refactor these setup methods, it's going to be very hard, because you don't know which tests depend on which dependencies. So if you have to use a setup method, make sure you only put things in it that apply to all the tests in the file; if something is only for some of the tests, I would use a factory method directly in the tests. Actually, I don't like using setup methods at all; most of the time I would just use a factory method and create the spy inside each test. I find that it's more readable, and the reader of the test doesn't have to look in multiple places to understand what's going on.

And while we're at it, I don't think it's a great idea to test multiple things in the same test; it makes the test name very generic. Maybe in jQuery that's the pattern that has evolved, but I don't think it's a good pattern for every other framework. Look at this test: it's short, but look at the name: "redirecting using redirect in an action prevents implicit render". I'm not sure what "prevents implicit render" means, but if we look at the asserts, we see that the Batman currentApp subviews length is zero; maybe that is what "prevents" actually means. But then there is also an assert making sure that navigator.redirect was called. Now, an interesting question: why do I care that it was called? Maybe it's just internal implementation; then I don't need to assert on it, I only need to assert on the end result, which is that the length is zero, because then, if I don't need that line, tomorrow I can remove all of that internal stuff, and as long as the end result is still zero, my test will not break. But if it is important, if this method indeed has two end results (which is possible; a method can have multiple end results, but each end result should be tested separately and have a good name), then I would have a different test that says that when you call this, it calls navigator.redirect. I would separate them, because in this form, if one of them fails, I'm not sure whether it's actually a problem or not. Sometimes, especially in the unit testing world, people assert on mock objects just because they can, but that doesn't mean they should.

When we look at all these things together, we find those three pillars of good unit tests: they have to be readable, they have to be maintainable, and they have to be trustworthy. If you don't trust your tests, you're not going to bother reading them; if you can't read your tests, it's going to be very hard to maintain them; and if you can't maintain your tests, you're never going to get them into any kind of trustworthy state when they start failing.
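Going back to the setup-method point, a sketch of the factory-method alternative in Jasmine-style syntax with hypothetical names: each test asks a small factory for exactly what it needs, so nothing in a shared beforeEach applies to only some of the tests:

    // Hypothetical factory: any test that needs the spy creates its own.
    function makeNavigatorSpy() {
      return { redirect: jasmine.createSpy('redirect') };
    }

    describe('controller actions', function () {
      it('redirectTo sends the user to the target route', function () {
        var navigator = makeNavigatorSpy();
        var controller = new Controller(navigator);      // hypothetical class

        controller.redirectTo('/home');

        expect(navigator.redirect).toHaveBeenCalledWith('/home');
      });

      it('render with no redirect keeps the current view', function () {
        // The fake is only a stub here: nothing is asserted against it.
        var controller = new Controller(makeNavigatorSpy());
        expect(controller.render('index')).toBe(true);   // illustrative end result
      });
    });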
Thank you very much, and if there are any questions, we have a few minutes.

"When will the second edition be released?" Some questions about my mother as well; I see some people got the joke. "Does The Regulator have a future?" Nobody knows what The Regulator is except me and that guy. Okay, the second edition should have been released this month; it should be released this month.

"You say that only third parties should be mocked. What about when testing against your repository? I often find that I want to check that my repository was called in a specific way." I fully understand. If you do have a repository and your tests have to use some kind of database, then I would say don't write unit tests against it. I would never create a mock object of a database and prove that some SQL query was actually called, and the reason is that the repository also contains a piece of the logic you would like to test; the structure, the schema, is an example of some of that logic. So if you do have some kind of repository, I would use only integration tests to test the data layer against it, and I would put those in a separate folder. Otherwise it wouldn't make sense: the tests would be very brittle, and they still wouldn't prove that anything worked; you could have the perfect query passing in your tests, but if you got the schema wrong, your application will actually not work.

"Separating tests, integration versus unit: do you use test categories, or should it be separate projects?" I don't personally use test categories; I find that the physical separation is a better pit of success. I have seen people use categories, but I like the physical separation: it gives me a physical bucket to look at. When I'm sitting with someone and we're pairing, we have two projects, and I say, okay, which project do we write this test in? What happens with categories is that you end up having to decide, and remember, which category a test should really be in, and sometimes the point is so important that I would rather it be a physical thing, physically moving tests around. But I have seen people successfully use categories, so I would say it's more a personal preference than a specific problem with categories; it just always made sense to me to move things, and then I don't have to worry about features of the framework or naming conventions for categories if those change at any time.

If there are no other questions, I would like to thank you. Feel free to learn more about all this and get some more videos at artofunittesting.com, and you can always contact me on Twitter. I appreciate your time, thank you very much.
Info
Channel: GOTO Conferences
Views: 33,990
Rating: 4.8697677 out of 5
Keywords: Java Script, Unit Testing, Software, Conference, Software Development, Great Talk, GOTO, Good, Presentation (Software Genre), Roy Osherove, common pitfalls, Videos for Developers, GOTOcon, GOTO Conference
Id: iP0Vl-vU3XM
Length: 46min 10sec (2770 seconds)
Published: Tue Apr 08 2014