Please don't mock me - Justin Searls - JSConf US 2018

The title of this presentation is "Please don't mock me." That's what my face looked like seven years ago. My name is Justin; I go by Searls, my last name, on most internet things, and if you'd like my contact info, you may npm install me. I am the creator and current maintainer of the world's second most popular JavaScript mocking library, if you don't count Jest or Jasmine. Here's the download chart: that right there is Sinon, the most popular one, and that's us, testdouble, way down there. This was a big day for us, the day we finally caught up, but then npm rebooted all their servers, so we're still fighting. You could totally help goose our numbers by npm installing testdouble today. All of the illustrations that I do will just be using testdouble, by happenstance; it's what I know best.

I'm here today to convince you that the popularity of a JavaScript mocking library doesn't matter. You should be saying, "Well, you're just saying that because your thing's not popular," and you'd be right. But additionally, I'm here because I've been practicing test-driven development with mocks for a decade now, and I've really come to believe that literally nobody knows how to use mocks. Of course, that's a terrible thing to say; I should say figuratively nobody knows. Let's break it down: there's a group of people who can explain how to best mock things in any given situation, and there's another group of people who always mock things out consistently. Unfortunately for all of us, there's a much larger group of people who use mocking libraries without belonging to either previous group. When we initially wrote testdouble.js, it was really to target that intersection of developers who already knew how to make the most of their tests, and my goal is not just to become more popular, but to grow the intersection of people who really understand and get the most out of
their mocking library, and as a result get the most out of their tests. That's what we're here to do today, but to do that we have to define a few terms first. Whenever I say the word "subject," I'm referring to the thing being tested, like an experimental test subject; that's where it gets the name. Whenever I say "dependency": if you're testing a module, anything that thing imports or relies upon that is external to it is a dependency. And whenever I say "unit test," I'm not using a really fancy definition or nuanced term; if you're invoking a function and you assert that certain things come out the other end, congratulations, you have a unit test. The catch-all term for all types of mocks is "test double," whether you're talking about a stub or a spy or a mock or a fake, but today I'm going to disregard all of that and just use "mock," because it's the most common vernacular.

Coincidentally, and sort of unfortunately from a branding perspective, I'm also from a company called Test Double. We are not a mocking-library manufacturer; we're actually a consultancy. What we do is pair up with teams, maybe like yours, who are looking for additional senior developer talent, and we join you on contract to work alongside you, get things done, and hopefully help you make things better along the way. You can learn more about us online.

This talk has four parts. The first part is obvious abuses of mocking. We'll move on to the less obvious abuses, then to the questionable uses that might have some value but that people often mess up, and finally the one good use for mocking that I've found in all this practice.

So first, let's start with the obvious abuses, and that is using partial mocks: you mocked out part of a thing but left the rest of it real. To illustrate, let's say that you run a park ticket kiosk machine. People tap on the screen and say, "Hey, I'm 12
years old," and so we check our inventory module to ensure that child tickets are available before we try to sell them. If they'd said they were 13, we'd check for adult tickets, and either way, because we want to upsell them during the checkout phase, we make sure we have Express Passes available to sell. The logic might be implemented in code like this: if they're under 13, ensure we have a child ticket; otherwise, ensure we have an adult ticket; if the Express module is turned on, make sure we have those; and so on and so forth.

What would a test for this look like? Well, we'd create a test module over here and invoke the code just like we normally would. But what about this ensureChildTicket? Because it's a void function, it doesn't return anything useful, so how would we assert on that? Well, we could mock out that method on the inventory and then verify afterward that the call took place. To do that, we poke a hole in reality by replacing that one method, and then we invoke the subject like normal; testdouble, like a lot of mocking libraries, comes with a way to verify that a particular invocation happened just how you like it. For the other code path, for the adult, we'd write another test case: we poke a second hole in reality, and while we're there, we can make sure we don't call the adult method during the child code path, since that wouldn't make sense. Then we mostly copy-paste and update the values. We run this test: great, the test passes, looks good.

Unfortunately, time marches on, and even though you didn't do anything, you get a phone call that the build is broken and your test is the one failing. So you run the test again, and sure enough, it's failing in two places: we're calling the adult inventory's ensureAdultTicket an extra time when we didn't expect to, and that doesn't make any sense to us, because we haven't changed this code. We look at it; nothing's different here. So we look at the only
thing that's external and still a real method: ensureExpressPass. That's still getting called, so we go load up the inventory module, and we can see that the maintainer of that module added an intrinsic call to ensureAdultTicket; maybe they don't want to sell Express Passes if adult tickets are out of stock. That might make sense to them, but can we really blame them for breaking our tests? Were they expecting this zombie, half-real, half-fake inventory module to be floating around somewhere? Probably not. So we don't really have any recourse except to poke a third hole in reality, and we can make the test work, but something feels wrong. It's almost like our ship is sinking and our solution is to poke more holes in it.

In this case, the test felt good initially because it was superficially simple: it was terse and only said what we cared about. But underlying that, we didn't really have good experimental control over what we were trying to specify. So instead, here's what I'd recommend you do: fake out the whole thing, so that it's really clear where the contract line is, and then require your thing under test as usual. In short, if you have a totally real dependency, that's really easy to maintain, because we're used to invoking code that's real. If you have a dependency that you've completely faked out, whether with a tool or not, at least you have total experimental control, so the expectations are clear. But if you have a dependency that's half real and half fake, it's going to fail for surprising reasons, and you're going to have a bad time.

The second obvious abuse of mocking that I see is people partially mocking out the actual thing: the subject, the thing that they're testing. This advice is short: please don't fake out part of the thing that you're testing. I get pushback sometimes, like, "Oh, well, we have a very, very large module; it's got all these methods, and they call each other."
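The kiosk scenario can be sketched without any mocking library at all, using a hand-rolled fake of the whole inventory dependency. The module shapes and method names here are assumptions based on the talk's description, not the speaker's actual code:

```javascript
// The real dependency, as the talk describes it: ensureExpressPass later
// grows a surprise intrinsic call to ensureAdultTicket, which is what
// breaks a partially mocked test.
const realInventory = {
  ensureChildTicket() { /* talks to a real inventory store */ },
  ensureAdultTicket() { /* talks to a real inventory store */ },
  ensureExpressPass() { this.ensureAdultTicket(); } // the surprise call
};

// The subject: checkout logic that relies on the inventory.
function checkout(inventory, age, expressEnabled) {
  if (age < 13) inventory.ensureChildTicket();
  else inventory.ensureAdultTicket();
  if (expressEnabled) inventory.ensureExpressPass();
}

// Instead of replacing one method on realInventory (a partial mock),
// fake out the *whole* dependency, so we keep total experimental control.
function makeFakeInventory() {
  const calls = { child: 0, adult: 0, express: 0 };
  return {
    calls,
    ensureChildTicket() { calls.child++; },
    ensureAdultTicket() { calls.adult++; },
    ensureExpressPass() { calls.express++; }
  };
}

const fake = makeFakeInventory();
checkout(fake, 12, true);
console.log(fake.calls); // { child: 1, adult: 0, express: 1 }
```

Because the fake is complete, a maintainer adding intrinsic calls inside the real inventory can't reach into our test; the test only changes when the contract between checkout and inventory changes.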
"There's just no other way to get coverage of this little thing right here, so we need to poke holes here and there and there to get our test running." The problem with that is that now you have two problems: a big thing that nobody understands, and tests of it that nobody trusts. So please don't do that.

The third obvious thing we see is when people replace some of the dependencies of a given subject under test, but not all of them. To explain, I'd like to use the word "over-mocking." If you've heard this term, I think it captures the prevailing ideology of how people think about mocking in tests, as if it's an affordance to be moderated: you have this mock-o-meter that's slowly building up as you mock things, but watch out, because you don't want to over-mock and cross some invisible threshold. It's a strange way to think about it, so let's talk about it. Let's say that you write a system that handles airplane seat reservations, and you're pair programming together at work. You think about the different dependencies you need to implement the subject under test, and then you talk about the different test cases you need to implement. Because you're pairing, you try to normalize on an approach: the person on the right says, "Hey, I like mocking out my dependencies," and the person on the left doesn't like mocking and tries to avoid it. You're pairing, so what do you do? You compromise and just mock out half the things. That is not a laugh line; that's how people really do this. So you mock out half the things, and as far as this mock-o-meter ideology goes, you're looking good; you're at 46 percent or whatever.

But time passes, and you get another call, because the build is failing and it's your fault. You can see here that our request-seat module is calling this seatMap with a particular address, and it gives a seat object back, and you can see it's like a
string, right? Well, the problem is that they've updated that contract, and now they expect it to be a dasherized address, and so our test blew up. The person on the right, who prefers to isolate her dependencies, says, "Hey, this failure has nothing to do with the subject," and she'd be right: the seat number is just passed through like a baton; we don't do any of the string manipulation here. So she changes that real dependency to a fake dependency and fixes the test.

More time passes, but this time it's worse: production is broken. Let's dig in. Here we have the thing that actually fires off bookSeat, and for obvious reasons we've mocked that out, and you can see it takes three arguments. Well, the maintainer of that module transposed the second and third arguments for whatever reason, and our test continued to pass. It was a fantasy green test, and as a result production blew up even though the build was green. Now the person on the left, who hates mocking, would point out, "Hey, this wouldn't have happened if we hadn't mocked this thing out," and he's right too. So he goes into the test, replaces the fake thing with the real thing, and gets you back to passing. I don't think this is what people meant by ping-pong pair programming, but this kind of passive-aggressive back and forth is exactly what you get when people don't know why they're using a particular tool for a given job.

Instead, and this is maybe surprising to hear: a test that never fails is a bad test, because it hasn't told you anything; it's just consumed countless cycles of cloud CPU time on your CI server. Hopefully it will fail someday, so think, at the time that you're writing it, about what failure should mean for this test, and design that, instead of defaulting to however you usually write tests. For example, if you're writing a unit test where everything is wired up, all those dependencies are real and nothing is mocked, let's think about when it fails, and you know
intuitively, you can say, "Well, it'll fail whenever the subject's or its dependencies' logic changes." That's great, but it should encourage you to be mindful of what I call redundant code coverage: if you have a module that's depended on by 35 different things, and you change that module, you don't want to set yourself up for a situation where you now have to go update 36 different tests. So be thinking about that.

Now, if we have isolated unit tests where all the things are faked out, 100 percent mocks, our mock-o-meter readings are beeping loudly or something, but when it fails, we have a clear definition of what failure means: the contract between the subject and its dependency has changed. It should also encourage us to be mindful that we probably want another smoke test, some integrated thing that just makes sure that when everything is plugged together, the app basically seems to work; that's important too. But what do we do in this case, where half the things are real and half the things are fake? It looks good under the prevailing ideology of mocking in moderation, but when does it fail? Well, it can fail for numerous nonsensical reasons, and so it should encourage us to not write tests like this. Please don't do that.

So instead of critiquing how much mocking you see in a codebase, critique why people are using mocks at all, what the broader strategy is, and I guarantee you'll have more productive discussions. The common thread between all three of these things is that we all tend to have this sensation that realism is very important in testing, because we want to make sure things are going to work and not explode. But that can actually cause problems when we forget what testing is really about: setting up clear experimental control to get consistent results that tell us something we need to know.
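The "fake out the whole dependency" version of the seat example can be sketched like this, with dependency injection and a hand-rolled fake; the names (requestSeat, seatMap, the lookup method, the "14C" address) follow the talk loosely and are otherwise assumptions:

```javascript
// The subject: passes the seat address through to its dependency like a
// baton and reports availability.
function requestSeat(seatMap, address) {
  const seat = seatMap.lookup(address);
  return { address, available: seat.available };
}

// The whole dependency is fake, so this test states the contract in one
// place: lookup takes an address string and returns a seat object. If the
// real seatMap's contract changes (say, to dasherized addresses), this is
// the one fake we update, and the failure means exactly that.
const fakeSeatMap = {
  lookup(address) {
    return address === '14C' ? { available: true } : { available: false };
  }
};

console.log(requestSeat(fakeSeatMap, '14C')); // { address: '14C', available: true }
```

When every dependency is faked, a failure has one clear meaning (a contract changed); when every dependency is real, a failure means some logic changed; half and half gives you neither guarantee.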
All right, let's move on to the less obvious abuses. I want to start by talking about mocking out third-party libraries. Say that you depend on awesomeLib: it's an npm package, and there are references to it all over your codebase. It's got a kind of funky API, though: first you call it, then you pass it a config file that we read from disk; that returns an instance of the library, on which we can call an optimize method, and we get back all these really cool interpreter hints we can use to speed up our code. The problem is that that song and dance is kind of weird, and so it's really hard to mock awesomeLib. Here's how you might try to go about it: you fake out the file system, then you fake out awesomeLib, then you create an artificial instance to represent creating one; then you stub the file system to say, "Hey, if you pass the right path, I'll give you something to represent the config"; then, when given the config, we'll return that instance; and then, when you call optimize on the instance, we'll call back with some hints. That's a lot of setup, and you'd be right to look at that and get really frustrated at how much mocking logic you have at the top of your test.

A lot of people complain about that, but the root cause here has nothing to do with the mocking library. The root cause is that this was hard-to-use code. The mocking library's job is to specify the interactions you have with public APIs, so what the mocking library is really doing is screaming at you that this is a bad design. Instead, we just force it: we copy-paste that setup across all of our tests, and, JavaScript marching along very quickly, a new major update is announced for awesomeLib. We're really excited that maybe they'll fix this for us, but all they did was change the callbacks to promises, so now we have to go over here and add thenResolve so that these are now promise resolutions, and we have
to do it in 18 different places. JavaScript marches on, and nobody uses awesomeLib anymore; we find out that the new hotness is MindBlow, and everyone's pressuring us to switch as fast as possible, and now we're just angry, because we have all this pain festering throughout our system.

What I've found is that if code is hard to mock and it's something that you own, a module in your codebase that you can readily change, that feedback is fantastic, because the remedy is improved code design. But if it's a third-party thing that you don't control, what are you going to do? Send a pull request and ask them to change their public API just for you? No, that's not going to work out so well. It's an example of useless pain, and how people go about testing is full of useless pain, so be on the lookout, because it's money out the window. If you care about testing, you need to minimize the amount of drudgery, or else people are going to come to value it less and view it as a waste of time.

So instead of mocking out third-party things, I write little wrappers for them instead. An example of this: I just sequester it off to the periphery of my application. This is the only place I'm ever going to require awesomeLib in my app, and instead I expose the simple callback API that I wish it had, and sweep all the other complexity under the rug. Additionally, doing this by habit gives you some extra mental space to think, "Hey, maybe we should be caching this file read; that's kind of silly," or, "We could translate the awesomeLib errors into something more concordant with how we write error handling for the rest of the application," and you can just do all of that there. And all that test setup gets a lot simpler, because now we're just faking out something that we understand, something that's simple to use, and that creates so much extra room in each test.
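A wrapper like the one described might look like this. Since awesomeLib is the talk's made-up package, a stand-in with the same funky shape is defined inline here; in a real app you'd require the actual library. Everything else (function names, the config source) is invented for illustration:

```javascript
// Stand-in for the third-party library's funky API: call it with config,
// get an instance back, then call optimize on the instance for hints.
function awesomeLib(config) {
  return {
    optimize(cb) { cb(null, ['inline-everything', String(config).length]); }
  };
}

// The wrapper: the only module that ever touches awesomeLib directly.
// readConfig is injected so the file read lives in one place (and could
// be cached later, as the talk suggests).
function makeAwesome(readConfig) {
  return function awesome(callback) {
    const config = readConfig();
    const instance = awesomeLib(config);
    instance.optimize((err, hints) => {
      if (err) return callback(new Error(`awesomeLib failed: ${err.message}`));
      callback(null, hints); // the simple API the rest of the app sees
    });
  };
}

// The rest of the codebase only ever sees this one-argument callback API.
const awesome = makeAwesome(() => 'fake config');
awesome((err, hints) => console.log(hints)); // ['inline-everything', 11]
```

Tests elsewhere in the app now fake out `awesome`, a one-function contract you own, instead of re-enacting awesomeLib's song and dance.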
That frees you up to think about what you're actually trying to specify, instead of getting distracted down all sorts of mocking-related rabbit holes. And you don't have to worry about writing unit tests of these wrappers, for the same reason: you're not going to improve the third party's design. Just trust the framework to work, because otherwise it's money out the window.

The next of the less obvious things I see folks do is write a lot of code that tangles up logic with delegation. Let me explain. Let's say that you own a fencing studio and you rent out swords, and your boss says, "Hey, how many swords are we going to have on Tuesday?" So, instead of acting like a normal person, you write a function that takes in a date. You make a network request to fetch all the rentals that are currently outstanding; you get those back and filter them down, first by calculating the duration of each rental, then by figuring out when the rental started so you can figure out when it's going to be due, then by plucking the swords out of it. Now, you also have an inventory, the stuff that's not currently being rented, so you call that too; you get the inventory back, you concatenate the two, and those are the swords you're going to have, so you can report success.

How might you test this? Well, you start by creating a fake rental object that has the right properties on it; you stub things out so that fetchRentals calls back with it when it's invoked. You stub out the inventory as well, maybe with a second sword, and then, with a very particular date argument, you call the subject under test and assert that you get the two swords you expect. This works, and you have some other test cases too, so it's a little complicated. Time passes, though; it's working fine, and somebody on the team points out, "Hey, we can speed this code path up now, because we have a primed synchronous cache you can call." You can go into the code and eliminate one of these network
requests and just call the cache instead. You've done this, and now it looks good; the code is faster. But of course, what did it break? The test. We're frustrated by that, and if you look here, sure enough, we're mocking out the other thing for that part of the behavior; it's not going to do what we want, so the test is going to break. Somebody on the team might point out, "Oh, well, the test is coupled to the implementation; this is very bad." This is a common criticism made against mocking, but I don't think it's necessarily bad. I think it's a bit of a naive criticism, because we write a test and then assume it's going to save us from 800 different types of change the system might undergo, when at most it can help us with one or two. So when you're writing a test, think: this test is going to make me safe against a certain kind of change, maybe, but design that up front.

For example, if you're writing a test of something that doesn't have any dependencies, it's probably a pure function, and the test is specifying logic. That means that if the rules of that logic change, the tests need to change. If you're just refactoring from a for loop to a forEach or something, you're probably going to be fine, but if you're adding a grace period to these rental durations, the rules of the game change, so you have to update all of your code examples; the test is coupled to the implementation in that way. If you have another kind of test, though, where you're mocking all the dependencies of the subject, what it does is specify those relationships, so when the contracts change, the test needs to change. In the previous example, if the duration gains a grace period inside one of the subject's dependencies, that's internal, that's at arm's length; we don't have to worry about it, and this test should keep passing. But if we literally change what we depend on to do our job, of course the test is going to break, and you need to fix it. Where this really flies off the rails is when
people write subjects that both mock out dependencies and also implement some amount of logic; it gets really painful. To illustrate, our own example does this, right? We have this currentInventory we're calling, and this fetchRentals service we're calling, but then we've bolted on the responsibility to calculate when the swords are due, and you can see it right here; it screams off the page if you know what to look for: I've got network requests here and all this logic, and it's the same thing in the test, the network requests here and all this logic floating around. If we were given a design critique of this module, we'd say it has mixed levels of abstraction: two of the things we're doing happen with business domain objects that speak our language, like peers throughout the app, but then we're also dealing with all this primitive gunk, implementing logic and multiplying primitive values like integers and such. That's a classic example of a code smell, and the way you'd remediate it is by spinning off that third thing, so that swordStock now has a single responsibility: its job is to break up the work into those three other parts and let them do the real heavy lifting. The thing we just spun off is now a lot easier to maintain, because it just takes in a couple of arguments and spits out an outcome. It's a pure function: really easy to test, really easy to maintain.

So: if a test specifies pure logic, there's no easier test to write; inputs and outputs, nothing else to deal with. If a test specifies relationships, and only relationships, those are completely under your control; they tend not to change unless the contract changes. But if it specifies both, you're going to have really, really long test cases that change for all sorts of different reasons. It's more useless pain.
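The extraction described above can be sketched like this; the names and the due-date arithmetic are invented for illustration, assuming rentals carry a start timestamp and a duration in days:

```javascript
// Pure logic, spun off on its own: no network, no mocks needed.
// Returns the swords from rentals that are due back by the given date.
function swordsDueBy(date, rentals) {
  const msPerDay = 86400000;
  return rentals
    .filter(rental => rental.startedAt + rental.durationDays * msPerDay <= date)
    .map(rental => rental.sword);
}

// The orchestrator now only delegates: fetch, compute, concatenate.
async function swordStock(date, fetchRentals, fetchInventory) {
  const rentals = await fetchRentals();
  const inventory = await fetchInventory();
  return swordsDueBy(date, rentals).concat(inventory);
}

// The pure function tests with plain values, no stubbing at all:
const day = 86400000;
const rentals = [
  { sword: 'rapier', startedAt: 0, durationDays: 2 },
  { sword: 'saber', startedAt: 0, durationDays: 9 }
];
console.log(swordsDueBy(3 * day, rentals)); // ['rapier']
```

A test of swordStock now only specifies relationships (it calls its two fetchers and the calculator), while swordsDueBy is tested with inputs and outputs alone; neither test mixes the two concerns.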
So if you need a sticky note to put up on your team's wall, I encourage everyone to write functions that either do something or delegate to someone, but not both.

All right, the last of the less obvious things I see people do is mock dependencies halfway down the call stack. What I mean by that: let's say we're writing a system to handle travel expenses, so we have our invoicing app, and we're communicating via HTTP to get some JSON back from our expense system. We have a method sendInvoice, which calls buildInvoice, which filters the approved expenses, groups them by purchase order, and then loads the expenses, and that breaks out to HTTP to load them from the expense system, and then it all comes back up the chain with intermediary DTOs, data models, at each layer. So you might ask, "Hey, I'm writing a test of this sendInvoice thing; which layer do I mock?" It's a good question, and my answer depends on what kind of test you're writing. If you're writing an isolated unit test as you develop, I would always say mock out the direct dependency, because you're going to get design feedback about what it's like to invoke that thing, and the test data is going to be completely minimal, so the contract between those two things is really clear. If you're trying to get regression safety, to just make sure that things work, I would either mock out nothing, or mock out only external systems, so that you're using the system like a real user would. The benefits there are that your test data is going to be representative of real life (it's probably going to look like HTTP fixtures), and additionally, if the test breaks, you know what that means: the thing doesn't work anymore. But I see a lot of teams reach for the convenience of mocking at whatever arbitrary depth comes to mind, and this is kind of the worst of both worlds, because when it fails, it doesn't mean anything other than that somebody might have changed something, and now you
have to update this test. The data you're loading, the fixtures, are all coupled to that intermediate thing, so your data could fall out of sync at any moment, and yet this is probably 80 percent of the way I see people mocking out data providers in their JavaScript apps. To summarize: if you mock out a direct dependency, the failure is meaningful, because it means the contract has changed. If you mock out an external system, again, it's a meaningful failure: it means your thing is broken. But if you mock out an intermediate layer, a failure probably just means you have to go do the chore of updating that test, and your eyes will glaze over when the build fails, because you know you just have to go update stuff. The common thread between all of these is that when we reach for mocks as just a tool of convenience, they undercut our overall test strategy's return on investment, and that tends to make "mocks" a four-letter word on a lot of teams. As I said that, I realized "mock" was already a four-letter word, but you get my point.

Let's move on to the questionable uses of mocking. The first one I see a lot is when people write mocked-out tests for existing codebases. Let's say that you work at an Internet of Things doorbell company. It's a startup, you've got a lot of spaghetti code, but you just got a lot more funding, so you can finally write all those unit tests that you never did, and you've been really ashamed of the fact that you're at 0 percent code coverage. So you think, "What's the first test that I could write?" Let's write something real simple and integral to what our system does: when you ring the doorbell, the ding count goes up. So we require the doorbell, and then we require the thing that rings the doorbell; we create a new doorbell, pass it to the subject, and make sure the ding count has been incremented. We run this test, and, well, we get a stupid error, because you've got to have a door for a doorbell. No problem: I'm
going to import the door, instantiate the door, pass it to the doorbell, run my test, except for the fact that a door requires a house. Fine: import the house, instantiate it, pass in the door, which passes to the doorbell, which passes to the subject, run the test, and the house requires a paid-subscription network connection to some other service. Now you're just upset and frustrated, because the thing about untested code is that very often lots of code paths, especially ones that create values, are only ever invoked in one place, so this kind of cruft can just accrue. One of the best things about testing is that all your code gets lots of exercise in lots of different contexts, so you're incentivized to make value objects that are cheap and easy to construct. The right thing to do here is to make your value objects easier to construct, but of course what people do instead is mock all that stuff out, replace it with a fake doorbell, set a dingCount on it, and get to passing. This is not the right reaction. If I had to summarize this into a motto, I'd say: mock out dependencies, mock out the things that have application logic and do the work, but pass in real values. That type information should be valuable, and it should be easy to construct those values.

Besides, tests like this aren't going to move the needle very much in this person's situation. With an untested system, what you probably want is more safety to aggressively refactor it, in which case I recommend you test at a distance. Think of ringBell in the context of its application: maybe there's an HTTP router in front of it, and we could write a test that runs in a separate process and interrogates the app just the way a real user would, sending a POST request to exercise the behavior we want to test. Assume that maybe it writes to a data store somewhere.
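As an aside, the "make value objects cheap to construct" advice can be sketched with default parameters, so a test can build a Doorbell without hand-assembling the whole object graph. All class and property names here are invented for illustration, not the speaker's actual code:

```javascript
// Each value object supplies sensible defaults for its collaborators,
// so constructing one deep in the graph costs a single expression.
class House {
  constructor({ address = '123 Example St' } = {}) {
    this.address = address;
  }
}

class Door {
  constructor({ house = new House() } = {}) {
    this.house = house;
  }
}

class Doorbell {
  constructor({ door = new Door() } = {}) {
    this.door = door;
    this.dingCount = 0;
  }
}

// The subject under test: real logic operating on real, cheap values.
function ringBell(doorbell) {
  doorbell.dingCount++;
  return doorbell;
}

// Because every constructor has defaults, the test needs one line of setup:
const bell = ringBell(new Doorbell());
console.log(bell.dingCount); // 1
```

No fake doorbell is needed; the real values were simply made easy to construct.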
After we've done that, we can run another request and see whether it had the impact we wanted, or maybe look directly at the database. This is going to provide way more safety to actually start changing and improving the design of that ringBell subject. Now, integrated tests are indeed slower, and they fail for more nonsensical reasons, but they do provide more refactor safety, and if you're in this situation, they're going to be way more bang for your buck from a code coverage perspective, and they actually provide some sense that your build means something.

The next questionable use I see a lot is enabling highly layered designs by leveraging mocks. Something I've seen over the years is that testing tends to push us to make smaller objects, so instead of one gigantic horse-sized duck of a module, we're writing a hundred duck-sized horses: instead of a gigantic order.js with 800 methods all over it, we might have a bunch of carefully named, separate responsibilities in separate files, and so forth. Now, I can talk all day about why big modules are bad, but it's not the case that smaller modules are better just because they're smaller. In fact, if every time you pick up a new feature and go to implement it, you're just creating the same six cookie-cutter objects over and over and over again, you're not actually improving the design; that's just large objects with extra steps. You're adding a bunch of indirection, you're adding a bunch of files, but you're not really improving things. So you might ask, "Hey, what does this unsolicited design feedback have to do with mocking?" Well, I'll tell you. If we were testing all this stuff and we made a change to a bottom layer, a really heavily depended-on module, of course we would have to update that module's test, but normally it would also fail all the tests of the things that depend on it, and that would
disincentivize a layering like this. But when mocking comes in, a lot of teams will be like, all right, cool, I can just fake out all the layers beneath me, and that means I can create these arbitrarily deep, stacked-up applications with the tests no longer providing me that feedback. So mocks actually add a blind spot to teams in this situation, and I would caution you that layering is not the same thing as abstraction. You're not doing domain modeling if all you're doing is adding more and more layers of indirection. Of course I love small modules; I love small things. Make small things! But make sure that they're meaningful, that they serve a distinct purpose, that the name means something, and that you're not just creating a bunch of files for no reason. What I'd rather see is an application that has only one controller, but it knows who to call at the right time. So maybe there is a responsibility to create an order, and I just use that as an escape hatch, like a main method, for the create-order thing, so that it can focus only on what's special about creating orders and doesn't get sucked into all the carrying-the-water that becomes necessary for every single feature when you just copy and paste the same stack over and over again. So if you're in this situation, and you're finding that your isolated unit tests aren't providing you with useful design feedback, that they're not providing you value, don't do it anymore. In fact, maybe question whether your architecture is too repetitive and redundant and could be improved.

The last questionable thing I see is folks who rely too heavily on call verification, where a mocking library lets you verify that a call happened, and a lot of people get really excited about that. I'm a big believer, in general terms, that we come to value whatever it is we measure, and an assertion is a measurement, and so our assertions tend to steer the
design of our systems. So let's say that we run a petting zoo, and you were given a copy of testdouble.js on your birthday, so you're really excited to go mock things out. What you're excited about is: you love petting the pets, but now you can finally assert when that happens. So you pet the sheep one time, you pet the llama, you can even say, hey, I pet the sheep two times. You don't pet the crocodile, and you can assert that too. Sorry, crocodile. We could write a test for this, right? When we pass in this function, a kid, and the sheep, it'll return true, as well as for the llama, and false for the crocodile. We invoke that subject with those three animals, and then we verify that the sheep and the llama got pet but the crocodile didn't. To implement something that passes this test, we import those two things, and then we pass in the kid and the animals; for each of those animals, if the kid likes the animal, we pet the animal. Now we've got a passing test. Great. There's one problem, though, which is that kids have dirty hands, and the pet function doesn't say how dirty, and we've gotta clean the animals in this petting zoo. So you just have to guess, and you set up a cron job to hose down the llamas every night at 10:00 p.m.
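The verify-based test just described might look something like this minimal sketch. It uses a hand-rolled spy in place of a mocking library's call verification, and the names here (`petAnimals`, `likes`, `makeSpy`) are illustrative assumptions, not the talk's actual code:

```javascript
// Subject under test: pets every animal the kid likes.
// Note it's impure: it returns nothing, which is exactly why
// call verification feels like the only way to test it.
function petAnimals(kid, animals, pet) {
  animals.forEach((animal) => {
    if (kid.likes(animal)) pet(kid, animal)
  })
}

// Hand-rolled spy: records every call so we can verify afterward
function makeSpy() {
  const calls = []
  const spy = (...args) => { calls.push(args) }
  spy.calls = calls
  return spy
}

const kid = { likes: (animal) => animal.name !== 'crocodile' }
const sheep = { name: 'sheep' }
const llama = { name: 'llama' }
const crocodile = { name: 'crocodile' }

const pet = makeSpy()
petAnimals(kid, [sheep, llama, crocodile], pet)

// Verify: the sheep and llama got pet, the crocodile didn't
console.assert(pet.calls.some(([, a]) => a === sheep))
console.assert(pet.calls.some(([, a]) => a === llama))
console.assert(!pet.calls.some(([, a]) => a === crocodile))
```

The test passes, but notice it never asks what `pet` should return, which is the trap the talk goes on to describe.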
which is a tremendous waste of water. So we want to do better. What happened here is actually an example of where tools can mislead us. Tools are awesome because they save us time by reducing the necessary thoughts and actions we need to take in order to get a job done, but if we're not careful, sometimes our tools can actually eliminate useful thoughts, and that's what happened in this case. Just because you're able to verify calls with your mocking library, you have to realize that it might encourage you to write more impure functions, and impure functions (those that have side effects) are harder to maintain than pure functions (those that return values). Because nowhere when I was writing this code did I think to ask, hey, what value should pet return? Because I had a convenient way to assert that it was just called. If I had thought about that, I'd say, well, the pet function should probably take an animal and then return a dirtier version of that animal, I guess. So let's pretend we did that: here we're gonna stub that when we pet the sheep we get a dirty sheep, and when we pet the llama we get a dirty llama. Now we care about this result, and we're gonna assert that we get a dirty sheep, a dirty llama, and a clean crocodile, so other parts of our system know which animals to clean at the end of the day. We update the implementation like this: first, we take away this forEach. Whenever you see a forEach, or any kind of loop, you know your application is just screaming side effects, because it doesn't return anything. So we change that to a map instead, trade one array for another, and return the value of that pet call; in the case where the kid doesn't like the animal, we just return the animal as it was. And this passes the test, so now we know to wash those first two, and we can spare the third. There's just one last thing here: if you actually run this test, you'll see a really big warning at the top, because testdouble is gonna
tell you that you stubbed and verified exactly the same call, and that that was probably redundant. What does that mean? Well, if you look at this test, we're now verifying explicitly every single thing that happens, but we no longer need to be, because we're also stubbing these things and asserting on the values that come back, and so it's almost provably redundant. Now we can just remove that stuff and simplify the test. In general, after years and years and years of writing and using mocking libraries, I've come to view verification as the assertion of last resort: only if it really makes sense for the thing I'm calling to not return a value, or if it's out of my hands, out of my control, do I actually write assertions using verification.

The common thread between all three of these things is that we forget that mocking libraries were invented to provide rich design feedback, to improve the design and the simplicity of our systems, and if we ignore that, then we're just introducing yet more useless pain.

All right, so that was a lot of things not to do. There's one thing that I really like, so let's talk about that. Years ago I was really inspired by this book called Growing Object-Oriented Software, Guided by Tests by Steve Freeman and Nat Pryce, and I've iterated on it enough that Nat has kindly asked me to call what I do something else, so I call it discovery testing. We're gonna do a quick demo. Suppose that you write conference talks, but it takes too long: there's just too much emoji and fiddling and stuff, so you decide to automate it. So you take something that loads up all your notes, you run it through a sentiment analyzer to find what emoji should match up, and then you generate a keynote file. If we start a test like this, and we're practicing discovery testing, I always start with the setup at the very top layer, and I ask myself: what's the code that I wish I had? Because I'm lazy, and I'm also relatively incompetent, and when I'm
looking at a blank code listing I just panic and assume that I'm too dumb to figure it out. But if I could just say, hey, I wish I had something that loads notes, and something that pairs those notes up with emoji to create slides, and then finally something to create the keynote file for me, that I can understand. Then I load up my subject and I can start thinking about the test. Well, of course the test should be: it creates the keynote from the notes. I run this test and it fails, because I'm talking about code that doesn't exist. It says, hey, this module isn't real, so I just touch it and run it again; then this other one, the pair emoji thing, isn't real, so touch it, run again; and then create file isn't real, et cetera, et cetera, and I get that to passing. What I love about that is that it gives me small, incremental forward progress throughout the day. Even if all I accomplished just then was the plumbing of my application, it feels like paint-by-number. Programming is an area of professional life where I really wish we'd get more feedback through the day, like, hey, good job; bad job. Sometimes people go days, weeks, months without any real feedback that they're on the right path, so testing this way soothes my anxiety.

Now I've broken this work up into at least three responsibilities, so I can start writing the actual tests. I assume I'm gonna need something to represent notes, so I create a note domain module, and I stub that, given a search string, it should call back with those notes. I assume I'm gonna need a slide value as well, so when I call this pair emoji thing with the notes, it should just return these slides synchronously. I call my subject with that search string and the given path that I want to write the file to, and then I verify that create file was called with those slides at that path. I run my test again, and here's the message I've been
working for all this time, right? It expected this to be called with these slides at that path. I can make this test pass by importing the three still-empty modules (we're gonna fake those out at runtime), passing the topic to load notes, getting the callback, calling back, creating the slides, and then creating the file with those slides at that given path. I run this test and now it passes. Now, you'd be right to ask: that was a lot of words and minutes for five lines of code; how is this actually a productive use of your time? And the answer is, I've actually done a lot of work here just by thinking through all these things. I just agreed: hey, this is our public API of these two strings. I know that when I pass a topic into load notes, I get notes back. I know pair emoji trades notes for slides, and I know that we pass slides and a path to some other responsibility. The subject at this point, the entry point, is done; I'm not kidding when I say I often never look at that top level again. I've successfully broken it up into three problems. The work is broken down into three things where we know the contract, so we don't have to worry about the fiddly bits getting all tangled together at some point. They have different jobs that we know in advance, and we've already proven out the contract between those things, so we know what to call them with, because there's already something calling them, and that's great. Additionally, we sussed out a couple of value types that we want to be passing through these methods: we have some notes, and so on. And the next step is, well, it's a tree, right? It's just recursion. So we ask: okay, what do we want load notes to do? Well, it's gotta read from some note file, parse our outline, and then flatten those points so that we have a linear presentation. The first is I/O, and the second and third are just pure functions, and writing tests for pure functions, like we talked about, is really easy, so we want to
maximize those. The pair emoji thing: maybe we tokenize those notes, then we hand that off to a sentiment analyzer, and then we convert them to slides. The first is a pure function, the second is probably gonna be a wrapper, because I don't want to figure out how to do sentiment analysis, and the last, another pure function. For this last bit, we want to build some kind of layout from those slides, like text goes on top, emoji down here, or whatever; then we need to generate AppleScript commands, so we have a big array of everything we need to automate with Keynote; and then finally something that's responsible for rifling through all of those commands and generating the file for us. The first and second, again, pure functions, real easy, and the last one would just shell out to this thing called osascript that's gonna run AppleScript and automate Keynote.

What I love about this process is that it gives me reliable, incremental progress throughout my day. What shakes out the other end is a bunch of single-responsibility units with intention-revealing names. The organization of all these small things is actually discoverable: if I create a new directory every time I recurse, then if you're reading my code, you only have to dive as deep as you're interested in order to answer the question you're looking for. It separates out all of the values from the logic by default, so it doesn't tangle those up, and I spend most of my time writing really easy, synchronous, pure functions, which are the easiest kind of code to maintain.

That's all I got, so thank you for being patient. I'm really grateful that you showed up to a talk late in the day. Like I mentioned, Test Double is a consultancy; if your team is looking for experienced developers, or maybe you're only thinking about hiring but might be open to talking to us instead about how we might be able to help out and integrate with your team, I'd love to meet you.
Check out our website. We're also hiring, so if you'd like to work remotely, you kind of share our passion for improving how the world writes software, and you want to share your experience with others, check out our join page. As for video of this talk: I've got a previous one from AssertJS in February, and I've already got this one queued up, so I'll be tweeting the video link momentarily to my Twitter account. We'd also love to see your feedback through our website form. Also, I've got brand new Test Double stickers with a new alternate logo; I'm gonna be here all week, and if you ask for a sticker I'd be happy to give you one.

There's just one last thing I want to share before we go, which is my wonderful wife Becky, down in front. She put in a lot of work this morning: I took a Lyft out to the store, and, just for y'all's benefit (I'm not the generous type, but she is, and I am like the head of marketing for Test Double, so I saw an opportunity), she purchased every last bug spray at the Target nearby. So thank you, Becky. And then I affixed branded stickers onto each of them. So here's the deal: because there's not enough of these for everybody, if you come and meet us at dinner, and you talk a little bit, and you say just enough to demonstrate that you actually showed up to a 5:45 p.m. talk, we will happily give you a bug spray until we run out of them. That's our little gift to you. Thank you for your patience; I had a really good time speaking here today.
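As a coda to the discovery-testing demo above, here's a minimal sketch of what that top-level entry point and its test-time fakes might look like. This is a reconstruction under assumptions, with hand-rolled stubs and a spy standing in for a mocking library's module replacement; the responsibility names (loadNotes, pairEmoji, createFile) follow the talk, but everything else is illustrative:

```javascript
// The subject: a few lines of plumbing that delegate to the three
// responsibilities discovered during test setup
function makeCreateKeynote({ loadNotes, pairEmoji, createFile }) {
  return function createKeynote(topic, path) {
    loadNotes(topic, (notes) => {
      const slides = pairEmoji(notes)
      createFile(slides, path)
    })
  }
}

// Hand-rolled fakes playing the roles the test describes
const notes = ['intro', 'conclusion']
const slides = [
  { text: 'intro', emoji: '👋' },
  { text: 'conclusion', emoji: '🎉' },
]
const created = []

const createKeynote = makeCreateKeynote({
  // stub: given a topic, calls back with our canned notes
  loadNotes: (topic, callback) => callback(notes),
  // stub: trades exactly those notes for our canned slides
  pairEmoji: (gotNotes) => (gotNotes === notes ? slides : []),
  // spy: records what it was asked to create, and where
  createFile: (gotSlides, path) => created.push({ gotSlides, path }),
})

createKeynote('jsconf', '/tmp/talk.key')

// Verify the contract: createFile got the slides at the given path
console.assert(created[0].gotSlides === slides)
console.assert(created[0].path === '/tmp/talk.key')
```

Once a test like this passes, each fake becomes the contract for the next, smaller responsibility to implement, which is the recursion the talk describes.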
Info
Channel: JSConf
Views: 3,255
Rating: 4.9349594 out of 5
Keywords: jsconfus2018, tracka
Id: x8sKpJwq6lY
Length: 39min 2sec (2342 seconds)
Published: Thu Nov 15 2018