GopherCon 2017: Mitchell Hashimoto - Advanced Testing with Go

Captions
Hello, and thanks for being here. This is a lot more people than GopherCon 2014, an order of magnitude more, it feels like, in this room today. I want to talk about advanced testing with Go. I got a good introduction, so I can skip this slide with my face on it. I founded a company called HashiCorp (thank you, Ashley, for this really adorable graphic; my fiancée is in love with this thing). We love Go at HashiCorp: the majority of our lines of code and projects, by pretty much any metric, is written in Go. These are our open-source projects written in Go, but that doesn't cover the closed-source libraries we use for our enterprise products; there's a lot more written in Go. Something that makes us a little unique is that Go has been our primary language for five years. It was definitely a bet five years ago, and it's paid off; it's been great.

Our projects have a bunch of properties that make testing interesting. There's the scale aspect, which a lot of people have: our projects are deployed by millions of units, whatever that unit ends up being. There are millions of users for sure, and at this point we're hitting millions of servers as well. They're also deployed significantly in enterprises, which have different expectations of software in a variety of ways. We also build several different categories of software, which affects our approach to and viewpoint on testing. We have distributed systems: Consul, Serf, and Nomad can all be viewed, at their core, as distributed systems. We have software where performance matters a lot on certain metrics: Consul read performance matters quite a bit, and Nomad scheduling performance matters a lot. We have tools like Vault: Vault is a security tool, so there's an almost-has-to-be-perfect degree of security we need to maintain, and testing has to feed into that. And then there's correctness, where things not working correctly can be very detrimental. We ship bugs; everyone ships bugs, and I'm not claiming we write perfect code. But with something like Terraform we need to make sure that when we ship an update, someone doesn't run a change and have all their infrastructure disappear, or update Consul and lose all their data, or update Nomad and have us decide to deschedule all their services. There are catastrophic things on the correctness scale that we absolutely need to prevent, and short of the catastrophic there are finer details of correctness we want to ensure.

The way this talk works is that there are really two parts to testing, and you need both to produce good tests. There are test methodologies, which is how to write the tests themselves, and there's writing testable code, which I think is equally important: you can't take just any kind of code and write great tests for it. To keep the slides simple, slides with a black background are about test methodologies, and slides with a white background are about writing testable code.
In a bit more detail: test methodologies are methods for testing specific types of cases you run into when you write tests, techniques for writing better tests, where "better" gets defined in a bunch of different ways throughout the talk. The point is that there's a lot more to testing, as we all probably know, than asserting something. Writing testable code, meanwhile, is how to write code that can be tested well and easily. This matters because it's very common, from junior to senior engineers, to hear people say "this just can't be tested well, so I didn't write a test, but I ran through it." And they might not be wrong; I might look at the code and agree it can't be tested in any reasonable way. But by refactoring or re-architecting it, you can usually get to a point where at least 90% of the functionality is tested, with maybe the hardest 10% left untested. At HashiCorp I don't think we've ever seen anything that couldn't be tested very well, and we hit a pretty broad spectrum. Rewriting existing code can be a pain, but for testability it's usually worth it.

From here on out we're going to dive right in; it's going to be a machine gun of about 30 different methods and ways to write testable code. One last thing about the slide format: the topics are roughly ordered from things I expect everyone in this room to know to things that are more and more esoteric and weird. So don't be discouraged at the beginning if you think this is a really beginner talk; I'm going to ramp it up. I can't promise everyone will take at least one thing away, but I hope everyone will.

Starting right at the beginning with simple stuff, and skipping how to write a single Go test (I assume everyone knows how to write one Go test): subtests. Subtests are officially new in Go 1.8, and they let you nest tests within a test by calling t.Run. I didn't show the output, but when you run the tests you can target specific subtests, and the output lists all of them. There are a bunch of benefits. For one, each subtest body is a closure, so defers work within it: if you have a huge set of test cases and you're opening files or making network connections, you can defer the cleanup inside each subtest rather than accumulating it. (In the past, before this was official, I would make an anonymous function and call it right away just to get that defer behavior.) And you can nest them infinitely: it's built into Go, you're allowed to target nested subtests, and you can keep nesting as necessary. It's hard to explain the value of subtests without talking about table-driven tests, since that's the nine-out-of-ten use case for subtests that I've found; a quick sketch of the plain subtest shape first.
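A minimal sketch of the subtest pattern just described (the Add function and case names are illustrative):

```go
package add

import "testing"

// Add is a stand-in for whatever is under test.
func Add(a, b int) int { return a + b }

func TestAdd(t *testing.T) {
	// Subtests nest via t.Run and can be targeted individually,
	// e.g. `go test -run 'TestAdd/positive'`.
	t.Run("positive", func(t *testing.T) {
		// Each subtest is its own function, so this defer runs when the
		// subtest finishes instead of accumulating until TestAdd ends.
		defer func() { /* per-case cleanup goes here */ }()

		if got := Add(1, 2); got != 3 {
			t.Fatalf("got %d, want 3", got)
		}
	})

	t.Run("negative", func(t *testing.T) {
		if got := Add(-1, -2); got != -3 {
			t.Fatalf("got %d, want -3", got)
		}
	})
}
```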
Table-driven tests look a little like this, and I've used the subtest syntax here too. You build a table of test cases within a single test and run through them. In this example we're testing addition: each case specifies the operands A and B and the expected value of adding them, and then you loop over the cases and run each one as a subtest. I'm showing a few things here. One is naming the subtest: here I name it with the value actually being tested, which is sometimes useful, and I'll show another option in a moment. The other is using real subtests in the loop.

We use table-driven tests a lot at HashiCorp. I like to default to a table in a lot of cases, even when I have a single case: if I look at something and think there's probably a scenario where we'll want to test other parameters some day in the future, I'll set up the table structure from the beginning. It's low overhead to add a new case, which is the best situation when you're fixing a bug: creating a regression test is as easy as adding a row. It also makes testing exhaustive scenarios simple, both technically and visually; depending on what you're testing, it's easy to see at a glance whether you've covered all the edges of your function. So I recommend doing this broadly: a little overhead, a lot of long-run value.

The other thing I'd say is: consider naming the cases. Here's another example where we put a name field in the case struct and use it as the subtest name. Long ago, around four years back, when we did table tests we generated the names ourselves (they weren't real subtests), but as table-driven tests get more complex, unnamed cases become really unclear. If you just loop with indexes, the failure message is something like "test index 314 failed," and it's really hard to find case 314. There have definitely been times in Terraform where we've gone through a file counting 0, 1, 2, 3, 4; that's usually when we start adding names. Not naming cases was a mistake we made early on. A sketch with named cases follows.
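A minimal sketch of a named, table-driven test, reusing the hypothetical Add from the previous sketch:

```go
// Add is defined in the previous sketch.
func TestAdd_table(t *testing.T) {
	cases := []struct {
		Name string // a name per case beats hunting for "index 314"
		A, B int
		Want int
	}{
		{"zeros", 0, 0, 0},
		{"positive", 1, 2, 3},
		{"negative", -1, -2, -3},
		{"mixed signs", -1, 2, 1},
	}

	for _, tc := range cases {
		t.Run(tc.Name, func(t *testing.T) {
			if got := Add(tc.A, tc.B); got != tc.Want {
				t.Fatalf("Add(%d, %d) = %d, want %d", tc.A, tc.B, got, tc.Want)
			}
		})
	}
}
```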
I also want to add a disclaimer: I'm not claiming at any point in this talk that these techniques are novel. Like table-driven tests, the first place I saw most of them was the Go standard library, or maybe other projects I don't remember anymore. It's just a list. OK, keep going.

Test fixtures. You can access test data with paths relative to the current working directory, from a test-fixtures folder, say, though you can name it anything you want. A little-known thing for a lot of Go beginners is that go test always sets the current working directory to the package being tested: when you run go test ./... and it works through your packages, each time it enters a new package it sets the working directory to that package's directory. That's really helpful because it lets you use relative paths to reach data your tests need. At HashiCorp we use the name test-fixtures, for no specific reason, as the directory where we store test data. It's very useful for things like loading example configurations (we test most of our software against real example configurations, written as files exactly as you would when running the real thing), model data for web applications, and actual binary data: we use test fixtures for things like how Nomad handles tar.gz files or Docker images.

Next: golden files. This is something I definitely saw in the standard library; "golden files" is, I believe, what they call it there, which is why I call it that. Golden files are a way to compare complex test output against a file that holds the expected result. The place in the standard library where I first saw this (I'm not sure it's the only place) is the gofmt tests: they run gofmt and compare the resulting bytes to a golden file's contents. And they use a flag, which is really interesting: if you declare a global flag in a test file, it becomes available on the go test command. They declare an -update flag, and when you pass it, it rewrites all the golden files; your tests always pass under -update, because the files get regenerated, but afterwards you compare against the updated golden files. It's a much better way to compare lots of bytes than embedding them in a constant at the bottom of the test file.

I'll run through the pattern quickly, since it's a fair amount of code, but it's more or less the shape golden files always take. You usually have a table of cases, each associated with a golden file (either a literal table or one generated from a directory). For each case you do something to produce the actual bytes, and you derive the golden file's path, usually from the case name. If the -update flag was specified, you write the actual bytes to the golden file (and don't judge me, I'm ignoring errors here because of limited slide space). Then you read the golden file and do a bytes-equal check, and if it fails, you show a diff somehow. The gofmt tests, for example, have a really nice diff function that shows the actual difference rather than "here's bytes one, here's bytes two, they don't match." Depending on what I'm testing, sometimes I'll just dump both, sometimes I'll produce a real diff. When you run go test, it looks like this: you can introduce whatever flags you want on go test, and we use that for things other than golden files too, for example flags that disable certain very side-effecting or very slow tests.
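A minimal sketch of the golden-file pattern as described (the Render function, file layout, and names are illustrative; error handling is trimmed as in the talk):

```go
package render

import (
	"bytes"
	"flag"
	"io/ioutil"
	"path/filepath"
	"testing"
)

// Render stands in for whatever produces the complex output under test.
func Render(s string) []byte { return []byte("rendered: " + s + "\n") }

// A global flag in a test file shows up on the go test command line:
//   go test -update   # rewrites the golden files
var update = flag.Bool("update", false, "update golden files")

func TestRender(t *testing.T) {
	cases := []struct {
		Name  string
		Input string
	}{
		{"basic", "hello"},
	}

	for _, tc := range cases {
		t.Run(tc.Name, func(t *testing.T) {
			actual := Render(tc.Input)

			// go test sets the working directory to the package being
			// tested, so relative fixture paths are safe.
			golden := filepath.Join("test-fixtures", tc.Name+".golden")
			if *update {
				ioutil.WriteFile(golden, actual, 0644) // errors ignored for brevity
			}

			expected, _ := ioutil.ReadFile(golden)
			if !bytes.Equal(actual, expected) {
				t.Fatalf("%s: output does not match golden file:\n%s", tc.Name, actual)
			}
		})
	}
}
```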
OK, that was a lot of test methodology; now our first writing-testable-code topic, and there will be a lot more from here on out. Global state: I think this is pretty obvious, but avoid it as much as possible. There are a lot of reasons to avoid it, of course, but in the context of tests it's especially important, because global state makes your tests' behavior potentially change depending on the order they run in, and it makes it difficult to reason about the full set of inputs affecting some behavior. Instead of using global state, we try to make whatever was global a configuration option. Maybe we set up a constant with the default value at global scope, but we still make the behavior configurable, because you usually want to twiddle it for tests. The slide shows "not good," "better," and "best." It's a weird spectrum: you probably shouldn't be using global state at all, and a globally modifiable variable seems worse than a constant. The variable is usually "better" anyway, because constants really limit your testability; with a variable you can at least change behavior in a test. But best is to keep the default as a top-level constant and provide some other way, like a struct field, to modify the effective value.

Test helpers. Here's an example of a test helper that creates a temporary file, with a few properties worth calling out. One: test helpers should never return an error. Functions in Go should generally return errors; test helpers should never, ever return one. They should take the *testing.T and simply fail the test when something goes wrong. By not returning errors, you also make the call sites much clearer: if your helpers return errors, the tests that use them turn into "call helper, if err != nil, t.Fatal" over and over, whereas helpers that fail on their own give you a dense block of setup calls right at the top of the test. That removes a lot of visual overhead when you're trying to figure out what a test does.

The other thing I'll point out is a Go 1.9 feature, t.Helper(). Calling t.Helper() inside a test helper makes the failure output point at the calling test rather than the helper; helpers have been notorious for failing in ways where it's hard to find the exact test responsible. If you're moving to Go 1.9, I recommend dropping t.Helper() into all your test helpers.

The other neat trick: if you have cleanup to do, return a closure. We usually return a plain func() to do the cleanup. In the temp file example, the cleanup deletes the temp file: we do the setup, fail via t if we must, and return the file path plus a closure that deletes it. The benefit is that the closure retains access to t, so it can check its own errors. In the production version of this we check the error from os.Remove, and if it fails we call t.Fatalf; the caller doesn't have to handle anything, because we already closed over t. Used in a test, it ends up looking like the sketch below.
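A minimal sketch of the helper-plus-cleanup pattern just described (names are illustrative):

```go
package helper

import (
	"io/ioutil"
	"os"
	"testing"
)

// testTempFile creates a temp file and returns its path plus a cleanup
// closure. It never returns an error: it fails the test itself.
func testTempFile(t *testing.T) (string, func()) {
	t.Helper() // Go 1.9+: failures get reported at the caller, not here

	tf, err := ioutil.TempFile("", "test")
	if err != nil {
		t.Fatalf("err: %s", err)
	}
	tf.Close()

	// The cleanup closure also closes over t, so it can fail the test
	// itself without the caller handling any errors.
	return tf.Name(), func() {
		if err := os.Remove(tf.Name()); err != nil {
			t.Fatalf("err: %s", err)
		}
	}
}

func TestThing(t *testing.T) {
	tf, cleanup := testTempFile(t)
	defer cleanup()

	_ = tf // ... use the temp file ...
}
```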
Sometimes you get really clever, and it's not always beneficial for readability, but if a helper only sets up side effects, some world state, and has no meaningful return value, you can return just the cleanup closure and one-line the whole thing. We have a test helper in more or less all our projects that changes the working directory for a test (usually we're changing directories to test CLI behavior), and the real version is a bit longer because we check all the errors, but you can write the call site as a single line: defer testChdir(t, dir)(). The downside is that a new developer coming into the project and seeing that line won't find it obvious what it does. The argument we make is that we only use this trick where we think the pattern is clear enough, and we think this particular case is. The most important points stand: use the testing.T in your test helpers, and don't return errors from them.

I should also say that I expect people to disagree with some of these. This is one of them: multiple times at HashiCorp we've hired very experienced Go engineers and it has rubbed them the wrong way at first, but over time we've found everyone comes around. Repeat yourself in tests. What I prefer overall in tests is logic that is as localized as possible. When a test fails, usually I or someone else wrote it a long time ago, it's failing because I'm doing a refactor or adding a feature, and I don't remember the details of the test anymore. There's nothing more frustrating to me than opening a test and realizing it calls seven different functions across four different files, and having to rebuild all that mental context of what's happening and why. It's much easier, I think, to have a huge test, two or three hundred lines of code, that does everything right there, so I can read through it and understand exactly what it does. The project where we do this the most is Terraform: look at the files named context_*_test.go, where each test is around two hundred lines. When we write a new test, we take an old one, copy it, paste it, and modify the five lines we need to change. It feels bad sometimes, but it's been four years, we get a lot of failures in those context tests because they cover the core of Terraform, and it's super helpful to open one vim panel and have everything you need to figure out why the test is failing. So: copy and paste in tests. We don't practice repeating ourselves much in non-test code, but in tests we prefer a 200-line test to a 20-line test built on abstractions.

Packages and functions. I saw some slides on Twitter about anti-patterns that touched on this; for tests, I agree with some of it and question other parts. This one is hard, because it only comes with Go experience; there are no hard and fast rules for when to make a package and when not to. What you want is to break functionality down into packages and functions when it makes sense.
"When it makes sense" is super difficult to explain, and you also don't want to overdo it, which is equally hard to explain, but doing this correctly aids testing quite a bit. It gives you a much cleaner surface area to test, you get the safety mechanisms of package boundaries (unexported versus exported), and everything becomes easier to reason about. A good example: Terraform has a dag package for working with a directed acyclic graph. That package was written around 2012 or 2013 and has been touched maybe three times in four years, because a simple graph library doesn't usually need to change. We write the tests there, we know the coverage of that package stays essentially complete, and we know the bugs likely aren't there.

Related (it's actually the next slide, but I'll bring it in here): unless a function is extremely complex, we usually try to test only exported functions. The exported API is where we need to test; but for internal helpers that are really complicated, fan out quite a bit, or have a lot of edge cases, we will unit test those too. Some people take this too far, and the way to take it too far is to do only integration- or acceptance-level tests: "if I test the black-box behavior, I know it works." I personally believe that's a little too far, so we balance testing the exported API with educated choices about testing internal APIs as well. We usually treat the unexported functions and structs as implementation details; by not testing them directly, you make future refactoring a lot easier. But you want the logic they implement, the goals they're trying to achieve, covered via the exported APIs, and if you can't achieve that, you probably do have to unit test the internal pieces.

Then there are internal packages, introduced around Go 1.4 or 1.5. If you create a package named internal, the packages beneath it can only be imported by packages rooted at internal's parent directory; it's hard to state in one sentence, but the effect is a mechanism whereby external people cannot import your internal packages. We like to use this as a license to, perhaps uncomfortably, over-package things. I saw a slide yesterday that said don't over-package, if anything under-package; we used to do that, and Terraform is the living embodiment of the problem. Terraform is under-packaged, and right now it's basically impossible for us to refactor it incrementally: every attempt produces import cycles, because everything ties into everything. With Vault and our newer projects we over-packaged from the beginning; we ended up deleting and merging some packages where that made sense, but we have a much better package breakdown. Internal packages let you do this safely without creating a public API that promises anyone anything: even if you have, and I'm exaggerating here, 500 internal packages, to the end user it still looks like one. The package boundary question is so qualitative and experience-based that it's hard to give firm rules, but good boundaries help testing quite a bit; a sketch of the layout follows.
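A minimal sketch of the internal-package mechanism described here (the module path and package names are hypothetical):

```go
// Layout for a hypothetical module example.com/mytool:
//
//   mytool/
//     main.go              // may import example.com/mytool/internal/graph
//     internal/
//       graph/
//         graph.go         // small, well-bounded, heavily unit-tested
//
// Code outside mytool/ cannot import example.com/mytool/internal/...,
// so you get real package boundaries without any public-API promises.

// internal/graph/graph.go
package graph

// Graph is a deliberately tiny example of a package that can be
// tested in isolation behind the internal/ boundary.
type Graph struct{ nodes []string }

func New() *Graph             { return &Graph{} }
func (g *Graph) Add(n string) { g.nodes = append(g.nodes, n) }
func (g *Graph) Len() int     { return len(g.nodes) }
```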
Networking: how do we test networking? Essentially all of our software does networking to some extent, and what we want is to test real network connections. If you're testing networking, make a real network connection. I don't see the alternative very often anymore, so I think we're in a good spot, but early on, in Go code from 2012 or so, I saw a lot of people trying to mock net.Conn. net.Conn is an interface, so it seems like a perfect place for a mock, but there's really no point in mocking it, as I'll explain. Here's a fully functional (I think it works) helper that creates a test connection with a client end and a server end. All we do is listen, letting the operating system choose the port for us, bound to localhost so we only connect to ourselves, and immediately make a connection. You can see the Accept is not in a for loop: the moment we accept one connection we close the listener, which does not close the accepted connection; we just stop accepting new ones. Then we dial the listener's address and return the client and server connections, which the caller closes when finished. That was a one-connection example; it's super easy to make an n-connection helper, and it's easy to test any protocol this way. At HashiCorp, for example, we test SSH connections like this: we create a real SSH server, create a listener, connect to it, and return the two ends, a real established connection. It's easy to return the listener too if you need it, and easy to test other network types: IPv6, Unix domain sockets, whatever. So, rhetorically: there is no reason to ever mock a net.Conn.
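A minimal sketch of that helper, in the spirit of the one on the slide (this version synchronizes the Accept goroutine explicitly so the server conn is safe to read):

```go
package netx

import (
	"net"
	"testing"
)

// testConn returns a connected client/server pair over a real TCP
// connection; the caller closes both ends.
func testConn(t *testing.T) (client, server net.Conn) {
	t.Helper()

	// Port 0 lets the OS pick a free port; bind to localhost only.
	ln, err := net.Listen("tcp", "127.0.0.1:0")
	if err != nil {
		t.Fatalf("err: %s", err)
	}

	// Accept exactly one connection, then stop listening. Closing the
	// listener does not close the connection it already accepted.
	var serverConn net.Conn
	var serverErr error
	done := make(chan struct{})
	go func() {
		defer close(done)
		serverConn, serverErr = ln.Accept()
		ln.Close()
	}()

	clientConn, err := net.Dial("tcp", ln.Addr().String())
	if err != nil {
		t.Fatalf("err: %s", err)
	}
	<-done
	if serverErr != nil {
		t.Fatalf("err: %s", serverErr)
	}

	return clientConn, serverConn
}
```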
Configurability, on the writing-testable-code side. Unconfigurable behavior is very often the point of difficulty for tests: your code does something that makes total sense for production, so it was never made configurable, but the tests want to change the behavior, make it faster, skip some safety checks, and so on. What we usually do is over-parameterize our structs to allow fine-tuning of behavior, and if you really don't want users touching those parameters, make them unexported fields on the struct. Another thing we do (I don't think I showed an example) is prefix such fields with Test, so people know the parameter exists only for tests. Over-parameterizing from the beginning just makes all of this easier. Take a cache path and port as an example: they may always be the same, they may never change in production, but you should still make them configurable, because in testing you probably do want to change them. Another thing we do pretty commonly is add a single test bool whose comment very clearly defines what behavior it changes. The most common use I've seen, and we've used it a couple of times, is skipping auth in a web application. Say your web application's only login mechanism is OAuth: testing or fully simulating OAuth is kind of a bear, so instead we set something like Test = true and it always "logs you in" as the same person. It doesn't perform any of the OAuth protocol; it takes the request and immediately pretends you're authenticated.

Complex structs: testing complex struct values. The example here is Terraform's graph structure. It has a lot of pointers, children, and data on the nodes themselves, and we want to test that one graph equals another. The blunt instrument everyone uses, and we use it a lot, is reflect.DeepEqual: just verify the structures are the same. A slightly better approach is one of the several really good third-party comparison libraries that generate nicer error messages when values differ. I'm sure a lot of people here have lost an hour, hopefully less but maybe more, to reflect.DeepEqual returning false because one value was an int and the actual type was an int64, and you're dumping both structs and comparing them by eye, thinking "these are the same struct." It makes you want to rip your hair out, and there are better ways.

One thing we do at times, not blanket across all complex structs but where it makes sense, is a pattern we call testString. It's like the Go Stringer interface, but for tests, and unexported: a lowercase testString method that uses a bytes.Buffer or whatever to produce human-friendly output covering exactly the fields that matter, and then the test compares the strings. A good example is the Terraform graph: the graph root, and any graph node, has a testString that renders a human-readable, indented structure representing it, because that's what we care about most when testing, while reflect.DeepEqual would check many fields we don't care about matching. For data structures like trees, linked lists, and so on, this pattern helps a lot. You can certainly still use reflect.DeepEqual or a third-party library, and we do in plenty of places, and honestly the testString thing is pretty blunt: when people first see it, it's kind of weird. But we've had really good results with it for complex structs. The slide shows the testString output we generate for a simple Terraform graph with a single dependency: those are the strings we compare, and when they differ it's easy to produce diffs and failure output that actually help. Compare that with two raw graph structures that, in the bigger Terraform tests, can easily contain two thousand or more nodes; a reflect.DeepEqual failure there is a nightmare to debug.
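A minimal sketch of the testString idea (the Graph type and its rendering are illustrative):

```go
package graph

import (
	"bytes"
	"fmt"
	"sort"
	"testing"
)

// Graph is a stand-in for a pointer-heavy structure whose
// reflect.DeepEqual failures are painful to read.
type Graph struct {
	edges map[string][]string // node -> dependencies
}

// testString renders only what the tests care about, in a stable,
// human-diffable form. It's unexported: a Stringer purely for tests.
func (g *Graph) testString() string {
	var buf bytes.Buffer
	nodes := make([]string, 0, len(g.edges))
	for n := range g.edges {
		nodes = append(nodes, n)
	}
	sort.Strings(nodes)
	for _, n := range nodes {
		fmt.Fprintf(&buf, "%s\n", n)
		for _, d := range g.edges[n] {
			fmt.Fprintf(&buf, "  %s\n", d)
		}
	}
	return buf.String()
}

func TestGraph(t *testing.T) {
	g := &Graph{edges: map[string][]string{
		"root":  {"child"},
		"child": nil,
	}}

	expected := "child\nroot\n  child\n"
	if actual := g.testString(); actual != expected {
		t.Fatalf("bad graph:\n\n%s\n\nwant:\n\n%s", actual, expected)
	}
}
```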
Subprocessing. This one I specifically remember getting from the Go standard library, and I thought whoever did it there was a genius. Subprocessing is a typical point of difficult-to-test behavior, very typical, and there are usually two options when you need a subprocess: actually execute it, or mock it. As an example, say you're writing an application that interfaces with git and runs git status to see what's going on. You could actually execute git, setting up a repository so that git status outputs what you want; or you could add some test configurability that routes around git entirely and hands back mock data. We like to actually execute a subprocess, but it might not be the real binary.

Option one is to execute the real thing, in this case git for real. There's no real complexity there, just subprocess, but we guard the test so it only runs when the binary exists. For example, in a package init for the tests we'll call exec.LookPath as a best-effort check that git is available and set a global variable; then the tests themselves guard on that and skip if it's missing. We use this pattern a lot, usually to cope with Travis CI or other environments where certain kinds of tests are hard or impossible to support.

The other approach is to mock it, and our mocking is a little different, because we never mock the output: we still execute something, but the thing we execute is a mock. To do this, the code that subprocesses needs to make the exact command, the exec.Cmd, configurable, so a caller can substitute a custom one. Don't hard-code git in there, don't hard-code environment variables; let the caller (again, it can be a test-only unexported field) modify it. Like I said, I found this in the standard library: it's how they test os/exec, and it's how we test a ton of things. Here's how it looks. It's not obvious at first, but it's not complicated either. You write a helperProcess function that returns an exec.Cmd which executes the test binary itself, re-entering at a custom entry point: it runs one specific test, TestHelperProcess, passes a -- so we can find the real arguments later, and sets an environment variable to signal that we really do want to run it. On the flip side, TestHelperProcess first checks for that environment variable and returns if it's absent, so a plain go test run does nothing. The -- is the sentinel that marks where the mocked command line begins: if the code under test were running git status, what actually executes is the test binary with -- git status after it. From there you're executing your own program, of sorts; we usually switch on arg 0, arg 1, and the rest, and do whatever the test needs. Doing this, you can subprocess-test anything.
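A minimal sketch of the helper-process pattern (the env var and -test.run re-entry mirror the os/exec tests; the git behavior here is illustrative):

```go
package vcs

import (
	"fmt"
	"os"
	"os/exec"
	"testing"
)

// helperProcess returns an exec.Cmd that re-runs this test binary,
// entering TestHelperProcess with the given args after a "--" sentinel.
func helperProcess(t *testing.T, args ...string) *exec.Cmd {
	t.Helper()

	cargs := append([]string{"-test.run=TestHelperProcess", "--"}, args...)
	cmd := exec.Command(os.Args[0], cargs...)
	cmd.Env = append(os.Environ(), "GO_WANT_HELPER_PROCESS=1")
	return cmd
}

// TestHelperProcess isn't a real test: it's the body of the fake
// subprocess. Under a plain `go test` run it does nothing.
func TestHelperProcess(t *testing.T) {
	if os.Getenv("GO_WANT_HELPER_PROCESS") != "1" {
		return
	}

	// Everything after "--" is the mocked command line.
	args := os.Args
	for i, v := range args {
		if v == "--" {
			args = args[i+1:]
			break
		}
	}

	// Pretend to be `git status` (or whatever the code under test runs).
	if len(args) >= 2 && args[0] == "git" && args[1] == "status" {
		fmt.Println("nothing to commit, working tree clean")
		os.Exit(0)
	}
	os.Exit(1)
}

func TestGitStatus(t *testing.T) {
	// The code under test would accept a configurable *exec.Cmd;
	// here we just run the mock directly to show the mechanics.
	out, err := helperProcess(t, "git", "status").Output()
	if err != nil {
		t.Fatalf("err: %s", err)
	}
	t.Logf("mock git said: %s", out)
}
```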
OK, seven minutes to get through a lot of things. Interfaces. Interfaces are important mocking points; I'm sure we've all heard this before. They're not just pluggable points for consumers, they're important mocking points for testers: when you have an interface, you can test against it by creating a mock implementation easily. As with packages and functions, it's very hard to give firm rules for when to create an interface, and it's very easy to over-interface things; you get the feel over time. I'd say: create interfaces where you expect alternate implementations, and create interfaces where you feel that's the best way to test something, but overdoing it complicates everything. We prefer small interfaces where they make sense: even if we have a big interface, if the only functionality a function needs is an io.Closer, we'd rather the function take an io.Closer than the whole thing. That may be good practice in general, but it's especially good practice for testing, because it shrinks the interface surface area that has to be implemented for a test. A very common example, common in the standard library too, is a ServeConn-style method that takes an io.ReadWriteCloser, even though if you look at the callers, they always pass a net.Conn. Mocking a net.Conn, or even creating a small real one like I showed earlier, is overkill when a simple read-write-closer suffices to test the ServeConn function. That's lowering the responsibility of an interface: it helps testing quite a bit, and it makes the code easier to use from real call sites too.
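A minimal sketch of the small-interface idea (ServeConn and its echo "protocol" are illustrative):

```go
package rpcx

import (
	"bufio"
	"bytes"
	"io"
	"testing"
)

// ServeConn needs only reads, writes, and a close, so it takes the
// small io.ReadWriteCloser interface even though production callers
// pass a net.Conn.
func ServeConn(conn io.ReadWriteCloser) error {
	defer conn.Close()
	line, err := bufio.NewReader(conn).ReadString('\n')
	if err != nil {
		return err
	}
	_, err = io.WriteString(conn, "echo: "+line)
	return err
}

// fakeConn satisfies io.ReadWriteCloser with in-memory buffers; no
// sockets required.
type fakeConn struct {
	in  *bytes.Buffer
	out *bytes.Buffer
}

func (c *fakeConn) Read(p []byte) (int, error)  { return c.in.Read(p) }
func (c *fakeConn) Write(p []byte) (int, error) { return c.out.Write(p) }
func (c *fakeConn) Close() error                { return nil }

func TestServeConn(t *testing.T) {
	conn := &fakeConn{
		in:  bytes.NewBufferString("hello\n"),
		out: new(bytes.Buffer),
	}
	if err := ServeConn(conn); err != nil {
		t.Fatalf("err: %s", err)
	}
	if got := conn.out.String(); got != "echo: hello\n" {
		t.Fatalf("got %q", got)
	}
}
```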
Testing as a public API. This is something we only started doing in the past 18 months or two years, so it's our newer projects that have adopted the practice of adding a testing.go or testing_*.go file. You can tell from the name that the file is compiled into the actual package; it's not test-only code. We started exporting APIs specifically to ease mock creation and test writing for consumers of a library: it lets other people write their tests more easily. One example, from a config file parser: we export helpers like a TestConfig that hands the caller a valid configuration structure, and an invalid one, really basic. Here's a more complicated one that consumers love: it just makes you a server. A TestServer helper returns the address to connect to as a client and a closer to clean the server up. Vault has this: Vault exports a function that creates a fully in-memory, non-durable server that is Vault, so you can create a Vault client against it, test communicating with it, and it's a publicly supported, exported API.

We also export functions that test implementations of an interface against the behavior we expect. The example is our library that downloads things; there's no other way to describe it, it downloads things from anywhere using anything, and you implement the downloader interface to create a new downloader. Some behavior we expect can't be represented in Go's type system: for instance, if you download to a destination directory that doesn't exist, we expect you to create that directory. That's an implementation detail that's easy to miss when you're implementing a downloader, so we export a function, TestDownloader, that runs a table of generic tests over any downloader to ensure it behaves as expected. We also export things like mock structures, so if you want to inject a downloader and assert the right thing was downloaded, you have one ready.

The last piece here: I have a package called go-testing-interface, which defines a testing.T interface (it's literally called that), and we use it in these exported test helpers. If you import the real testing package from non-test code, it registers its flags on the global flag set, so any program importing your package and using global flags suddenly grows -test.run and friends, which is annoying. With the interface you avoid that, and callers can still pass in a real *testing.T: in the TestConfig example you can see the asterisk is missing from the signature, but you use it exactly as if it were a testing.T.

And finally, custom frameworks. go test is a really incredible workflow tool, so when we build things that aren't quite unit tests, we still try to fit them into the go test workflow. (I think my laptop died around here, so I'll talk through the rest without slides.) A good example is Terraform's acceptance testing library: it takes real Terraform configuration, creates real infrastructure from it, and checks it, mostly black-box. We run these every night, spinning up thousands of resources across something like 50 different providers, and we trigger it all with go test even though these aren't unit tests. We built our own framework that we invoke from a test. It has its own API and doesn't look like unit tests at all; it doesn't call Fatalf or anything like that. It's basically a structure that says: here are the config files to run, here are the credentials for the cloud platform, and here are checks to run, like "you should be able to reach this IP" or "this API call should return an instance." Then we run it behind a special flag: just as with the -update flag I showed earlier, we introduce an acceptance flag, and when you run go test with it, the acceptance suite runs and everything else is skipped. We do the same in Vault, where we have an acceptance-test framework for testing secret backends, auth backends, all these different things: a custom framework that makes those tests easy to write while keeping go test as the workflow. There are only a couple more sections, and I'm out of time anyway, so I'll put the slides online and end there. Thank you. [Applause]
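A minimal sketch of gating a heavier suite behind a go test flag, in the spirit of the acceptance frameworks described above (the flag name and the tiny step API are illustrative; the real frameworks are far richer and may gate differently):

```go
package accept

import (
	"flag"
	"testing"
)

// A custom flag on the go test command gates the slow suite:
//   go test -acceptance ./...
var acceptance = flag.Bool("acceptance", false, "run acceptance tests")

// testStep is a tiny stand-in for a declarative acceptance API: the
// test author supplies data and checks, never calls Fatalf directly.
type testStep struct {
	Config string       // real configuration to apply
	Check  func() error // e.g. "this IP should be reachable"
}

func testAcceptance(t *testing.T, steps []testStep) {
	t.Helper()
	if !*acceptance {
		t.Skip("skipping: run with -acceptance to enable")
	}
	for i, s := range steps {
		_ = s.Config // a real framework would apply the config here
		if err := s.Check(); err != nil {
			t.Fatalf("step %d: %s", i, err)
		}
	}
}

func TestInstance(t *testing.T) {
	testAcceptance(t, []testStep{
		{
			Config: `resource "example_instance" "a" {}`,
			Check:  func() error { return nil }, // probe the real API here
		},
	})
}
```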
Info
Channel: Gopher Academy
Views: 34,518
Keywords: software development, gophercon, golang, programming
Id: 8hQG7QlcLBk
Length: 44min 59sec (2699 seconds)
Published: Mon Jul 24 2017