when i write tests, i do it because writing tests lets me make more features faster. that's the only reason.— Michael D. Hill (@GeePawHill) April 11, 2017
Writing tests first forces you to think about the problem you're solving. Writing property-based tests forces you to think way harder.— Jessica Kerr (@jessitron) April 25, 2013
The tension between these two ideas drives me nuts. Thinking way harder means that I'm not delivering features faster.
Example-based testing is straightforward: choose an input, hard-code the output, remove the duplication, and repeat until no remaining example produces the wrong output. You may even be able to estimate the minimum number of tests required just by thinking about the cyclomatic complexity of the problem.
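A minimal sketch of that loop in Python (pytest style); the `add` function and the particular examples are mine, chosen only to make the cycle concrete:

```python
# Example-based tests for a hypothetical `add`: hard-coded inputs and
# outputs, with the implementation grown until they all pass.
def add(x, y):
    return x + y

def test_zeros():
    assert add(0, 0) == 0

def test_positives():
    assert add(2, 3) == 5

def test_negatives():
    assert add(-1, 1) == 0
```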
But this in turn means that you can't easily judge "complete" just from looking at the demonstrated examples. As Scott Wlaschin points out, a finite suite of example tests can be beaten by a pathological implementation that is fitted to the suite.
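Wlaschin's point can be made concrete with a deliberately pathological version of that same hypothetical `add`: a lookup table fitted to the three examples above. It passes the whole suite and is wrong everywhere else.

```python
# A pathological `add` fitted to the example suite. All three example
# tests still pass; any other input simply blows up with a KeyError.
def add(x, y):
    fitted_answers = {(0, 0): 0, (2, 3): 5, (-1, 1): 0}
    return fitted_answers[(x, y)]
```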
Property based tests handle this concern better -- they explore the domain, rather than just sampling it. That's a lie, of course; what property based tests really do is sample a lot of the domain -- enough that the pathological fake becomes more expensive than just solving the problem.
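As a sketch of what that sampling looks like, here is a property-based version of the same tests using the Hypothesis library; the properties (commutativity, zero as identity) are illustrative choices of mine:

```python
from hypothesis import given, strategies as st

def add(x, y):
    return x + y  # the honest implementation from the earlier sketch

# Instead of three hand-picked examples, Hypothesis samples many
# (x, y) pairs and checks that the properties hold for all of them.
@given(st.integers(), st.integers())
def test_add_is_commutative(x, y):
    assert add(x, y) == add(y, x)

@given(st.integers())
def test_zero_is_identity(x):
    assert add(x, 0) == x
```

The fitted lookup table from the previous sketch fails these almost immediately: the generator quickly lands on a pair it has never seen.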
My most startling test result ever came from a property test that revealed the properties I thought would hold were not consistent with the implementation I had chosen.
But that result didn't come from randomly exploring the space; it came from choosing an example from the domain that I recognized as "weird". Fortunately, there were lots of weird values in the domain, and they all demonstrated that my implementation didn't support the property in question. So I got "lucky" without having to write four billion tests.
I'm not at all sold on the idea of using a random input generator to find a needle in the haystack.