Monday, August 27, 2018

TDD: Writing Bad Code

David Tanzer writes of his students asking why they are supposed to write bad code.

He attributes the phenomenon to the discipline of taking small steps.  Perhaps that is true, but I think there is a more exact explanation for the symptoms he describes.

After creating a failing test, our next action is to modify the test subject such that the new test passes.  This is, of course, our red/green transition.  Because green tests give us a measure of security against various common classes of mistakes, we want to transition out of the deliberately red state as quickly as possible. 
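As a concrete sketch (the example, names, and Python test framework here are my own illustration, not from Tanzer's post), the red state might look like this, and the fastest transition to green is often an edit no one would mistake for good code:

    # test_leap_year.py -- the new, deliberately failing test (red)
    from leap_year import is_leap_year

    def test_year_divisible_by_four_is_leap():
        assert is_leap_year(2016)

    # leap_year.py -- the smallest edit that makes the test pass (green)
    def is_leap_year(year):
        return True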

"Quickly" here is measured in wall-clock time, but I think one could reasonably argue that what we really mean is the smallest number of code edits.

But I think we lose sight of the motivation for the action: although the edits that take us from red to green are in the implementation of the test subject, the motivation for the action is still the test.  What we are doing is test calibration: demonstrating to ourselves that (in this moment) the test actually measures the production code.
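Calibration matters because a test can be green for the wrong reason.  Continuing the hypothetical sketch, a test that never goes red measures nothing:

    # A miscalibrated test: it passes whether or not the production
    # code is ever invoked, so it cannot measure the test subject.
    def test_year_divisible_by_four_is_leap():
        result = True  # oops: should have been is_leap_year(2016)
        assert result

Seeing the test fail before the implementation edit, and pass after it, is the demonstration that the test and the production code are actually connected.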

At the point when our test goes green, we don't have a "correct" implementation of our solution.  What we have is a test that constrains the behavior of the solution.
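In the running sketch, "return True" is obviously not a correct leap-year rule, and yet the bar is green; the one test pins down only one behavior.  Tightening the constraint takes another test:

    # A second test exposes how weakly the hard-coded implementation
    # is constrained: this one is red against "return True".
    def test_ordinary_year_is_not_leap():
        assert not is_leap_year(2017)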

Now we have the option of changing the production implementation to something more satisfactory, confident that if a mistake changes the behavior of our test subject, the test will notify us when it is next run.  If the tests are fast, then we can afford to run them as frequently as after each edit, reducing the interval between introducing a mistake and discovering it.

To some degree, I think the real code comes after test calibration.  After the test is calibrated, we can start worrying about our code quality heuristics - transforming the implementation of the test subject from one that meets the minimum standard to one that would survive a code review.
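In the sketch, that transformation might replace the hard-coded return with the real rule, the calibrated tests guarding every step:

    # leap_year.py -- reworked after calibration, kept green throughout
    def is_leap_year(year):
        # Gregorian rule: leap every 4th year, except centuries,
        # except centuries divisible by 400.
        if year % 400 == 0:
            return True
        if year % 100 == 0:
            return False
        return year % 4 == 0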
