What does Test Driven Development look like when you are staring at a blank page?
I was reminded again this week that there are a lot of different approaches one might use, and they don't all answer that question the same way. So let's try a better question: what does Test Driven Development look like when I am staring at a blank page?
It'll help to have a specific example to work from, so let's consider something like a model for a calculator app: something that will eventually have buttons for input and a display for output. The kinds of tests that we expect to end up with ask questions like "after I push buttons in this sequence, what information is on the display?"
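To make that concrete, here's a minimal sketch of the shape such a test might take (in Python; `Calculator`, `press`, and `display` are placeholder names for a model we haven't designed yet, not a settled API):

```python
# Sketch only: Calculator, press, and display are hypothetical names.
def test_display_after_button_sequence():
    calculator = Calculator()
    for button in ["1", "+", "2", "="]:
        calculator.press(button)
    assert calculator.display() == "3"
```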
You will, I hope, recognize that this is a "toy" problem. It's not very big. We don't need to worry about integrating with anything else. The domain is general and familiar. We can probably make a fair bit of headway by starting with a small number of "buttons", and then extending our model to support a "scientific calculator" or a "programmer calculator".
Furthermore, I'm going to whistle on past the open issues of how "button presses" become inputs to the model, or how outputs from the model appear on the display. So out of the gate, before I've even written anything down, I'm carving up the bigger problem into modules, and exercising judgment about which are "important".
But the page is still blank. Now what?
And if we were stuck more than a minute, I'd stop and say, "Kent, what's the simplest thing that could possibly work?" -- Ward Cunningham, 2004
My immediate goal is to crack through the analysis paralysis and writer's block to get something/anything into play.
Two features here: first of all, because this is a "programmer test", I'm going to reach immediately for whatever language I plan to use for the production code. That's one less thing I need to worry about as I context-shift between design and checking for mistakes.
The second is that my design criterion is "easy to type". I don't (yet) need to worry about whether I want to decouple these tests from a specific implementation. I don't (yet) need to worry about whether I want to separate the specification from the test framework (if any). I don't (yet) need to worry about code style, or physical design. I'm just boosting myself past the point of static friction.
Choosing something other than a trivial behavior is common at this point. I normally get away with it because faking a complicated behavior is not significantly harder than faking a simple behavior, and what I get in exchange is a chance to experience describing a more complicated check, so that I don't get deeply invested in the wrong interfaces.
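In practice, that first move might look something like this hypothetical sketch -- optimized for "easy to type", deliberately checking a behavior that isn't the most trivial one available, and using whatever names fall out of my fingers first:

```python
# First draft: easy to type, deliberately unpolished. The point is to
# have something concrete to argue with, not to commit to this design.
def test_it():
    c = Calculator()
    for key in ["7", "x", "6", "="]:  # "x" as a multiply button is a guess
        c.press(key)
    assert c.display() == "42"
```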
Now we have code on the page and the "RED" task is happening, and I can fuss over things like getting arguments in the right order for my test framework calls, whether I want to use a different representation of the data to make the intent of the test clearer, whether the reports we get when the test fails are what we expect them to be, and so on.
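For instance (still a hypothetical sketch), I might decide that a flat string of keystrokes communicates the scenario better than a sequence of method calls, and that a parameterized test gives more readable failure reports:

```python
import pytest

# Possible refinement: each case reads as "keys in -> display out",
# and pytest reports each failing case separately.
@pytest.mark.parametrize("keys, expected", [
    ("1+2=", "3"),
    ("7x6=", "42"),
])
def test_display_after_keys(keys, expected):
    c = Calculator()
    for key in keys:
        c.press(key)
    assert c.display() == expected
```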
There's a bunch of saw sharpening that makes sense now: after you have real code on the table to argue about, but before you are deeply committed to the specifics of the design.
Or we can judge that this design should be considered disposable, with the expectation that it will just act as a placeholder until we have gathered more evidence about what the longer-lived test design should look like.
And when we're finally bored with the pre-flight rituals? Fake it to get the red test to green in a minimal amount of wall clock time (we already know that's going to be easy to type, because the expected value is sitting right there in the test, waiting to be copied into a return statement). And get the hustle on.
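A minimal sketch of what that fake might look like, assuming the hypothetical names from the earlier tests; it hard-codes just enough to turn that first check green:

```python
# Fake it: the fastest route from red to green is to copy the expected
# value straight out of the test and return it.
class Calculator:
    def press(self, key):
        pass  # inputs ignored for now; real behavior comes later

    def display(self):
        return "42"  # literally the answer the first test expects
```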