Monday, August 26, 2024

TDDbE: Testing Patterns

When I write a test that is too big, I first try to learn the lesson.  Why was it too big? What could I have done differently that would have made it smaller? ... I delete the offending test and start over.

Delete and start over is a power move, and will later lead to the exploration of test && commit || revert (TCR).

This reminds me of the Mikado method: we might want to systematically reset and explore smaller goals until we reach an improvement we can actually make now, and then work our way back up the graph until the code is ready for the big test.

I think you have to be fairly far along in the design to start running into this sort of thing; when the design is shallow, you can usually just drop in a guard clause and early return, which gets the test passing, and now it is "just" a matter of integrating this new logic into the rest of the design.  But when the current output is the result of information crossing many barriers, getting the guard clause to work can require rework across several boundaries.
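
To make that concrete, here is a minimal sketch of the shallow-design case - the class, method, and the "empty report" requirement are hypothetical, not from the book. The new behavior lands as a guard clause with an early return, and the existing path is untouched:

```java
import java.util.List;

// Hypothetical example: a new test demands a special case for an empty report.
// In a shallow design, a guard clause at the top of the one affected method
// is enough to get that test passing.
class ReportFormatter {
    String format(List<String> lines) {
        if (lines.isEmpty()) {
            return "(no entries)";       // guard clause + early return: the new behavior
        }
        return String.join("\n", lines); // existing behavior, untouched
    }
}
```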

How do you test an object that relies on an expensive or complicated resource?

Oh dear.

OK, context: Beck is writing prior to the development of the pattern language of Test Doubles by Gerard Meszaros.  He's re-purposing the "mock object" label introduced in the Endo-Testing paper, but it's not actually all that great a fit.  It would be better to talk about test stubs (Binder, 1999), or test doubles, if you want to be understood by a modern audience.

Anyway, we get a couple different examples of stubs/doubles that you might find useful.
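
For readers who come to this chapter with the later vocabulary, here is a rough sketch of the kind of hand-rolled stub Beck is describing, in modern JUnit; the RateSource/Exchange names are mine, not the book's. The expensive resource hides behind a narrow interface, and the test supplies a canned implementation in its place:

```java
import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.assertEquals;

// The interface the production code depends on; the real implementation
// would talk to a slow or flaky external service.
interface RateSource {
    int rate(String from, String to);
}

// Code under test: converts an amount using whatever RateSource it is given.
class Exchange {
    private final RateSource rates;
    Exchange(RateSource rates) { this.rates = rates; }
    int convert(int amount, String from, String to) {
        return amount * rates.rate(from, to);
    }
}

class ExchangeTest {
    @Test
    void convertsUsingACannedRate() {
        // Hand-rolled stub: a canned answer, no network, fast and deterministic.
        RateSource stub = (from, to) -> 2;
        assertEquals(10, new Exchange(stub).convert(5, "CHF", "USD"));
    }
}
```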

Leaving a test RED to resume at the next session is a pretty good trick; but I think you'll do better to treat the test as a suggestion, rather than a commitment, to resume work at a particular place - the most important item to work on next might change between sessions, and the failing test will be a minor obstacle if you decide that refactoring is worthwhile.

Saturday, August 24, 2024

TDDbE: Red Bar Patterns

On getting started - look for a one step test:

Pick a test that will teach you something and that you are confident you can implement.

What he argues here is that the metaphorical direction of our work is from known to unknown.

Known-to-unknown implies that we have some knowledge and experience on which to draw, and that we expect to learn in the course of development.

Pretty good stuff there.  But also I can't help but observe that we often know more about the outside/top of our implementation than we do about the core/bottom.

Beginning with a realistic test will leave you too long without feedback.

I'm not quite sold on this one, certainly not universally.  The mars rover exercise, for instance, feels straightforward when approached with a "realistic test" first.  That may be a matter of context - for the rover, choosing a representation of the input and of the output is not a particularly difficult decision, unless one gets the itch to couple the test tightly to the implementation of the solution.  Also, the rover exercise comes with a specific complicated example to use as the first test - one where you are already given the correct answer.
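
For illustration, here is roughly what "realistic test first" looks like for the rover; the text representation of the plateau, starting position, and command string follows one common phrasing of the kata, and the Rover class here is a hypothetical placeholder faked out just enough to keep the example compiling:

```java
import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.assertEquals;

// A realistic first test: whole input in, whole answer out.
class RoverTest {
    @Test
    void drivesTheWorkedExampleFromTheKata() {
        assertEquals("1 3 N", Rover.run("5 5", "1 2 N", "LMLMLMLMM"));
    }
}

// Placeholder so the example compiles; in the exercise this class would not
// exist yet, and the test would start red.
class Rover {
    static String run(String plateau, String start, String commands) {
        return "1 3 N"; // fake it - just enough to turn the first test green
    }
}
```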

For a problem like polygon reduction, there may not be a specific complicated example ready to hand, so one might as well extract what value one can from a trivial example first, and then extend from there.
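
As a sketch of that "trivial example first" move - with the shape of the problem assumed here for illustration (a mesh as a list of triangles, a reduce operation with a target count), not taken from the original thread - the degenerate case still pins down a signature cheaply:

```java
import java.util.List;
import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.assertEquals;

class PolygonReduceTest {
    @Test
    void reducingAnEmptyMeshYieldsAnEmptyMesh() {
        // Trivial case: nothing to reduce, so nothing should change.
        List<int[]> empty = List.of();
        assertEquals(empty, PolygonReduce.reduce(empty, 0));
    }
}

// Obvious implementation for the trivial case; the real work comes later.
class PolygonReduce {
    static List<int[]> reduce(List<int[]> mesh, int targetTriangleCount) {
        return mesh;
    }
}
```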

The original polygon reduce discussion (Tom Plunket, 2002-02-01) is currently archived by Google.

Beware the enthusiasm of the newly converted.  Nothing will stop the spread of TDD faster than pushing it in people's faces.

Prophecy? Sounds like prophecy.

Another argument in favor of checklists: they are a defense against analysis paralysis - write the new idea down on the test checklist, then set it aside and get back to the task at hand.

Also, "throw away the code and start over" is a big power move - I love it.  If it's not going well, declaring bankruptcy is a strong alternative to continuing to dig.