Gotta have patterns if you want to be published to this audience.
Why does test the noun, a procedure that runs automatically, feel different from test the verb, such as poking a few buttons and looking at answers on the screen?
I find this parallel more useful when talking about design the noun vs design the verb, especially within the context of a practice that promises to improve design.
Beck's discussion of "isolated tests" is really twisted up, in that this heading includes two very different properties that he wants:
- Tests that are order independent
- Tests that don't overlap (two tests broken implies two problems)
I have seen people get really hung up on the second property, when (within the context of TDD) it really isn't all that important: if I'm running my tests regularly, then there are only a small number of edits between where I am now and my last known good state. It doesn't "matter" how many tests start failing, because I have tight control over the number of edits that introduced those problems.
A trivial example: I'm refactoring, I make a change, and suddenly 20 tests are failing. Disaster! How long does it take me to get back to a green state? Well, if I revert the change I just made, I'm there. It really doesn't matter whether I introduced one problem or twenty - fixing everything is a single action and easy to estimate.
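A hedged sketch of the overlap property (the Stack example is mine, not from the book): both tests below exercise push(), so one bad edit to push() breaks them both - and one revert fixes them both.

```python
class Stack:
    """Toy class under test; the names here are invented for illustration."""
    def __init__(self):
        self._items = []

    def push(self, item):
        self._items.append(item)

    def pop(self):
        return self._items.pop()

def test_push_then_pop():
    s = Stack()
    s.push(1)
    assert s.pop() == 1

def test_pop_is_last_in_first_out():
    # Overlaps with the test above: it also depends on push() working.
    s = Stack()
    s.push(1)
    s.push(2)
    assert s.pop() == 2
```

If push() is broken, both tests go red at once; the count of red tests says little about the count of problems, but reverting the single edit restores green either way.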
The case where I care about being able to estimate the number of real problems? Merge.
Isolating tests encourages you to compose solutions out of many highly cohesive, loosely coupled objects. I've always heard that this was a good idea....
I'm still suspicious of this claim, as my experience is that it delivers "many" far more often than it delivers either "highly cohesive" or "loosely coupled".
I think of Beck's justifications for the test list as paging information out of (human) memory (I wrote them down in my diary so I wouldn't have to remember). What I hadn't recalled (perhaps I should have written it down) is that in Beck's version he's not only including tests, but also operations and planned refactorings. The Canon version ("test scenarios you want to cover") is closer to how I remember it.
Test First: "you won't test after" - Beck's claim here is interesting, in that he talks of the practice as primarily about stress management (the "virtuous cycle"), with the design and scope control as a bit of energy to keep the cycle going.
I need to think more about scope control -- that benefit feels a lot more tangible than those asserted about "design".
I find assert first interesting for a couple of reasons. First, it seems clear to me that this is the inspiration for TDD-As-If-You-Meant-It. Second, the bottom-up approach feels a lot like the technique used to "remove duplication" from early versions of a design (if you aren't caught in the tar pit of "triangulation").
I don't find it entirely satisfactory because... well, because it focuses the design on what I feel should be an intermediate stage. This demonstration never reaches the point where we are hiding (in the Parnas sense) the implementation details from the test; that idea just wasn't a thing when the book was written (and probably still isn't, but it's my windmill, dammit.)
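For concreteness, a minimal sketch of assert first in Python (the Account/transfer names are invented): write the assertion you wish were true, then work backwards to the setup that makes it meaningful.

```python
from dataclasses import dataclass

@dataclass
class Account:
    balance: int

    def transfer(self, amount, other):
        self.balance -= amount
        other.balance += amount

def test_transfer():
    # Written last: the setup, reverse-engineered from the assertions below.
    source = Account(balance=100)
    destination = Account(balance=50)
    source.transfer(100, destination)
    # Written first: the outcome we want to be true.
    assert destination.balance == 150
    assert source.balance == 0
```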
Never use the same constant to mean more than one thing.
There is a sneaky-important idea here; fortunately the cost of learning the lesson first-hand isn't too dear.
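A hedged illustration (the example is invented, not Beck's) of what goes wrong: when the same constant plays two roles, a test can pass for the wrong reason.

```python
def times(amount, multiplier):
    return amount * multiplier

def test_ambiguous():
    # With 2 as both the amount and the multiplier, this assertion also
    # holds for amount + multiplier and amount ** multiplier: a broken
    # implementation could pass for the wrong reason.
    assert times(2, 2) == 4

def test_distinct_constants():
    # Distinct constants pin down which argument is which, and would
    # catch an accidental + or ** in the implementation.
    assert times(5, 2) == 10
```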
Evident Data makes me suspicious, because I've been burned by it more than once: broken code that passes broken tests, because both components make the same errors translating from domain arithmetic to computer arithmetic. The idea ("you are writing tests for a reader, not just the computer") is an important one, but its expression as described here has not been universally satisfactory.