Beck switches to a smaller-grained test; this introduces a testFailed message, which gives him the permission he needs to extract the failure count and use general formatting to eliminate the duplication in the test summary message.
There is a subtlety hidden inside this method.... However, we need another test before we can change the code.
I don't find this section satisfactory at all.
Let's review: in chapter 21, we started working on testFailedResult, which was intended to show that the failure count is reported correctly by TestResult when a broken test is run. That test "fails wrong": it exits on the exception path rather than on the return path. So we "put this test on the shelf for the moment".
We take a diversion to design TestResult::summary without the TestCase baggage.
All good up to that point - we've got a satisfactory TestResult, and a TestResult::testFailed signal we can use to indicate failures.
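A minimal sketch of the shape TestResult has at this point may help. The names (testStarted, testFailed, summary) follow the book's Python xUnit example; the method bodies here are my reconstruction, not Beck's verbatim code:

```python
class TestResult:
    """Collects run and failure counts; summary() formats them
    without depending on TestCase at all."""

    def __init__(self):
        self.runCount = 0
        self.failureCount = 0

    def testStarted(self):
        self.runCount += 1

    def testFailed(self):
        self.failureCount += 1

    def summary(self):
        return "%d run, %d failed" % (self.runCount, self.failureCount)
```

The point of the diversion is visible in the code: summary() is pure formatting over two counters, and testFailed is the one signal a test runner needs to send.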
So now we unshelf the test that we abandoned in the previous chapter. It fails, but we can make it pass by introducing an except block that invokes TestResult::testFailed.
However, the scope of the try block is effectively arbitrary. It could be fine-grained or coarse-grained -- Beck's choice is actually somewhere in the middle. TADA, the test passes. Green bar, we get to refactor.
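To make the "middle" choice concrete, here is a sketch of run() with the try block guarding only the test method, leaving setUp outside it. The class and method names follow the book's Python xUnit; the exact bodies are my reconstruction, and BrokenTest is a hypothetical stand-in for the deliberately failing test:

```python
class TestResult:
    def __init__(self):
        self.runCount = 0
        self.failureCount = 0
    def testStarted(self):
        self.runCount += 1
    def testFailed(self):
        self.failureCount += 1
    def summary(self):
        return "%d run, %d failed" % (self.runCount, self.failureCount)

class TestCase:
    def __init__(self, name):
        self.name = name
    def setUp(self):
        pass
    def run(self, result):
        result.testStarted()
        self.setUp()          # outside the try: a setUp failure escapes uncaught
        try:
            method = getattr(self, self.name)
            method()          # only the test method body is guarded
        except:
            result.testFailed()

class BrokenTest(TestCase):
    def testBrokenMethod(self):
        raise Exception("deliberate failure")

result = TestResult()
BrokenTest("testBrokenMethod").run(result)
print(result.summary())  # prints "1 run, 1 failed"
```

Widening the try to include self.setUp(), or narrowing it to the method() call alone, would also make this test pass -- which is exactly why the boundary feels arbitrary.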
But he's going to make a claim that we can't change the boundaries of the try block without another test...?
I think the useful idea is actually this: the current suite of tests does not include constraints to ensure that exceptions thrown from setUp are handled correctly. So we'd like to have tests in the suite that make that explicit, so it should go into the todo list.
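One way to make that missing constraint explicit is a test that breaks setUp and asserts the failure is still counted. This is a hypothetical sketch, not code from the book: the run() shown here has its try block widened to cover setUp, which is the behavior the test pins down (against the narrower try above, this test would fail):

```python
class TestResult:
    def __init__(self):
        self.runCount = 0
        self.failureCount = 0
    def testStarted(self):
        self.runCount += 1
    def testFailed(self):
        self.failureCount += 1
    def summary(self):
        return "%d run, %d failed" % (self.runCount, self.failureCount)

class TestCase:
    def __init__(self, name):
        self.name = name
    def setUp(self):
        pass
    def run(self, result):
        result.testStarted()
        try:
            self.setUp()      # widened try: setUp failures now count as failures
            method = getattr(self, self.name)
            method()
        except:
            result.testFailed()

class SetUpBroken(TestCase):
    def setUp(self):
        raise Exception("broken setUp")
    def testMethod(self):
        pass

# the missing constraint, made explicit:
result = TestResult()
SetUpBroken("testMethod").run(result)
assert result.summary() == "1 run, 1 failed"
```

With this test in the suite, the boundary of the try block stops being arbitrary: moving setUp back outside the try would turn the bar red.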
What's missing is the notion of test calibration: we should be able to introduce tests that pass, inject a fault to ensure that the tests can detect it, remove the fault, and get on with it. Of course, if we admit that works, then why not test after...?
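The calibration procedure can be shown in miniature. Everything here is hypothetical (add and test_add are made-up names), but the steps are the ones described: the test passes, we inject a fault and confirm the test detects it, then we remove the fault:

```python
def add(a, b):
    return a + b

def test_add():
    return add(2, 2) == 4

# 1. the test passes against the real implementation
assert test_add()

# 2. inject a deliberate fault and confirm the test can detect it
real_add = add
def add(a, b):
    return a - b          # deliberately broken
assert not test_add()

# 3. remove the fault; the test passes again and stays in the suite
add = real_add
assert test_add()
```

A test calibrated this way earns its place in the suite whether it was written before or after the code -- which is the thrust of the "why not test after" question.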
So I think what's irritating me here is that ceremony is being introduced without really articulating the justifications for it.
Contrast with this message:
I get paid for code that works, not for tests, so my philosophy is to test as little as possible to reach a given level of confidence -- Kent Beck, 2008