Sunday, November 19, 2023

"Unit" Tests

A survey of the definition of "unit test", taken from references in my dead tree library.

 

I call them "unit tests" but they don't match the accepted definition of unit tests very well. -- Kent Beck, Test Driven Development by Example.

 

Unit testing means that one and only one unit is tested as such. -- Ivar Jacobson, Object-Oriented Software Engineering

 

Unit testing is the testing we do to show that the unit does not satisfy its functional specification and/or that its implementation structure does not match the intended design structure. -- Boris Beizer, Software Testing Techniques.

 

Module testing or unit testing is the verification of a single program module, usually in an isolated environment (i.e. isolated from all other modules).  -- Glenford Myers, Software Reliability.

 

The objective of unit testing is to attempt to determine the correctness and completeness of an implementation with respect to unit requirements and design documentation by attempting to uncover faults.... -- IEEE-1008-1987

Thursday, September 28, 2023

TDDbE: How Suite It Is

 As his final bow in this section, Beck writes a new test case for TestSuite.

A couple things stand out here.

First, the notion that TestCase/TestSuite is an example of the Composite "design pattern" is not something that is falling out of the test -- it's an insight that Kent Beck has because he has written multiple xUnit implementations already.  The TestCase code doesn't currently conform to that pattern because Beck was pretending that he didn't know this.
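For anyone who hasn't internalized the shape, the Composite idea here is just that TestSuite exposes the same run(result) signature as TestCase, so a suite can hold individual tests or other suites.  A minimal sketch of that shape (my approximation, not a quotation of Beck's listing):

class TestSuite:
    def __init__(self):
        self.tests = []

    def add(self, test):
        # 'test' may be a TestCase or another TestSuite; both respond to run(result)
        self.tests.append(test)

    def run(self, result):
        for test in self.tests:
            test.run(result)

The collecting parameter (result) is doing the real work: one TestResult accumulates across everything in the composite.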

Because he got this far before "discovering" TestSuite, he has a small pile of work to redo - in this case, nothing high risk (toy problem, he has tests, he understands the change, he didn't let the original implementation stray too far from where it was always going to end up, and so on).

That's the happy version - the change happens before the code really starts to ossify.

What this brings to mind for me is Jim Coplien's observation (Beust claims it is an exact quote, but I haven't been able to verify that via the provided transcript) about YAGNI leading to an architectural meltdown. 

Here, we have relatively little investment in the old idea, so the cost of change is pretty trivial.  But this example may not be representative of the general case.

Second - are we sure that the design that is emerging here is good?  The story ends in sort of an ugly spot - there's a lot of work left to do, although not necessarily any new lessons.  Don't confuse "these are the things we do" with "these are the results we settle for".

Which I think is unfortunate, in that one of the communication gaps I see is that people don't share the same understanding of how much duplication removal is supposed to happen before you move on.

Possibly interesting exercise: see if you can get from here to your favorite modern Python testing framework without binning this work and starting fresh.

TDDbE: Dealing with Failure

Beck switches to a smaller-grained test; this introduces a testFailed message, which gives him the permission he needs to extract the error count and use general formatting to eliminate the duplication in the test summary message.

There is a subtlety hidden inside this method....  However, we need another test before we can change the code.

I don't find this section satisfactory at all.

Let's review: in chapter 21, we started working on testFailedResult, which was intended to show that the failure count is reported correctly by TestResult when a broken test is run.   That test "fails wrong": it exits on the exception path rather than on the return path.  So we "put this test on the shelf for the moment".

We take a diversion to design TestResult::summary without the TestCase baggage.

All good up to that point - we've got a satisfactory TestResult, and a TestResult::testFailed signal we can use to indicate failures.

So now we unshelve the test that we abandoned in the previous chapter.  It fails, but we can make it pass by introducing an except block that invokes TestResult::testFailed.

However, the scope of the try block is effectively arbitrary.  It could be fine grained or coarse grained -- Beck's choice is actually somewhere in the middle.  TADA, the test passes.  Green bar, we get to refactor.
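To make the "arbitrary scope" point concrete, here's a sketch of the shape -- my reconstruction from memory, not the book's exact listing -- with the test method inside the try and the fixture calls outside, which is roughly the middle ground:

class TestCase:
    def __init__(self, name):
        self.name = name

    def setUp(self):
        pass

    def tearDown(self):
        pass

    def run(self, result):
        result.testStarted()
        self.setUp()                          # outside the try -- the middle ground
        try:
            method = getattr(self, self.name)
            method()                          # fine grained: only the test method
        except Exception:
            result.testFailed()
        self.tearDown()
        # coarse grained would wrap setUp() and tearDown() in the try as well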

But he's going to make a claim that we can't change the boundaries of the try block without another test...?

I think the useful idea is actually this: the current suite of tests does not include constraints to ensure that exceptions thrown from setUp are handled correctly.  So we'd like to have tests in the suite that make that explicit, so it should go into the todo list.

What's missing is the notion of test calibration: we should be able to introduce tests that pass, inject a fault to ensure that the tests can detect it, remove the fault, and get on with it.  Of course, if we admit that works, then why not test after...?
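What calibration might look like in this context -- a hypothetical sketch, with the test and the stub result being my names rather than anything from the book:

class RecordingResult:
    """Stand-in for TestResult that just remembers what was reported."""
    def __init__(self):
        self.failed = False

    def testStarted(self):
        pass

    def testFailed(self):
        self.failed = True


class WillFail(TestCase):                     # TestCase as sketched above
    def testMethod(self):
        raise Exception("deliberate failure")


def testFailureIsRecorded():
    result = RecordingResult()
    WillFail("testMethod").run(result)
    assert result.failed


testFailureIsRecorded()
# Calibration: with this green, temporarily delete the result.testFailed() call
# in TestCase.run (the injected fault), confirm this test goes red, then restore
# the call.  If that procedure is trustworthy, it works after the fact too.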

So I think what's irritating me here is that ceremony is being introduced without really articulating the justifications for it.

Contrast with this message:

I get paid for code that works, not for tests, so my philosophy is to test as little as possible to reach a given level of confidence -- Kent Beck, 2008

Monday, September 18, 2023

TDDbE: Counting

In general, the order of implementing tests is important.  When I pick the next test to implement, I find a test that will teach me something and which I have confidence I can make work.

Reminder: step size varies with confidence - when things are clear, we set a faster pace.

Note that this isn't the only possible strategy; the "Transformation Priority Premise" proposes that we want to always make the change that is "closest" to where we are now -- with "closest" being determined by a sort of weighted priority function.

What Beck wants to do is implement tearDown.  But it is difficult to test -- exceptions are a bit overloaded here.  So instead of pursuing the difficult implementation, he introduces an intermediary -- separating what is complicated from what is difficult.

Here, that takes the form of a data structure to keep track of what is going on within the framework; to begin, just tracking tests run and tests failed.

What caught my eye here: having introduced a trivial fake implementation, he's immediately willing to "remove duplication" in counting the number of tests run - introducing an initialized value, incrementing the value, and so on.

But he does not choose to do the analogous refactoring on the test failed count, instead sticking with the hard coded value.  "The tests don't demand it."  I'm suspicious of that justification, in so far as with a single test, they didn't demand changing the test count either.

You might be able to argue that we started with an implicit counter for tests run, but that there is no implicit counter for failures (because this test doesn't exercise the failing code path), and that it's therefore appropriate to make the former counter explicit but not yet the latter.

On the other hand, you probably can justify it via "make the next change easy".

Another point to note: in this first test, TestResult is not isolated from TestCase -- TestResult::testStarted is invoked by TestCase, not by the test.  Beck is including within his observation the behavior of TestResult and also the protocol shared between TestResult and TestCase.
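Roughly where I understand the code to be at the end of this chapter -- a sketch rather than a quotation, and I'm glossing over exactly how the TestResult gets handed around:

class TestResult:
    def __init__(self):
        self.runCount = 0                     # the counter the tests demanded

    def testStarted(self):
        self.runCount = self.runCount + 1

    def summary(self):
        return "%d run, 0 failed" % self.runCount   # failures still hard coded


class TestCase:
    def __init__(self, name):
        self.name = name

    def run(self, result):
        result.testStarted()                  # TestCase reports to TestResult;
        method = getattr(self, self.name)     # the test itself never calls
        method()                              # testStarted directly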


Thursday, September 7, 2023

TDDbE: Cleaning Up After

Doing a refactoring based on a couple of early uses, then having to undo it soon after is fairly common.  Some folks wait until they have three or four uses before refactoring because they don't like undoing work.  I prefer to spend my thinking cycles on design, so I just reflexively do the refactorings without worrying about whether I will have to undo them immediately afterwards.

I find myself wanting to push back on this a little bit.  I don't mind refactoring the measured code on the regular - we learn as we go, and that's fine.  

 I fret a little bit about refactoring the measuring code; we red-green when the test is new, which gives us additional confidence that the test is measuring what we think it is, but if we are continuing to tinker with the guts of the test we should be recalibrating from time to time.

Which is to say, we red-green on the theory that it's not enough to assume that we're measuring what we think we are; we need additional evidence.  It seems inconsistent to assume that we can keep evolving the test on the assumption that it still measures what it did before.

Spying on the calls as the framework runs the test is a clever idea, and of course there's no reason that the telemetry needs to be so complicated as an "object".
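The telemetry really can be as simple as a string.  A self-contained sketch of the idea -- close to what I remember WasRun doing in the book, but reconstructed rather than quoted:

class TestCase:
    def __init__(self, name):
        self.name = name

    def setUp(self):
        pass

    def tearDown(self):
        pass

    def run(self):
        self.setUp()
        getattr(self, self.name)()
        self.tearDown()


class WasRun(TestCase):
    def setUp(self):
        self.log = "setUp "

    def testMethod(self):
        self.log = self.log + "testMethod "

    def tearDown(self):
        self.log = self.log + "tearDown "


test = WasRun("testMethod")
test.run()
assert test.log == "setUp testMethod tearDown "   # the whole spy is one string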

Got myself really twisted up looking at the final code, until I realized that the code in the text doesn't necessarily match the code he's narrating -- the code in my edition of chapter 20 includes some artifacts that won't be introduced until chapter 21.

Nonetheless, I find myself thinking again that the point is to catch mistakes, not to do The Ritual flawlessly.

A git history with each change following along with the text might be an interesting exercise.  Or for this specific example, maybe a Jupyter notebook kind of thing.

 

Wednesday, September 6, 2023

TDDbE: Set the Table

 When you begin writing tests, you will discover a common pattern...

I'm not a fan of Arrange-Act-Assert myself; (Arrange-Act)-Assert would be better -- we shouldn't entangle taking a measurement with checking that the measurement is satisfactory.

But arrange, wherever you put it, is likely to get re-used; we often have many instances of tests that are using the same constellation of objects to produce a measurement.

And if you are building an xUnit, then your customers are going to expect that the framework will work a particular way, including having implicit facilities for arranging object constellations.  So be it.
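For a concrete picture of that implicit facility -- a hypothetical example (the AccountTests fixture is mine, not from the book): the framework calls setUp before each test method, so the shared constellation is arranged in one place.

class TestCase:
    def __init__(self, name):
        self.name = name

    def setUp(self):
        pass

    def run(self):
        self.setUp()                          # arrange, supplied by the framework
        getattr(self, self.name)()            # act + assert, supplied by the test


class AccountTests(TestCase):                 # hypothetical example fixture
    def setUp(self):
        self.balances = {"alice": 100, "bob": 50}   # the shared constellation

    def testAliceStartsInCredit(self):
        assert self.balances["alice"] > 0

    def testTotalIsConserved(self):
        assert sum(self.balances.values()) == 150


AccountTests("testAliceStartsInCredit").run()
AccountTests("testTotalIsConserved").run()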


Tuesday, September 5, 2023

TDDbE: First Steps to xUnit

Hey, we're in Python!

We need a little program that will print out true if a test method gets called, and false otherwise.

Two things strike me with this exercise.

First, it reminds me of the practice of working through the "imperative shell" to discover where the boundary is going to be between the testable core and the humble object.

Second, it reminds me that this part of the exercise tends to be notable for its lack of ambition.  It would be easy to imagine, for example, what the finished console output of running the test should be, and refactor the test toward that end.  Instead, Beck starts with a trivial output, applying the Guru Checks Output pattern on the early runs.
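From memory, the opening move is about this small -- a sketch, not the book's exact listing -- a flag, printed before and after, with the programmer eyeballing the output:

class WasRun:
    def __init__(self, name):
        self.wasRun = None        # the name isn't used yet -- that comes with run()

    def testMethod(self):
        self.wasRun = 1


test = WasRun("testMethod")
print(test.wasRun)                # expect None -- guru checks output
test.testMethod()
print(test.wasRun)                # expect 1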

Having created code that produces the correct answer (for this one trivial case), Beck now sets to work refactoring the exercise - beginning the march toward the design that is currently in his head (he may be taking baby steps, but the section title is "The xUnit Example" - he's working toward a specific framework).

We're not really looking at a test "driving" the design, at least not yet.  Right now, the test is just the ratchet, ensuring that we don't lose ground as the code is cleaned up.