Sunday, September 9, 2018

TDD: Fractions

Come back tomorrow night. We're gonna do fractions -- Tom Lehrer

Yesterday, I happened upon J. B. Rainsberger's World's Best Intro to TDD.  Series 1 features a demonstration of a fractions kata - what does it look like to produce an implementation of Fraction::add, "one test at a time"?

J. B. follows the usual program of TDD:
  1. Think
  2. Write down examples that may prove interesting
  3. While more interesting examples remain unchecked:
    1. Choose the example that can be implemented with the least work
    2. Make that example pass
But a couple of things caught my attention in the first parts of his demonstration: he kept talking about scaffolds, and about whether or not it was time to implement equals.  Something was off, in a way that I hadn't yet fully grasped, so I decided to dig in.


Reading is Fundamental

Write-only databases aren't very interesting
One problem that I struggled with for a long time when I was working on Greg Young's probability kata: how do you verify correctness?

You can't rely on the system under test to tell you, because all systems function as implemented. A universal property of all implementations is that if you ask them, "Do you implement your specification correctly?", they answer "yes". Even the buggy ones.

It is shaped, sir, like itself, and it is as broad as it hath breadth. It is just so high as it is, and moves with its own organs.

We don't care about what's going on inside the black box -- what we care about is whether or not a system can rely upon the black box to uphold its contract, and the answer to that lies in independent confirmation.

This was one element in Rainsberger's presentation that seemed not quite right -- his allusions to Fraction::equals.  That may well be an important element in the final form of Fraction - indeed, in Java, Fraction necessarily inherits an implementation of equals from Object. But testing the equals contract is a lot of work, and it is work that we can defer.

Why can we defer it? Because, for add(), the important question is whether or not it gives you a correct answer, not whether or not it is equal to some other correct answer.

That's not to say that we always have to make the apple pie from scratch; if we have a suite of tests that independently verify Fraction::equals, then I think we can rely upon it to constrain Fraction::add.  But we don't need it.  Not yet.
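For example, a first check can constrain Fraction::add by reading the result's state directly. This is only a sketch; the numerator() and denominator() accessors are my assumption, not something the kata has settled on:

    import static org.junit.jupiter.api.Assertions.assertEquals;

    import org.junit.jupiter.api.Test;

    class FractionAddTest {
        @Test
        void oneHalfPlusOneThird() {
            Fraction sum = new Fraction(1, 2).add(new Fraction(1, 3));
            // 1/2 + 1/3 is 5/6 whether or not the implementation reduces,
            // so the literal assertions don't over-constrain here -- and no
            // Fraction::equals is needed to decide correctness.
            assertEquals(5, sum.numerator());
            assertEquals(6, sum.denominator());
        }
    }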

Crossing the Boundary

At the boundaries, applications are not object-oriented. -- Mark Seemann, 2011
On the happy path, there is a temporal boundary between our automated checks and our production code.  Not only does the code for these pieces typically reside in different modules, but they also live in different times -- we are testing the version of our production code that is now, but the test itself is frozen at its last calibrated representation.

The system and the test are sharing contract and schema at a micro level.

Jay Fields, in Working Effectively with Unit Tests, offers Expect Literals as a readability guideline.  I fully concur with the guidance, not because I'm sold on Jay's motivations here, but because domain agnostic literals are the kind of schema I want to be sharing anyway -- readability arguments just give me another arrow in the quiver.

But from my point of view, the most important reason for using domain agnostic representations is that it allows verification independent of the system under test.

That means that the contract between the checks and the system must necessarily specify how to read and write data across the boundary.

There are conflicting interests here.  If the tests are isolated, then representations of state are going to be expressed as primitive types in our implementation language.  But within our functional core, we normally prefer to use domain specific values.  So where does the transition happen?

In an idealized solution, we deal with primitives only at the thin boundary of the system.  But isolated tests create many small boundaries close to the modules of the system.  In many cases, the "simplest thing that can possibly work" is going to be to pass primitives across the boundary.

It can be done - we start with a suite of tests that talk to a primitive interface, refactor the production implementation to introduce the appropriate domain concepts, and thereby tease apart the core implementation from the primitive adapter.  Then, when we lift the semantics for constructing the domain values into the public API, we can deprecate and slough away the primitive obsessed interfaces.
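As a sketch of what that seam might look like (the interface and names here are mine, and the Fraction accessors are again assumed):

    // The checks talk only to this primitive-obsessed seam; the domain
    // concepts stay free to evolve behind it, and the seam itself can be
    // deprecated once the domain API stands on its own.
    public final class PrimitiveFractionAdapter {
        // returns { numerator, denominator } of (an/ad) + (bn/bd)
        public static long[] add(long an, long ad, long bn, long bd) {
            Fraction sum = new Fraction(an, ad).add(new Fraction(bn, bd));
            return new long[] { sum.numerator(), sum.denominator() };
        }
    }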

That sounds like a lot of ceremony when our focus is designing new production code.

I get paid for code that works, not for tests, so my philosophy is to test as little as possible to reach a given level of confidence -- Kent Beck, 2008

There doesn't seem to be a lot of TDD literature that describes the exercise of eliminating the test scaffolding.

On Rational Numbers

An important thing to understand about rational numbers is where they come from.

The Feynman Lectures on Physics includes a lecture on Algebra. Addition, multiplication, and exponentiation are all reasonably straightforward - we put counting numbers in, we get counting numbers out. But the inverse operations create problems: we write some equation with counting numbers for all of the knowns, and the solution for the missing value is not a counting number.

Rational numbers are required to invert multiplication: 4 / 2 = 2, 6 / 2 = 3, 5 / 2 = ??

Another way of expressing this idea is that the definition of a rational number is the equation: 5/2 is the rational number that, when multiplied by the rational number 2, gives the rational number 5. And that's it, that's all you know about it. All of the other properties are derived from this definition, and from the constraint that the manipulation of the counting numbers has to be consistent with that algebra.

Thus, checks of whether a rational is "correct" are checks of whether the rational satisfies the equation that it is supposed to, which in turn is a check for the numerator and denominator.

There are really only two checks you can make.  You can check for "zero"; rationals satisfy this check when the numerator is zero and the denominator is non-zero.  And you can check that the numerator and denominator have the correct proportion - via multiplication.

You multiply the actual numerator by the expected denominator, and the actual denominator by the expected numerator, and compare the products to each other -- as counting numbers.
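In code, both checks come down to arithmetic on the counting numbers themselves. A sketch, using long and ignoring overflow for the moment:

    public final class RationalChecks {
        // A rational is "zero" when the numerator is zero and the denominator is not.
        static boolean isZero(long numerator, long denominator) {
            return numerator == 0 && denominator != 0;
        }

        // actual and expected are in the same proportion exactly when the
        // cross products agree: actualN * expectedD == actualD * expectedN.
        static boolean sameProportion(long actualN, long actualD,
                                      long expectedN, long expectedD) {
            return actualN * expectedD == actualD * expectedN;
        }
    }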

What about addition?  Well, if we know the equation that is solved by our first fraction, and the equation that is solved by our second fraction, then we can use those two equations to create a third equation that is solved by the sum of the two fractions...
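Working that through: if x solves x*b = a, and y solves y*d = c, then (x + y)*(b*d) = a*d + c*b, so the sum is the rational defined by the numerator a*d + c*b and the denominator b*d. A minimal Fraction::add along those lines (a sketch -- no reduction to lowest terms, no overflow handling, no equals):

    public final class Fraction {
        private final long numerator;
        private final long denominator;

        public Fraction(long numerator, long denominator) {
            this.numerator = numerator;
            this.denominator = denominator;
        }

        public Fraction add(Fraction that) {
            // a/b + c/d = (a*d + c*b) / (b*d)
            return new Fraction(
                this.numerator * that.denominator + that.numerator * this.denominator,
                this.denominator * that.denominator);
        }

        public long numerator()   { return numerator; }
        public long denominator() { return denominator; }
    }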


Properties of Fraction Addition


Scott Wlaschin's introduction to property-based testing covers the basic properties of addition fairly well.
  • Addition commutes: a + b === b + a
  • Addition associates: (a+b) + c === a + (b+c)
  • Addition has an identity: a + 0 === a
There is one other property for fractions that we need to consider -- fractions extend integers:
  • Extension: fraction(a) + fraction(b) === fraction(a+b)
This last property tells us right away that fraction(0) is an additive identity.

This property also implies that we really ought to be able to write an adapter for our fractions, and pass that adapter as the system under test to a suite of automated checks for integer addition.
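A sketch of that adapter, assuming a Fraction like the one above and an integer-addition suite written against some addition seam (IntegerAddition here is a hypothetical name of mine):

    // Hypothetical seam that the integer-addition checks are written against.
    interface IntegerAddition {
        long add(long a, long b);
    }

    // Adapter: route the integer checks through Fraction::add via n -> n/1.
    final class FractionAsIntegerAddition implements IntegerAddition {
        @Override
        public long add(long a, long b) {
            Fraction sum = new Fraction(a, 1).add(new Fraction(b, 1));
            // Extension property: the result should be the rational (a + b) / 1.
            return sum.numerator() / sum.denominator();
        }
    }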

I'm not entirely comfortable with those properties alone.  To choose a pathological example: if all fractions were fraction(0), all of the addition properties would be trivially satisfied.  So I would insist on having a "different paths, same destination" check to ensure that the numerator and denominator end up in the correct proportion.
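Hand-rolled, a couple of those checks might look like this (a sketch over the Fraction above, standing in for a real property-based testing tool):

    import java.util.Random;

    public class FractionAdditionProperties {
        public static void main(String[] args) {   // run with java -ea
            Random random = new Random(42);
            for (int i = 0; i < 1000; i++) {
                long an = random.nextInt(1000) - 500, ad = random.nextInt(999) + 1;
                long bn = random.nextInt(1000) - 500, bd = random.nextInt(999) + 1;
                Fraction a = new Fraction(an, ad);
                Fraction b = new Fraction(bn, bd);

                Fraction left = a.add(b);
                Fraction right = b.add(a);

                // Addition commutes: a + b and b + a name the same rational...
                assert left.numerator() * right.denominator()
                    == left.denominator() * right.numerator();

                // ...and "different paths, same destination": the sum is in the
                // proportion (an*bd + bn*ad) : (ad*bd), worked out independently.
                assert left.numerator() * (ad * bd)
                    == left.denominator() * (an * bd + bn * ad);
            }
        }
    }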

Compromises in the Digital World

All non-trivial abstractions, to some degree, are leaky.
-- Joel Spolsky, 2002

One small headache to keep in mind: rational numbers are an extension of the integers. As such, there are infinitely many of them -- which is an infinite number too many if we are restricted to a finite number of bits.

That in turn means that, somewhere along the line, we're going to need to address overflow, underflow, and rounding.
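To make the overflow point concrete: with fixed-width arithmetic, the cross multiplication inside add can blow up long before the rational itself is "big" in any meaningful sense. A small sketch using Math.multiplyExact to make the failure loud rather than silent:

    public class OverflowDemo {
        public static void main(String[] args) {
            // Two denominators that are individually unremarkable...
            long b = 3_000_000_000L;
            long d = 4_000_000_000L;
            // ...whose common denominator b * d (1.2e19) no longer fits in a
            // signed 64-bit long; multiplyExact throws instead of wrapping around.
            try {
                long commonDenominator = Math.multiplyExact(b, d);
                System.out.println("b * d = " + commonDenominator);
            } catch (ArithmeticException e) {
                System.out.println("overflow computing b * d: " + e.getMessage());
            }
        }
    }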

The exceptions described by IEEE 754 are an introduction to the sorts of things you need to worry about.  In particular, inexact results make a mess out of what might otherwise be identities.
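The double-precision classic makes the point: associativity, one of the identities listed above, already fails once intermediate results get rounded.

    public class InexactDemo {
        public static void main(String[] args) {
            // Associativity holds exactly for rationals, but not for doubles.
            double left  = (0.1 + 0.2) + 0.3;   // 0.6000000000000001
            double right = 0.1 + (0.2 + 0.3);   // 0.6
            System.out.println(left == right);  // false
        }
    }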

Again, I think there's a big gap in the literature here -- what does TDD look like when the problem you are trying to solve gets harder as you get deeper into the weeds?
