Friday, December 6, 2019

Banking: Skip to the Owl

I've been looking at the Codurance Banking exercise this evening:

Given a client makes a deposit of 1000 on 10-01-2012
And a deposit of 2000 on 13-01-2012
And a withdrawal of 500 on 14-01-2012
When they print their bank statement
Then they would see:

Date       || Amount || Balance
14/01/2012 || -500   || 2500
13/01/2012 || 2000   || 3000
10/01/2012 || 1000   || 1000

which is accompanied by an additional interface constraint:
public interface AccountService
{
    void deposit(int amount); 
    void withdraw(int amount); 
    void printStatement();
}

Here are the things that I see when I look at this problem.

First, I notice that we have dates. That implies that somewhere I'm going to need a calendar or a clock, because my interface doesn't have a way to provide a date. Similarly, I'm going to need some sort of output device that accepts the statement. I'm going to need some way to capture the dates and amounts so that they are available to be printed - a storage data structure of some form. And I'm going to need some functional compute to actually do the work.
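A minimal sketch of those collaborators might look something like this (the names and shapes are my assumptions, not part of the exercise):

import java.time.LocalDate;
import java.util.List;

interface Clock {
    LocalDate today();
}

interface Transaction {
    LocalDate date();
    int amount(); // signed: deposits positive, withdrawals negative
}

interface Storage {
    void record(Transaction transaction);
    List<Transaction> transactions();
}

interface Console {
    void printLine(String line);
}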

Let's draw the rest of the owl!

class TheRestOfTheOwl implements AccountService {
    interface StatelessCore {
        void deposit(int amount, Clock clock, Storage storage);
        void withdraw(int amount, Clock clock, Storage storage);
        void printStatement(Storage storage, Console console);
    }
    
    private final Clock clock;
    private final Storage storage;
    private final Console console;
    private final StatelessCore core;

    TheRestOfTheOwl(Clock clock, Storage storage, Console console, StatelessCore core) {
        this.clock = clock;
        this.storage = storage;
        this.console = console;
        this.core = core;
    }

    @Override
    public void deposit(int amount) {
        core.deposit(amount, clock, storage);
    }

    @Override
    public void withdraw(int amount) {
        core.withdraw(amount, clock, storage);
    }

    @Override
    public void printStatement() {
        core.printStatement(storage, console);
    }
}

You could write tests for that, but I don't see how you are going to get a lot of return out of that investment.

Now, let's suppose you didn't have the insight that Storage is a separate concept from the working core; you might have guessed that you could keep everything in an in-memory data structure, for example, because in your independent tests the lifetime of "the" account is coincident with the test universe itself. So you might start with a different owl:

class SimpleOwl implements AccountService {
    interface StatelessCore {
        void deposit(int amount, Clock clock);
        void withdraw(int amount, Clock clock);
        void printStatement(Console console);
    }

    private final Clock clock;
    private final Console console;
    private final StatelessCore core;

    SimpleOwl(Clock clock, Console console, StatelessCore core) {
        this.clock = clock;
        this.console = console;
        this.core = core;
    }
    
    @Override
    public void deposit(int amount) {
        core.deposit(amount, clock);
    }

    @Override
    public void withdraw(int amount) {
        core.withdraw(amount, clock);
    }

    @Override
    public void printStatement() {
        core.printStatement(console);
    }
}

Riddle: what happens now when you discover that storage should be a separate component?

One possible answer is to recognize that SimpleOwl is simple enough to throw away when it isn't useful any more.

Another possibility is to introduce an adapter....

class Adapter implements SimpleOwl.StatelessCore {
    final Storage storage;
    final TheRestOfTheOwl.StatelessCore core;

    Adapter(Storage storage, TheRestOfTheOwl.StatelessCore core) {
        this.storage = storage;
        this.core = core;
    }

    @Override
    public void deposit(int amount, Clock clock) {
        core.deposit(amount, clock, storage);
    }

    @Override
    public void withdraw(int amount, Clock clock) {
        core.withdraw(amount, clock, storage);
    }

    @Override
    public void printStatement(Console console) {
        core.printStatement(storage, console);
    }
}

Note that this illustrates that "Stateless Core" is a really lousy interface name; Adapter is a core that has state. Naming is one of the two hard problems.

You'd see a similar scramble when you realize that the description of your use case doesn't have any sort of representation of a unique account identifier to ensure that you retrieve the correct information from shared storage.

Thursday, November 14, 2019

Refactoring: Reads Before Writes

One of the advantages of practice coding is that it gives you permission to pause your progress and investigate process smells; did I just make a mistake?

Today, I was working on a refactoring exercise; I had code and tests in place, and all of my tests were passing.  I made a change, ran the tests, and the tests failed.  Startled, I reverted the change, and the tests passed again.

But I had to stop here, because I happened to notice a problem: the change that I had reverted to get the tests passing again was, in fact, correct.

What had happened?  Well, that's easy -- one of my earlier edits included a mistake, and because of that mistake the behavior of the code was incorrect.  But the tests didn't detect the mistake at the point where it was introduced.  Instead, the failure appeared later.

This concerns me, because one of the illusions that I carry around with me is that TDD pays off by keeping you out of the debugger, because mistakes are caught at the moment you make them.  Test-commit-revert is, to some degree, founded on this idea -- that you can discard a mistake by simply reverting the code.

Assuming, for the moment, that all other things are equal, I prefer habits that produce short intervals between the mistake and its detection to those that produce longer intervals.

So, what happened today?


I was working on a greenfield project; write a test, make it pass, clean it up, repeat. And my "imagined design" was not expressed in the code. The changes I want to introduce appear to be getting harder, so I'm interrupting my "progress" to make the next change easy.

My imagined design is a finite state machine, and I'm trying to organize the code in a way that introducing a new state (with new behavior) will be easy.

I'm writing a game, and tracking the token as it moves from the starting square to the second square; the tests verify the descriptions of the squares against a fixed transcript.  The test that I am anticipating moves the token back to the original square, with a different description than was initially displayed.

My thought was to introduce a predicate which is always true (preserving the current behavior), and then to introduce the new test which would pass if the predicate were false.  Thus, I would meet one of my goals: minimizing the time required to complete the "Green" task.

And, having practiced that approach several times this morning, I can report that it works fine.

But not the first time that I tried it.

The first time through, there were two differences in the technique that I used, both of which contributed.  One difference was that I typed my code, rather than using the refactoring tools.  That made it possible to introduce an error: a line of code that I thought was assigning a value to a variable was in fact a no-op.

The other difference was that I started my refactoring by introducing the writes: creating the new variable and assigning the data to it.  I introduced the error during this phase, but because the variable was not yet being read anywhere, the coding error had no impact at all on the behavior, so all of the tests continued to pass.  Only when I introduced the new variable into the predicate did the bug appear.
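For illustration (these names are invented, not the actual game code), the mistake was of roughly this shape:

class Square {
    private boolean previouslyVisited;

    void onTokenArrived() {
        // The mistake: this declares a new local variable that shadows the
        // field, so the "write" I thought I was making never happens.
        boolean previouslyVisited = true;
    }
}

Nothing reads the field yet, so every test still passes, and the mistake sits there quietly until a read is introduced.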

On the other hand, in the iterations where I started from "if (true)", mistakes were detected at the moment I introduced them.

It felt a bit like TDD As If You Meant It: if (true) is trivially correct; then true is rewritten into an equivalent comparison between two literal values; next we extract a variable from one of those literals; and then that variable can be moved around in the source code.  At each mistake, a simple revert/undo would take me back not merely to a passing state, but to actual bug-free code that passed its tests.
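Concretely, the sequence looks something like this; the square descriptions and names here are invented for the sketch:

class SquareDescription {
    // Step 1: a trivially correct guard; the current behavior is preserved
    // because the condition can only be true. The description for the
    // anticipated test sits in the other branch.
    String describeStep1() {
        if (true) {
            return "You are standing at the start of the trail.";
        }
        return "You are back at the start of the trail.";
    }

    // Step 2: rewrite true as an equivalent comparison between two literals.
    String describeStep2() {
        if (0 == 0) {
            return "You are standing at the start of the trail.";
        }
        return "You are back at the start of the trail.";
    }

    // Step 3: extract a variable from one of the literals; that variable can
    // now be moved around, and eventually wired up to real state.
    String describeStep3() {
        int timesVisited = 0;
        if (timesVisited == 0) {
            return "You are standing at the start of the trail.";
        }
        return "You are back at the start of the trail.";
    }
}

At every step the predicate is already being read, so a mistake changes observable behavior at the moment it is made.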

Another way of considering this lesson is that we should start as close as possible to the actual constraints of the tests, and work backwards from there, gradually moving our changes into the parts of our implementation where we have more freedom.

It's similar in nature to starting with a hard coded return value, and then removing duplication as your refactor your way toward the function arguments.

Sunday, November 10, 2019

TDD: Transport Tycoon, Exercise 1


I decided that I wanted to take a quick swing at this as a TDD exercise...

Because what we are being presented with is the behavior of a pure function, I used a facade as my test subject -- a single function accepting a string argument and returning an integer.  Behind that facade I can refactor my way towards a fully buzzword-compliant domain model, but these behavior tests aren't coupled to the model.
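A sketch of the shape of that facade and its scaffolding test; the names are mine, and the expected figure comes from my reading of the exercise, so check it against the exercise's own examples:

import static org.junit.jupiter.api.Assertions.assertEquals;
import org.junit.jupiter.api.Test;

class Exercise1 {
    // destinations is a string such as "AB": one character per container
    static int hoursToDeliverAll(String destinations) {
        // the straightforward implementation lives here; the domain model
        // gets teased out by refactoring later
        throw new UnsupportedOperationException("not implemented yet");
    }
}

class Exercise1Test {
    @Test
    void singleContainerForWarehouseB() {
        // a lone container for B goes by truck straight from the factory
        assertEquals(5, Exercise1.hoursToDeliverAll("B"));
    }
}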

That means, incidentally, that these tests aren't some beautiful enduring artifact that describes the model in exquisite detail; they are disposable scaffolding tests.

Because I wasn't sure where I was going, I simply implemented everything in a straightforward way within the facade.  I went three examples in with `if` statements before I started trying to tease out the implicit duplication in the model.  Eventually, all three branches turned into the same code, and then they were simplified to remove that duplication.

It became clear in working on some of the longer problems that I really wanted to have more confidence in the intermediate state.  That insight led me back to the idea that I wanted a "pure function" that could handle each piece of cargo one at time, so that I could track the evolution of the system at each step.  In theory, such a thing can be created via refactoring, but since I wanted confidence in the implementation, I decided to perform a separate TDD run on just that piece, and then verified that the tests against the original facade continued to pass when I applied the refactoring.

When that refactoring was complete, the implementation behind the facade was broken into two pieces -- a state machine to manage the bookkeeping of my "fleet", and a pure function that computed transitions from one state to another.
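Roughly this shape, with names that are my own rather than the ones in my solution:

// the pure piece: no side effects, just the next state computed from the current one
interface Transitions {
    VehicleState next(VehicleState current, char destination);
}

class VehicleState {
    final String location;
    final int availableAt; // the hour at which the vehicle is free again

    VehicleState(String location, int availableAt) {
        this.location = location;
        this.availableAt = availableAt;
    }
}

class Fleet {
    // the stateful bookkeeping: it asks Transitions for the next state and
    // records the answer, but does none of the computation itself
}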

I deliberately declined to maintain CQS discipline; separating the queries from the commands in my bookkeeping component appeared to be ceremony with no particular payoff in the exercise.


Friday, October 25, 2019

Following a better path

I intend to follow a better path, now that @marlenac has brought it to my attention.

When you are able to condense your software expertise into a practice that takes <10min and a relative beginner can mimic, and repeating that exercise thousands of times brings fresh insight to everyone at every level, you can call it a code kata. -- @jtu

The good TDD exercises do offer fresh insights at many levels, but let's face it: the creation of these exercises doesn't demonstrate a lifetime of mastery so much as a clever idea that survived a few rough drafts.  Maybe.

The time limit is a really interesting constraint -- many of my complaints about the TDD exercises that I've found are that their scope is much too brief; an hour of exploration isn't enough time to run into the problems that TDD purports to solve.


 

Friday, October 18, 2019

Refactoring Lessons at the Gilded Rose

This month, the local meetup took on a variant of the Gilded Rose kata.  As I was in the facilitator's chair, I didn't get to explore any new ideas this time around.  So I decided to work through the exercise this evening.

In doing so, I picked up a couple of interesting tricks from IntelliJ IDEA.

First, I learned that IntelliJ understands the idea of "run all tests in this package", which saves some of the headache of creating a regression suite when your tests are spread out across multiple files.

Second, I learned that IntelliJ can be quite clever about reducing predicates if it has a clear idea which invariant holds.  In various methods I introduced, an assert that described the preconditions for the Strings that were in scope allowed intellisense to remove a bunch of redundancy in the conditionals.
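As an illustration (this is a made-up fragment in the shape of the kata, not my actual code):

class Item {
    public String name;
    public int sellIn;
    public int quality;
}

class GildedRoseFragment {
    void updateOrdinaryItem(Item item) {
        // With the precondition spelled out, the name checks below are
        // provably always true, and the IDE offers to simplify them away.
        assert !item.name.equals("Aged Brie");
        assert !item.name.equals("Sulfuras, Hand of Ragnaros");

        if (!item.name.equals("Aged Brie")) {
            if (item.quality > 0) {
                if (!item.name.equals("Sulfuras, Hand of Ragnaros")) {
                    item.quality = item.quality - 1;
                }
            }
        }
    }
}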

Either it wasn't entirely clever, or I made some bad choices about which refactoring option to take when presented with a choice, but in a number of cases I ended up with ifs in front of empty blocks.  IntelliJ was also able to perform the refactoring to remove them, but I had to ask.


The grand strategy wasn't quite what I had expected it to be.  Back in the "Look, Ma, no hands" era, we tended to focus on micro refactorings, from which some Platonic ideal design would eventually emerge.  But for the Gilded Rose problem, the game seems to be to origami the code into a shape that intellisense recognizes.  So that means a bit of preparatory judo followed by swinging large blocks of code around so that the static code analysis can consider one domain problem at a time.

In short, instead of chasing a good design, I'm first chasing a design that the machine understands well enough to manipulate and simplify without requiring that I type anywhere near the complexity.  Let's face it, the carbonware is barely qualified to type at all -- its role is to point, and let the sand do the dirty work.

Tuesday, October 1, 2019

TDD: Safety in Numbers - a Bowling Game Adventure

Last night, I decided to work through a bowling game exercise, but it didn't quite turn out as I had expected.

The goal was to eliminate duplication: how much intention-revealing code could I introduce before moving on to the second test?

As suggested by Uncle Bob, I started with the degenerate case:
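In the usual presentation of the kata, that degenerate case is the gutter game; a sketch (with the conventional names) looks like:

import static org.junit.jupiter.api.Assertions.assertEquals;
import org.junit.jupiter.api.Test;

class BowlingGameTest {
    @Test
    void gutterGame() {
        Game game = new Game();
        for (int roll = 0; roll < 20; roll++) {
            game.roll(0);
        }
        assertEquals(0, game.score());
    }
}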

It is, of course, trivial to get this test passing. We simply hard code the required answer into the score method.
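Which is to say, something like:

class Game {
    void roll(int pins) {
        // the rolls are ignored entirely for now
    }

    int score() {
        return 0; // hard coded: exactly enough to satisfy the gutter game
    }
}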

The step that I expected to follow was to immediately start introducing domain concepts, like frames, into the production code while there was still but a single test constraint in place.

And it was a very uncomfortable experience - I realized fairly quickly that a simple pass signal wasn't enough to give me confidence that the math I was introducing was actually manipulating the figures correctly -- not enough to give me confidence that I wasn't introducing silly fence post errors.

After discarding the work and contemplating the ceiling for a time, I decided that I was having trouble because zero is the additive identity -- I couldn't look at my actual result and deduce how many numbers had been added together, because 10, 20, 100 zeros all sum to the same amount.

This evening, I tried a different initial test:
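The figures below are illustrative rather than necessarily the exact ones I used; the point is that every roll contributes something other than zero, so dropping or double counting a roll changes the total:

@Test
void tenOpenFrames() {
    Game game = new Game();
    for (int frame = 0; frame < 10; frame++) {
        game.roll(1);
        game.roll(2);
    }
    // ten frames of 1 + 2; an off-by-one in the summation is visible immediately
    assertEquals(30, game.score());
}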

The results are much better - fence post mistakes change the observable behavior in this circumstance, and are therefore easy to catch. The deviations from the expected results give an immediate hint at the error. We know from taking small steps which edit introduced a fault, and the distinct behaviors make it easy to recognize the precise nature of the fault.

By this evening's end, I managed to get all the way to "make the next change easy": using the gutter game as my second test produced a trivial pass, because the faults I had introduced during refactoring had already been detected and mitigated.

Saturday, September 7, 2019

TDD: On Fake Code

This past spring, David Tanzer published a short essay on transitioning from fake implementations to real ones.
When you do TDD, “fake implementations” or “wrong code” are OK, as long as they pass all the tests you have so far
But when do you stop to fake? When do you start writing “real code”?
Tanzer is using this as a stepping stone to introduce Uncle Bob's heuristic: as the tests get more specific, the code gets more generic.

But there is another answer, which I eventually learned from a comment written by Kent Beck:
Do you have some refactoring to do first?
Here is Tanzer's passing implementation:
And that's fine for our test calibration; we have successfully demonstrated that the test can distinguish the correct behavior from an incorrect behavior in this specific case.

But... the current implementation implicitly describes two pieces of domain knowledge that we can make explicit.
  • The length of the hint should be the same as the length of the secret word.
  • The initial representation of the hint should conceal all of the letters in the secret word, which is to say it should be entirely composed of the unrevealed letter token "_".
We don't have to wait for permission to introduce these ideas; they are always going to appear in a refactoring step, so we can cut to the chase and introduce them immediately.
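Making those two ideas explicit might look something like this (Tanzer's example is a hangman-style hint; the names here are assumptions, not his):

class Hints {
    private static final String UNREVEALED = "_";

    // the hint is exactly as long as the secret word, and conceals every letter
    static String initialHint(String secretWord) {
        return UNREVEALED.repeat(secretWord.length());
    }
}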

From there, we might notice that the secretWord we are using in the hint method is the same that was passed to the constructor, and extract that duplication. Or we might decide that the creation of the hint of the correct length is a single idea that can be extracted into another function, and do that.

You can start writing the real code as soon as you have a green bar.

Because I was reviewing Saff and Boshernitsan today, I have been thinking about Beck's Money demonstration.  Translated into Python, Beck's first test looks like

Riddle: what's the simplest implementation that will pass this test? There are probably several different answers, but the simplest I can come up with looks like:

No implementation, no variable names. Just 10. It's clear to me that this is "wrong code", in Tanzer's sense. But we don't need more tests to make it better; we can immediately refactor (in Beck's sense) to restore sanity to the implementation.

If we were being very small and deliberate in our refactoring, the refactoring sequence might look like:
Like "triangulation", small and deliberate steps are not required - they are a technique to practice so that you can get small when larger steps aren't working.