Tuesday, December 8, 2020

Jason Gorman: Outside In

Jason Gorman recently published a short video where he discusses outside-in TDD.  It's good; you should watch it.

That said, I do not like it.

So I want to start by focusing on a few things that I did like:

First, it is a very clean presentation.  He had a clear vision of the story he wanted to tell about his movement through the design, and his choice to mirror that movement helps to reinforce the lesson he is sharing.

Second, I really like his emphasis on consistency within an abstraction: that everything within a module sits at the same level of abstraction.

Third, I really liked the additional emphasis he placed on the composition root.  It's an honest presentation of some of the trade-offs that we face.  I myself tend to eschew "unit tests" in part because I don't like the number of additional edges they add to the dependency graph - here, he doesn't sweep those edges under the rug; they are right there where you can see them.


I learned TDD during an early part of the "Look ma, no hands!" era, which has left me somewhat prone to question whether the ritual is providing the value, or if instead it is the prior experience of the operator that should get the credit for the designs we see on the page.

Let us imagine, for a moment, that we were to implement the first test that he shows, and furthermore that we were going to write that code "assert first".  Our initial attempt might look like the implementation below.
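
His listing isn't reproduced here, so what follows is a sketch under assumed names - `Alert`, `Product`, an `alert` test double - rather than his actual code.  Written assert first, the entire test is the assertion we want to finish with:

```java
// Assert first: begin with the assertion we want to end with.
// (The names here - Alert, Product, the alert double - are my
// guesses at the shape of it, not Gorman's actual code.)
public class WhenStockRunsLow {
    @Test
    public void anAlertIsSent() {
        verify(alert).send(product);
    }
}
```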

Of course, this code does not compile.  Let's try "just enough code to compile".  So we need a Product with a constructor, and an Alert interface with a send method, and then to create a product instance and an alert mock.  In the interest of a clean presentation, I'll take advantage of the fact that Java allows me to have more than one implementation within the same source file.
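
Something like this sketch, perhaps (again, the specific names and fields are my assumptions, not his):

```java
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.verify;

import org.junit.jupiter.api.Test;

// Just enough skeleton to satisfy the compiler; no behavior yet.
interface Alert {
    void send(Product product);
}

class Product {
    Product(String name, int stockLevel, int reorderLevel) {
        // Deliberately empty - the compiler only needs the signature.
    }
}

public class WhenStockRunsLow {
    // Stock level (9) below reorder level (10): the alarm condition.
    private final Product product = new Product("widget", 9, 10);
    private final Alert alert = mock(Alert.class);

    @Test
    public void anAlertIsSent() {
        verify(alert).send(product);
    }
}
```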

What more do we need to make this test pass?  Not very much.
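
In my sketch, at least, the cheapest path to green doesn't require any production code at all - we can simply invoke the collaborator ourselves:

```java
@Test
public void anAlertIsSent() {
    // The simplest change that passes the verification: call the
    // collaborator directly.  No design is being driven out here.
    alert.send(product);

    verify(alert).send(product);
}
```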

This is not, by any means, completed code - there is certainly refactoring to be done, and duplication to remove, and all the good stuff.

The text of Jason Gorman's test and his presentation of it lead me to believe that the test was not supporting refactoring.  What I believe happened instead is that Gorman had a modular design in his mind, and he typed it in.  The behavior of ReorderLevel, in particular, has no motivation at all in the statement of the problem - it is an element introduced in the test because we've already decided that there should be a boundary there.

This isn't a criticism of the design, but rather of the notion that TDD can lead to modular code.  I'm not seeing leading here, but something more akin to the fact that TDD happened to be nearby while outside-in modular design was happening.

The assignment of credit is completely suspect.

The second thing that caught my eye is expressed within the assertion itself.  The presentation at the end of the exercise shows us tests, and code coverage, and contract tests, all great stuff... but after all that is done, we still ended up with a "data class" that doesn't implement its own version of `equals`.  The Mockito verification shown in the first test is an identity check - the test asks: did this particular instance, this reference, move through the code from where we put it in to where we took it out?

Product, here, is a data structure that holds information that was copied over the web.  The notion that we need a specific instance of it is suspect.
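
For contrast, here is a sketch of what value equality on `Product` might look like (the fields are, again, my assumptions).  With an override like this in place, the Mockito verification would match any equal copy of the data, instead of insisting on one particular reference:

```java
import java.util.Objects;

class Product {
    private final String name;
    private final int stockLevel;
    private final int reorderLevel;

    Product(String name, int stockLevel, int reorderLevel) {
        this.name = name;
        this.stockLevel = stockLevel;
        this.reorderLevel = reorderLevel;
    }

    // Value equality: two Products holding the same data are equal,
    // regardless of which instance carried the data through the code.
    @Override
    public boolean equals(Object o) {
        if (this == o) return true;
        if (!(o instanceof Product)) return false;
        Product other = (Product) o;
        return stockLevel == other.stockLevel
                && reorderLevel == other.reorderLevel
                && Objects.equals(name, other.name);
    }

    @Override
    public int hashCode() {
        return Objects.hash(name, stockLevel, reorderLevel);
    }
}
```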

Is it an oversight?  I've been thinking on that, and my current position is that it is indistinguishable from an oversight.  We're looking at a snapshot of a toy project, so it's very hard to know from the outside whether or not `Product` should override the behavior of `Object.equals` -- but there's no obvious point in the demonstration that you can point to and say "if that were a mistake, we would catch it here".

In other words: what kinds of design mistakes are the mistake detectors detecting?  Do we actually make those mistakes?  How long must we wait until the savings from detecting the mistakes offsets the cost of implementing and maintaining the mistake detectors?

Myself, I like designing from the outside in.  I have a lot of respect for endotesting. But at the end of the day, I find that the demonstrated technique doesn't lead to a modular design, but rather locks in a specific modular design.  And that makes me question whether the benefits of locking in offset the costs of maintaining the tests.