Sunday, November 6, 2016

A short lesson in unit testing.

I've been reviewing Uncle Bob's essay on Dijkstra's Algorithm and Greg Young's 2011 Probability Kata.  From these, I've taken away a few sharp lessons.

Test Specifications


It absolutely doesn't matter how you write your test specifications. Ideally, you'd like to maintain flexibility, so the notion of having them in a language-agnostic representation where anybody can work on them has merit.
If you are seeing references to your current domain model objects in your test specifications, you are way too tightly coupled.

API

  • Your test specifications don't talk to your implementations
  • Your test specifications don't talk to your implemented APIs
  • Your test specifications talk to your TEST API
More carefully, your test API interprets the test specifications, and adapts them to the system under test.


My current interpretation is that the test API should consist of three stages -- Given/When/Then is a pretty good approximation.  We need to load the system under test in the appropriate initial state (Given), we need to exchange a message with it (When), and we need to run checks on the message that we receive in the exchange (Then).
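
To make that concrete, a single specification run might be shaped something like this. This is a sketch only -- the names (ShortestPathSpec, TestApi, the edge strings) are mine rather than from either kata, and JUnit 4 is assumed:

    import static org.junit.Assert.assertEquals;

    import java.util.ArrayList;
    import java.util.Arrays;
    import java.util.List;

    import org.junit.Test;

    public class ShortestPathSpec {

        private final TestApi api = new TestApi();

        @Test
        public void reportsTheShortestRoute() {
            // Given: load the system under test in the appropriate initial state.
            api.givenEdges("A-B:1", "B-C:2", "A-C:9");

            // When: exchange a message with the system under test.
            String route = api.whenAskedForShortestPath("A", "C");

            // Then: run checks on the message received in the exchange.
            assertEquals("A,B,C", route);
        }

        // The test API interprets the specification's language and adapts it
        // to whatever the system under test happens to be.
        static class TestApi {
            private final List<String> edges = new ArrayList<>();

            void givenEdges(String... specs) {
                edges.addAll(Arrays.asList(specs));
            }

            String whenAskedForShortestPath(String from, String to) {
                // Translate the specification's language into the implementation's
                // API here; the specification above never sees those types.
                throw new UnsupportedOperationException("adapt to the system under test");
            }
        }
    }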

Note that last point - we don't want to be poking around in the internals of anything we are testing; we want any probes to be looking at a stable interpretation of the API we are testing.

Evolution over time

Worth repeating for emphasis - we should not need to change our checks when the implementation that we are measuring changes.  We're crossing a process boundary, so the messages connecting the test implementation to the subject should be DTOs in both directions.  For any given implementation, the DTOs should be stable: they are effectively part of the API of the implementation.
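
In Java terms, the messages crossing that boundary might be nothing fancier than plain values. A sketch, with type and field names I've made up:

    import java.util.Collections;
    import java.util.List;

    // Plain data crossing the boundary from the test toward the subject...
    final class ShortestPathQuery {
        final String origin;
        final String destination;

        ShortestPathQuery(String origin, String destination) {
            this.origin = origin;
            this.destination = destination;
        }
    }

    // ...and plain data coming back. No behavior, no implementation types:
    // just values that the checks can be written against.
    final class ShortestPathAnswer {
        final List<String> nodes;
        final int totalCost;

        ShortestPathAnswer(List<String> nodes, int totalCost) {
            this.nodes = Collections.unmodifiableList(nodes);
            this.totalCost = totalCost;
        }
    }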

But the test design should allow us to swap out one implementation for another with an entirely different API.  That will mean finding the new way to load instances of the implementation, and a new adapter of the messages from the specification language to the implementation's API.  The adapter also brings the implementation's API language back to the language of the specification, and the checks do not change.
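
One way to picture this is as a port spoken in the specification's language, with one adapter per implementation behind it. Again a sketch; RouteFinding and DijkstraAdapter are hypothetical names, not anything published with either kata:

    // The port that the specifications are written against. It speaks only the
    // specification's language: plain strings in, a plain answer out.
    interface RouteFinding {
        String shortestRoute(String edgeList, String origin, String destination);
    }

    // One adapter per implementation. It translates the specification's language
    // into the implementation's API, and translates the reply back. Swapping the
    // implementation means writing a new adapter; the specifications and their
    // checks do not change.
    final class DijkstraAdapter implements RouteFinding {
        @Override
        public String shortestRoute(String edgeList, String origin, String destination) {
            // 1. parse edgeList into whatever the implementation expects
            // 2. invoke the implementation
            // 3. flatten its reply back into the specification's representation
            throw new UnsupportedOperationException("adapt to this implementation's API");
        }
    }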

Think HTTP: the web server offers a stable interface against which a specification can be written.  The implementation of the resources on the web server adapts the incoming messages, creating new representations that can be exchanged with the domain's API.  The domain responds to the web server, the web server transforms the DTO from the domain model into one that conforms to the specification, and the result is sent back to the client.

So when we are looking at the execution of our specifications, we should be able to find a seam: a point in the execution where the specified inputs (Given, When) are passed to the Test API, and a response is returned that can be checked against Then.



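Uncle Bob's listing isn't reproduced here, but as I read it, the seam has roughly this shape. This is a reconstruction for illustration: makePathFinder and PathFinder are his names; the edge strings, method names, and the stub are invented:

    import static org.junit.Assert.assertEquals;

    import org.junit.Test;

    // Not Uncle Bob's listing -- a sketch of the shape being described.
    public class PathFinderSpecSketch {

        @Test
        public void findsAPath() {
            // The seam: specified inputs go in here...
            PathFinder finder = makePathFinder("A1B", "B1C");

            // ...but the thing that comes back is an implementation type with
            // behavior, and the checks poke at it directly.
            assertEquals("ABC", finder.path("A", "C"));
        }

        // The factory adapts the specification's edge language to the
        // implementation -- so far so good -- but it leaks PathFinder back out.
        private PathFinder makePathFinder(String... edges) {
            PathFinder finder = new PathFinder();
            for (String edge : edges) {
                finder.addEdge(edge); // placeholder for parsing the edge language
            }
            return finder;
        }

        // Stub standing in for the real implementation type.
        static class PathFinder {
            void addEdge(String edge) { /* elided */ }
            String path(String from, String to) { return "ABC"; }
        }
    }
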
In Uncle Bob's code, makePathFinder serves as the seam, although it is an imperfect one. (Editorial note: which is fine. Uncle Bob's kata was about test specifications leading to Dijkstra's algorithm. My kata is about test evolution. Horses for courses.)

The main flaw here is the appearance of the PathFinder type, which is taken from the implementation namespace, rather than the specification namespace. We can't see that from the signature alone, but it becomes clear if we drill deeper into the test implementation.

There we see that PathFinder is a type built from inputs, rather than from outputs, which is a really big hint that something has gone south. Only messages should be crossing the boundary from the solution to the test, but PathFinder is a type with behavior. The abstraction barrier is leaking. We shouldn't need to change our specification execution when the name of the target changes; that's a violation of encapsulation -- we should be able to replace the connection to the real implementation with a lookup table without needing to make any changes in the execution of the specifications.
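
If only plain data crossed the boundary, a lookup table would be a perfectly good stand-in. A sketch, reusing the hypothetical RouteFinding port from the earlier sketch:

    import java.util.HashMap;
    import java.util.Map;

    // A lookup table standing in for the real implementation. If only plain
    // data crosses the boundary, the specification execution cannot tell the
    // difference, and nothing in it has to change.
    final class CannedRouteFinding implements RouteFinding {
        private final Map<String, String> answers = new HashMap<>();

        CannedRouteFinding with(String origin, String destination, String route) {
            answers.put(origin + "->" + destination, route);
            return this;
        }

        @Override
        public String shortestRoute(String edgeList, String origin, String destination) {
            return answers.get(origin + "->" + destination);
        }
    }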


Sure, you could sexy it up with an immutable message type, or even an EnumMap, to ensure that you don't get confused. The point is to keep in mind that you are checking the specification against plain old data - the response crosses the barrier from the system under test to the test, and the test API aligns that response with the representations expected by your specification.
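
For instance, the checks might never see anything richer than an EnumMap. A sketch, with field names of my own choosing:

    import static org.junit.Assert.assertEquals;

    import java.util.EnumMap;
    import java.util.Map;

    // The test API flattens whatever the implementation returned into plain old
    // data; the checks compare that data with the values in the specification.
    class PlainDataCheck {

        enum Field { ROUTE, TOTAL_COST }

        Map<Field, String> expectedFromSpecification() {
            Map<Field, String> expected = new EnumMap<>(Field.class);
            expected.put(Field.ROUTE, "A,B,C");
            expected.put(Field.TOTAL_COST, "3");
            return expected;
        }

        void check(Map<Field, String> expected, Map<Field, String> actualFromTestApi) {
            // Nothing but data on either side of this comparison.
            assertEquals(expected, actualFromTestApi);
        }
    }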