Friday, January 6, 2023

Schools of Test Driven Development

There are two schools of test driven development.  There is the school that believes that there are two schools; and there is the school that believes that there is only one school.


The school of "only one school" is correct.

As far as I can tell, "Chicago School" is an innovation introduced by Jason Gorman in late 2010.  Possibly this is a nod to the fact that Object Mentor was based in Evanston, Illinois.

"Chicago School" appears to be synonymous with "Detroit School", a term proposed by Michael Feathers in early 2009.  Detroit, here, because the Chrysler Comprehensive Compensation team was based in Detroit; and the lineage of the "classical" TDD practices could be traced there.

Alongside that proposal, Feathers introduced the original "London School" label, for practices more directly influenced by the innovations at Connextra and the London Extreme Tuesday Club.

Feathers proposed the "school" labels because, at the time, he was finding the "Mockist" and "Classicist" labels unsatisfactory.

The notion that we have two different schools of TDDer comes from Martin Fowler in 2005.

This is the point where things get muddled - to wit, was Fowler correct to describe these schools as doing TDD differently, or are they instead applying the same TDD practices to a different style of code design?

Steve Freeman, writing in 2011, offered this observation:

...There are some differences between us and Kent. From what I remember of implementation patterns, in his tradition, the emphasis is on the classes. Interfaces are secondary to make things a little more flexible (hence the 'I' prefix). In our world, the interface is what matters and classes are somewhere to put the implementation.

In particular, there seems (in the London tradition) to be an emphasis on the interfaces between the test subject and its collaborators.
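
To make that concrete with an invented example of my own (not Freeman's): in the class-first habit the collaborator begins life as a concrete class, and an interface is extracted later if flexibility is needed; in the London tradition the role is named as an interface from the start, and the test subject is written against it.

    // Sketch only; the names here are invented for illustration.

    // Class-first habit: depend on a concrete collaborator, and extract
    // an interface later (often with an "I" prefix) if flexibility is needed.
    class CardGateway {
        void charge(String accountId, long amountInCents) { /* talk to the card network */ }
    }

    class CheckoutClassFirst {
        private final CardGateway gateway = new CardGateway();

        void settle(String accountId, long amountInCents) {
            gateway.charge(accountId, amountInCents);
        }
    }

    // Interface-first habit: the role is the design element; the class
    // that eventually implements it is just somewhere to put the implementation.
    interface PaymentGateway {
        void charge(String accountId, long amountInCents);
    }

    class CheckoutInterfaceFirst {
        private final PaymentGateway gateway;

        CheckoutInterfaceFirst(PaymentGateway gateway) {
            this.gateway = gateway;
        }

        void settle(String accountId, long amountInCents) {
            gateway.charge(accountId, amountInCents);
        }
    }

The second shape is also the one that leaves room for a test double to stand in for the role during a test.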

I've recently been reviewing Wirfs-Brock and Wilkerson's description of the Client/Server model:

Any object can act as either a client or a server at any given time

As far as I can tell, everybody has always been in agreement about how to design tests that evaluate whether the test subject implements its server responsibilities correctly.
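
For example (a minimal sketch with invented names, using JUnit 5 only for the assertion plumbing): a server responsibility is answering a question correctly, so the test sends the question and checks the answer.

    import static org.junit.jupiter.api.Assertions.assertEquals;

    import org.junit.jupiter.api.Test;

    // Hypothetical subject whose server responsibility is answering a question.
    class TaxCalculator {
        private final double rate;

        TaxCalculator(double rate) {
            this.rate = rate;
        }

        long totalInCents(long subtotalInCents) {
            return Math.round(subtotalInCents * (1 + rate));
        }
    }

    class TaxCalculatorTest {
        @Test
        void addsTenPercentTaxToTheSubtotal() {
            TaxCalculator calculator = new TaxCalculator(0.10);

            // The whole evaluation is: send the message, check the reply.
            assertEquals(1100, calculator.totalInCents(1000));
        }
    }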

But for the test subject's client responsibilities?  Well, you either ignore them, or you introduce new server responsibilities to act as a proxy measure for the client responsibilities (reducing the problem to one we have already solved), or you measure the client responsibilities directly.
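
The third option is the one the London-style tooling was built for. Here is a hedged sketch of measuring a client responsibility directly, with invented names and a hand-rolled spy where a mock object library would let you state the expectation up front:

    import static org.junit.jupiter.api.Assertions.assertEquals;

    import java.util.ArrayList;
    import java.util.List;

    import org.junit.jupiter.api.Test;

    // The role the subject talks to.
    interface Auditor {
        void record(String event);
    }

    // Hypothetical subject whose client responsibility is telling the
    // auditor about every withdrawal.
    class Account {
        private final Auditor auditor;
        private long balanceInCents;

        Account(long openingBalanceInCents, Auditor auditor) {
            this.balanceInCents = openingBalanceInCents;
            this.auditor = auditor;
        }

        void withdraw(long amountInCents) {
            balanceInCents -= amountInCents;
            auditor.record("withdrew " + amountInCents);
        }
    }

    class AccountTest {
        // Hand-rolled spy standing in for the Auditor role.
        static class SpyAuditor implements Auditor {
            final List<String> recorded = new ArrayList<>();

            @Override
            public void record(String event) {
                recorded.add(event);
            }
        }

        @Test
        void tellsTheAuditorAboutEachWithdrawal() {
            SpyAuditor auditor = new SpyAuditor();
            Account account = new Account(100_00, auditor);

            account.withdraw(40_00);

            // Measuring the client responsibility directly: did the
            // subject send the message it was supposed to send?
            assertEquals(List.of("withdrew 4000"), auditor.recorded);
        }
    }

The second option, by contrast, would be to give Account a balance query and assert on the number afterwards: a perfectly serviceable proxy, at the cost of a server responsibility that exists mostly for the benefit of the test.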

Mackinnon (2008) reported that extending the test subject with testing responsibilities was "compromising too far", and that John Nolan had therefore challenged the team to seek other approaches.

Reflecting in 2010, Steve Freeman observed: 

The underlying issue, which I only really understood while writing the book, is that there are different approaches to OO. Ours comes under what Ralph Johnson calls the "mystical" school of OO, which is that it's all about messages. If you don't follow this, then much of what we talk about doesn't work well. 

Similarly, Nat Pryce:

There are different ways of designing how a system is organised into modules and the interfaces between those modules....  Mock Objects are designed for test-driving code that is modularised into objects that communicate by "tell, don't ask" style message passing.
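
To illustrate what I take Pryce to mean (my example, not his): in "ask" style the caller pulls state out and makes the decision itself, which state-based assertions cover nicely; in "tell" style the decision stays with the object that holds the data, and the interesting observable effect is the outgoing message, which is exactly what a mock object is designed to observe.

    // An "ask" style controller would read the sensor and decide for itself:
    //
    //     if (sensor.temperature() < target) {
    //         heater.switchOn();
    //     }
    //
    // The "tell" style below keeps the decision next to the data and
    // reports the outcome by sending a message to a collaborator.

    interface Heater {
        void switchOn();
        void switchOff();
    }

    class Thermostat {
        private final double targetCelsius;
        private final Heater heater;

        Thermostat(double targetCelsius, Heater heater) {
            this.targetCelsius = targetCelsius;
            this.heater = heater;
        }

        // Tell the thermostat about a reading; it tells the heater what to do.
        void temperatureReading(double celsius) {
            if (celsius < targetCelsius) {
                heater.switchOn();
            } else {
                heater.switchOff();
            }
        }
    }

A test for Thermostat naturally wants a mock (or spy) Heater, because the only thing worth checking is which message was sent.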

My take, today, is still in alignment with the mockists: the TDD of the London school is the same TDD that everybody else practices, controlling the gap between decision and feedback, working test-first with red-green-refactor, and so on.

The object designs are different, and so we also see differences in the test design - because tests should be fit for purpose.