Alright, chapter one, and we're introduced to the process of writing the first test
... we imagine the perfect interface for our operation. We are telling ourselves a story about how the operation will look from the outside. Our story won't always come true, but it's better to start from the best-possible application program interface (API) and work backward
Which I find interesting on two fronts: first, where does this understanding of perfection come from? And second, immediately after his first example, Beck itemizes a number of ways in which the interface he has just imagined is unsatisfactory.
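For reference, the first test in the chapter looks roughly like this (I'm quoting from memory, so treat the details as approximate):

```java
public void testMultiplication() {
    Dollar five = new Dollar(5);
    five.times(2);
    assertEquals(10, five.amount);
}
```

And the complaints he immediately lists against it include, if I remember right, the side effect in `times()` and the public `amount` field.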
Another aspect of this example that I find unsatisfactory is that the presentation is rather in medias res. Beck is essentially working from the WyCash context - we already have a portfolio management system, and there's a gap in the middle of that system that we want to fill in. Or, if you prefer, a system where the simpler calculations are in place, but we want to refactor those calculations into an isolated module so that we might more easily make the new changes we want.
So we might imagine that we already have some code somewhere that knows how to multiply `dollars(5)` by `exchangeRate(2)`, and so on, and what we are trying to do is create a better replacement for that code.
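To make that concrete, I imagine something like this already sitting in the report code - an entirely hypothetical sketch, with names and shape that are mine rather than WyCash's:

```java
// Hypothetical: the kind of raw-number calculation the existing report code
// might already be doing, before any Dollar abstraction exists.
class ReportLine {
    static int lineItemTotal(int dollars, int exchangeRate) {
        return dollars * exchangeRate;   // e.g. 5 * 2 = 10
    }
}
```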
I'm not entirely satisfied with this initial interface for this case, however - it rather skips past the parts of the design where we take the information in the form we have it and express it in the form that we need it. In this case, we're looking at the production of a report, where the inputs are some transient representation of our portfolio position and the outputs are some transient representation of the report.
In effect, `new Dollar` is a lousy way to begin, because the `Dollar` class doesn't exist yet, so the code we have can't possibly be passing information to us that way.
I don't think it matters particularly much, in the sense that I don't think that the quality of the design you achieve in the end is particularly sensitive to where you start in the solution. And certainly there are a number of reasons that you might prefer to begin by exploring "how do we do the useful work in memory" before addressing the question of how we get the information we need to the places we need it.
Another quibble I have about the test example (although it took me many years to recognize it) is that we aren't doing a particularly good job of distinguishing between the measurement and the application of the "decision rule" that tells us whether the measured value is satisfactory.
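To illustrate the distinction I mean (my framing, not the book's): the measurement produces a value, and the decision rule judges whether that value is satisfactory.

```java
public void testMultiplication() {
    Dollar five = new Dollar(5);
    five.times(2);

    int measured = five.amount;   // the measurement: what value did we actually get?
    assertEquals(10, measured);   // the decision rule: is that value satisfactory?
}
```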
Moving on: an important lesson
We want the bar to go green as quickly as possible
The time it takes to get to green should be evaluated in wall-clock time - we want less of it, because we want the checks in place while we are making our mistakes.
A riddle - should TDD have a bias towards designs that produce quick greens, and if so is that a good thing?
(Isolating the measurement affords really quick greens via guard clauses and early returns. I'll have to think more on that.)
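What I have in mind is roughly this - a hypothetical sketch, not the book's code - where the suite only measures one case, so a guard clause and an early return get the bar green almost immediately:

```java
class Dollar {
    int amount;

    Dollar(int amount) {
        this.amount = amount;
    }

    void times(int multiplier) {
        // Guard clause: satisfy the one measured case and nothing else.
        if (amount == 5 && multiplier == 2) {
            amount = 10;
            return;   // early return; every other case is still unhandled
        }
        throw new UnsupportedOperationException("not generalized yet");
    }
}
```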
Once again, I notice that Kent's example racks up four compile errors before he starts working toward the green bar, where "nanocycle TDD" would encourage prioritizing the green bar over finishing the test. I'm not a big fan of nanocycle, myself, so I like having this evidence in hand when challenged.
We need to generalize before we move on.
You can call it generalizing, or you can call it removing duplication, but please notice that Kent is encouraging us to clean up the implementation before we introduce another failing test.
(There is, of course, room to argue with some of the labels being used - generalizing can change behaviors that we aren't constraining yet, so is it really "refactoring"? Beck and Fowler aren't on the same page here - I think Kent addresses this in a later chapter.)
By eliminating duplication before we go on to the next test, we maximize our chances of being able to get the next test running with one and only one change.
Ten years later, the same advice:
for each desired change, make the change easy (warning: this may be hard), then make the easy change
How much work we have to do to make the change easy is an interesting form of feedback about the design we started with; if it's often a lot of work, maybe our design heuristics need work.
The final part of the chapter is an important lesson on duplication that I don't think really got the traction that it should have. Here, we have a function that returns 10 -- but there are lots of functions in the world that return 10, so we should do the work to make it clear why this specific function returns 10.
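A sketch in the spirit of the chapter (not its exact code) of how that 10 gets explained away, one small step at a time:

```java
class Dollar {
    int amount;

    Dollar(int amount) {
        this.amount = amount;
    }

    void times(int multiplier) {
        // Step 1 (fake it):     amount = 10;      // why 10? no idea
        // Step 2 (explain it):  amount = 5 * 2;   // ah, 5 * 2 - but that duplicates the test data
        // Step 3 (generalize):
        amount = amount * multiplier;
    }
}
```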
(Heh: only now do I notice that we don't actually reach the multi-currency bits hinted at by the chapter title.)