The Boston Software Crafters meetup this week featured a variant on the Mars Rover exercise. I decided to take another swing at it on my own; Paul Reilly had introduced a clean notation for obstacles into the problem, and I wanted to try my hand at introducing that innovation at a late stage.
As is common in "Classic TDD", all of the usual I/O and network concerns are abstracted away in the pre-game, leaving only raw calculation and logic; given this input, produce the corresponding output.
My preferred approach is to work with tests that are representations of the complete behavior, and then introduce implementation elements as I need them. Another way of saying the same thing: rather than trying to guess where my "units" should be right out of the gate, I guess only at a single candidate interface that allows me to describe the problem in a way that the machine can understand, and then flesh out the implementation as additional details become clear.
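To make that concrete, here's a minimal sketch of what I mean by a single candidate interface; the names are hypothetical, not necessarily what I used in my session:

```java
// One seam: the complete input goes in, the complete report comes out,
// and all of the implementation detail hides behind it until the tests
// force more structure to appear.
public class Rover {
    public static String execute(String program) {
        // fleshed out test by test; x:y:orientation is the report format
        return "0:0:N";
    }
}
```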
So for this exercise session, I began by creating a test checklist. The "acceptance" tests provided by Paul went to the top of the list; these help to drive questions like "what should the API look like?" Rather than diving directly into trying to solve those cases, I took the time to add to the checklist a bunch of simple behaviors to write: what should the solution output when the input is an empty program? Or if the program contains only turn commands? Or if the program describes an orbit around the origin (which, in this exercise, meant worming into each corner of the finite grid)?
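As a sketch of what the simplest checklist items turn into (JUnit 5, hypothetical names; I'm assuming the rover starts at the origin facing North):

```java
import static org.junit.jupiter.api.Assertions.assertEquals;

import org.junit.jupiter.api.Test;

class RoverTest {
    @Test
    void emptyProgramLeavesTheRoverWhereItStarted() {
        assertEquals("0:0:N", Rover.execute(""));
    }

    @Test
    void turnsChangeOrientationButNotPosition() {
        // two lefts from North: N -> W -> S
        assertEquals("0:0:S", Rover.execute("LL"));
    }
}
```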
I came up with the empty program and the turns right away; the orbit around the origin occurred to me later, when I was thinking about how to properly test the relation between move and turn. If the problem had included cases of rovers starting away from the origin, I might not have addressed the coordinate wrapping effects quite so soon.
Along the way, one "extra" acceptance test occurred to me.
The empty program obviously had the simplest behavior, and the reporting format was separated from the calculation right away. After passing a test with two left turns, there was a period of refactoring to introduce the notion of working through program instructions (still just primitive strings at this point); after that, the remaining lefts were trivial to add, and rights were painless because they are simply a reflection of the lefts.
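Roughly where that refactoring landed, as a sketch (the names are hypothetical, and orientations are still primitive strings; rights are the left table read in the other direction):

```java
class Turns {
    static String left(String orientation) {
        switch (orientation) {
            case "N": return "W";
            case "W": return "S";
            case "S": return "E";
            case "E": return "N";
            default: throw new IllegalArgumentException(orientation);
        }
    }

    static String right(String orientation) {
        // the reflection of the left table
        switch (orientation) {
            case "N": return "E";
            case "E": return "S";
            case "S": return "W";
            case "W": return "N";
            default: throw new IllegalArgumentException(orientation);
        }
    }
}
```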
It happened that the handling of rights and lefts suggested a similar pattern for handling moves in X and Y, so those four tests went in fine, with only one bobble where I had inadvertently transposed X and Y. And then, TA-DA: I put in all of the acceptance tests that ignore the question of obstacles, and they all passed.
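The moves follow the same shape. A sketch, assuming a 10x10 grid that wraps at the edges (the grid size here is a guess for illustration); the bobble I mentioned is exactly a matter of swapping x and y in a table like this:

```java
class Moves {
    static final int WIDTH = 10, HEIGHT = 10; // hypothetical grid size

    // returns {x, y} after one move, wrapping at the grid edges
    static int[] step(int x, int y, String orientation) {
        switch (orientation) {
            case "N": return new int[] { x, (y + 1) % HEIGHT };
            case "S": return new int[] { x, (y - 1 + HEIGHT) % HEIGHT };
            case "E": return new int[] { (x + 1) % WIDTH, y };
            case "W": return new int[] { (x - 1 + WIDTH) % WIDTH, y };
            default: throw new IllegalArgumentException(orientation);
        }
    }
}
```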
At this point, what do things look like? I've got about 20 "high level" tests and a representation of the solution that the machine understands, but no particular fondness for the human properties of the solution - the design is easy to work in (primarily, I suspect, because it is still familiar), but it doesn't describe the domain very well, or support other alternative interfaces or problems. In short, it's a function monolith.
I had expected the obstacle to add a challenge, but a quirk in the Java language actually made the API change I needed trivial; with that, adding a bit more to the monolith solved the whole mess.
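I won't reproduce the session code, but varargs is one Java quirk with exactly this property, so I'll use it here as an illustrative guess rather than a transcript of my change:

```java
public class Rover {
    // Adding a trailing varargs parameter keeps the old call sites --
    // Rover.execute(program) -- compiling and passing unchanged, while
    // new tests can pass obstacles in whatever notation the problem uses.
    public static String execute(String program, String... obstacles) {
        // the monolith grows a check against the obstacle positions here
        return "0:0:N"; // placeholder; the real body is the monolith
    }
}
```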
In short, the code I've got most resembles the Gilded Rose; not quite such a rat's nest, but that's more a reflection of the fact that the rules I'm working with are a lot more regular.
The good news is that I have test coverage, so I've no concerns about making changes to the implementation.
The disappointing news - the modules that I want to have aren't really teased out. Borrowing from the language of Parnas, the tests as written span quite a few decisions that might change later. As a consequence, those tests are brittle.
For instance, we might imagine that there had been a misunderstanding of the output message format; our code reports x:y:orientation, but the requirement is orientation:x:y. This isn't a difficult fix to make -- in my implementation, this is one line of code to change. But the tests as written are all tightly coupled to the wrong output representation, so those tests are all going to require some modification to reflect the changed requirements.
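To see the brittleness concretely, here's a hypothetical sketch (Position, Rover.run, and Report.format are invented names for the structured result, the calculation, and the formatting seam; none of this is from my actual code). The first test bakes the format into its assertion; the second pair lets the format live in exactly one test:

```java
import static org.junit.jupiter.api.Assertions.assertEquals;

import org.junit.jupiter.api.Test;

record Position(int x, int y, String orientation) {}

class ReportCouplingTest {
    @Test
    void brittleBecauseEveryTestKnowsTheFormat() {
        // a format change breaks this test and every one like it
        assertEquals("4:3:E", Rover.execute("MMMRMMMM"));
    }

    @Test
    void decoupledFromTheReportFormat() {
        assertEquals(new Position(4, 3, "E"), Rover.run("MMMRMMMM"));
    }

    @Test
    void theOnlyTestThatKnowsTheFormat() {
        assertEquals("4:3:E", Report.format(new Position(4, 3, "E")));
    }
}
```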
This can certainly be fixed - the scope of work is small, we've caught the problem in time, and so on.
But what I do want to emphasize is that this is, in a sense, extra work. The design that I want didn't organically appear during the refactoring steps.
Why is that? My thinking is that I was "optimizing" my design for the wall-clock time of getting all of the constraints in place; having the constraints means that other design work is safe. But if I had caught the design error sooner, I would have taken slightly longer to finish implementing the constraints, but would also have had less work to do to be "finished", in the sense of having good bulkheads to guard against future change.
Short exercises don't, in my experience, express changing requirements very well. Can we decouple the logic from the input representation? From the output representation? Can we decouple the algorithm from the data representations? Is it even clear where these boundaries are, so that we can approach the problem six months later?
And most importantly - do you effortlessly produce these boundaries on your first pass through the code?