In my recent TDD practice, I've been continuing to explore the implications of edge to edge tests.
The core idea is this: if we design the constraints on our system at the edge, then we maximize the degrees of freedom we have in our design.
Within a single design session, this works just fine. My demonstration of the Mars Rover kata limits its assumptions to the edge of the system, and the initial test is made to pass by simply removing the duplication.
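To make that concrete, here is a minimal sketch of such a test, assuming a hypothetical Rover.execute entry point that takes the whole mission description as text and answers with the final report as text (the names here are illustrative):

```java
import static org.junit.jupiter.api.Assertions.assertEquals;

import org.junit.jupiter.api.Test;

class RoverEdgeToEdgeTest {

    @Test
    void reportsTheFinalPositionForTheSampleMission() {
        // The only assumptions live at the edge: text goes in, text comes out.
        // Nothing here constrains how the rover parses, moves, or formats.
        String mission = String.join("\n",
                "5 5",
                "1 2 N",
                "LMLMLMLMM");

        assertEquals("1 3 N", Rover.execute(mission));
    }
}
```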
The advantage of such a test is that it is resilient to changes in the design. You can change the arrangement of the internals of the test subject, and the test itself remains relevant.
The disadvantage of such a test is that it is not resilient to changes in the requirements.
It's common in TDD demonstrations to work with a fixed set of constraints throughout the design session. Yes, we tend to introduce the constraints in increments, but taken as a set they tend to be consistent.
The Golden Master approach works just fine under those conditions; we can extend our transcript with descriptions of extensions, and then amend the test subject to match.
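By way of illustration, such a check can be as small as a single comparison against a recorded transcript; the sketch below assumes the same hypothetical Rover.execute entry point and illustrative file locations:

```java
import static org.junit.jupiter.api.Assertions.assertEquals;

import java.nio.file.Files;
import java.nio.file.Path;

import org.junit.jupiter.api.Test;

class RoverGoldenMasterTest {

    @Test
    void matchesTheRecordedTranscript() throws Exception {
        // The transcript pairs the missions we care about with the reports we
        // recorded back when we last trusted the behavior of the system.
        String missions = Files.readString(Path.of("src/test/resources/missions.txt"));
        String expected = Files.readString(Path.of("src/test/resources/golden-master.txt"));

        // A single opaque comparison covers every behavior the transcript exercises.
        assertEquals(expected, Rover.execute(missions));
    }
}
```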
But a change in behavior? And suddenly an opaque comparison to the Golden Master fails, and we have to discard all of the bath water in addition to the baby.
We might describe the problem this way: the edge to edge test spans many different behaviors, and a change to a single behavior in the system may butterfly all the way out to the observable behavior.
In other words, the same property that makes the test useful when refactoring acts against us when we introduce a modification to the behavior.
One way to sidestep this is to take as given that a new behavior means a new test subject. We'll flesh out an element from scratch, using the refactoring task in the TDD cycle to absorb our previous work into the new solution.
I haven't found that this is particularly convenient for consumers. "Please update your dependency to the latest version of my library, and also change the name you use to call it" isn't a message I expect to be well received by maintainers that haven't already introduced design affordances for this sort of change.
So what else? How do we arrange our tests so that we don't need to start from scratch each time we get a request for a breaking change?
Recently, I happened to be thinking about a particular check in one of my tests.
When this check fails, we "know" that there is a bug in the test subject. But why do we know that?
If you squint a bit, you might realize that we aren't really testing the subject in isolation, but rather checking whether or not its behavior is consistent with other elements that we have high confidence in. "When you hear hoof beats, expect horses, not zebras."
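Here is a minimal sketch of that kind of check, assuming a hypothetical Rover subject and a small, already trusted Position element (again, the names are illustrative):

```java
import static org.junit.jupiter.api.Assertions.assertEquals;

import org.junit.jupiter.api.Test;

class RoverConsistencyTest {

    @Test
    void agreesWithTheTrustedPositionArithmetic() {
        // Position is a small element we already have high confidence in;
        // the check asks whether the rover's answer is consistent with it.
        Position expected = Position.parse("1 2 N").turnLeft().move();

        String report = Rover.execute(String.join("\n", "5 5", "1 2 N", "LM"));

        assertEquals(expected.describe(), report);
    }
}
```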
Kent Beck describes similar ideas in his discussion of unit tests.
There is a sort of transitive assertion that we can make: if the behavior of the subject is consistent with some other behavior, and we are confident that the other behavior is correct, then we can assume the behavior of the test subject is correct.
What this affords is that we can take the edge to edge test and express the desired behavior as a composition of other smaller behaviors that we are confident in. The Golden Master can be dynamically generated from the behavior of the smaller elements.
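A sketch of what that might look like, still with the illustrative Rover, Position, and Instructions names: the master is assembled from the smaller behaviors rather than stored as an opaque blob.

```java
import static org.junit.jupiter.api.Assertions.assertEquals;

import java.util.List;

import org.junit.jupiter.api.Test;

class GeneratedGoldenMasterTest {

    @Test
    void matchesAMasterComposedFromTrustedElements() {
        List<String[]> cases = List.of(
                new String[] {"1 2 N", "LMLMLMLMM"},
                new String[] {"3 3 E", "MMRMMRMRRM"});

        StringBuilder mission = new StringBuilder("5 5");
        StringBuilder master = new StringBuilder();
        for (String[] c : cases) {
            mission.append('\n').append(c[0]).append('\n').append(c[1]);

            // The expected report is computed from the smaller, trusted
            // elements instead of being read from a frozen transcript file.
            Position end = Instructions.parse(c[1]).applyTo(Position.parse(c[0]));
            if (master.length() > 0) {
                master.append('\n');
            }
            master.append(end.describe());
        }

        assertEquals(master.toString(), Rover.execute(mission.toString()));
    }
}
```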
Of course, the confidence in those smaller elements comes from having tests of their own, verifying that those behaviors are consistent with simpler, more trusted elements. It's turtles all the way down.
In this sort of design, the smaller components in the system act as bulkheads for change.
I feel that I should call out the fact that some care is required in the description of the checks we create in this style. We should not be trying to verify that the larger component is implemented using some smaller component, but only that its behavior is consistent with that of the smaller component.
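For instance, continuing with the same illustrative names, the assertion below compares answers rather than collaborations:

```java
import static org.junit.jupiter.api.Assertions.assertEquals;

import org.junit.jupiter.api.Test;

class RoverBehaviorConsistencyTest {

    @Test
    void behaviorAgreesWithTheSmallerElements() {
        // Compares answers only; the rover remains free to compute its answer
        // any way it likes, with or without a Position object inside it.
        assertEquals(
                Instructions.parse("LM").applyTo(Position.parse("1 2 N")).describe(),
                Rover.execute(String.join("\n", "5 5", "1 2 N", "LM")));
    }

    // What we should not write here is a mock-style verification along the
    // lines of "verify that the rover called Position.turnLeft()"; that would
    // assert how the larger component is implemented rather than what it
    // does, and it would fail under a behavior-preserving refactoring.
}
```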