Friday, December 22, 2017

Injection: a refactoring exercise.

Yesterday, I needed to extend a program that had a broken design.

The existing program works fine, but it has an identifier for a specific database host hard wired into it.  That host name is used during execution to create the URL string that is passed to JDBC, by merging the hard coded host with a configured database name.

An additional constraint is that I need to have the old and new versions of the program running in parallel, in the same environment.  We're going to run the two reports in parallel for a time, and then decommission the legacy.

So, how to dig my way out?  Here's the outline of how I did it....

As is often the case, I rely on Parnas -- the legacy implementation is the result of a decision that I want to change. Therefore, I begin by creating a boundary around that decision.

In this case, the code in question was computing a new string from the hard coded literal and a member variable.  So I applied an "extract method" refactoring to give me a member function that promised to provide the string that I needed.  The method simply returned the calculated string, as before.
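A sketch of the move (the names here -- Report, the host, the URL scheme -- are all invented, not the actual code):

    class Report {
        private final String databaseName;   // the configured database name

        Report(String databaseName) { this.databaseName = databaseName; }

        void run() {
            String url = databaseUrl();   // previously computed inline, right here
            // ... hand url off to JDBC ...
        }

        // extract method: a boundary around the decision I want to change
        private String databaseUrl() {
            return "jdbc:mysql://db-legacy.example.com/" + databaseName;
        }
    }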

I then created a new type; the configuration model already included DatabaseName as an explicit concept, but it didn't have an understanding of the URL.  So I introduced a DatabaseUrl type that was simply an alias for a string.

I then inserted that type into the middle of the method -- so we compute the string, use the string to create a new DatabaseUrl, then use the DatabaseUrl to yield the string to return.

Next, I apply extract method again, creating a function that converts the DatabaseName to the DatabaseUrl.

Next, I take DatabaseUrl, which had been a local variable, and make it a private member of the object; it now has the same lifetime and scope as DatabaseName, the only difference being that it is initialized at a different point in the program.

Next, I copy the logic that initializes the DatabaseUrl into the method where the DatabaseName was initialized.  I now have a redundant copy, which I can remove -- the query that produces the string representation of the url when I need it is just a toString() on the member value.

At this point, the only place where the DatabaseName is used is in the initializer, so I can demote that to a local variable in the method, and inline it away.
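The end state of that sequence, sketched with the same invented names:

    class DatabaseUrl {
        private final String value;

        DatabaseUrl(String value) { this.value = value; }

        @Override
        public String toString() { return value; }
    }

    class Report {
        // same lifetime and scope as the DatabaseName member had
        private final DatabaseUrl databaseUrl;

        Report(String databaseName) {   // DatabaseName demoted to a local
            this.databaseUrl = new DatabaseUrl(
                "jdbc:mysql://db-legacy.example.com/" + databaseName);
        }

        private String databaseUrl() {
            return databaseUrl.toString();   // the redundant computation is gone
        }
    }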

I don't think we should be using the old method on the API any more, so I mark it as deprecated.

Reviewing at this point -- through a sequence of refactorings, I've managed to extend the existing API, adding the new method I need to it, while at the same time ensuring that my existing tests are exercising that code (because the implementation of the old API feeds into the new one).

The legacy implementation includes a factory which, when passed the configuration (captured within a Properties object), returns the interpreted configuration.  So the new piece I need is a factory with the same signature, that understands the new configuration.

But before I can think about creating a new factory implementation, I have to address a second problem -- I had been sloppy about ensuring the integrity of the composition root.  The factory I need to replace is allocated behind two different default constructors.

So I need to perform a second round of refactoring.  The member variable for the factory that I need to replace is too specific, as it names the implementation directly.  So the spelling of that type needs to be changed to the interface.  Fortunately, the interface I need is already available, so I can just replace it.

Then we delete the assignment at the member variable declaration.  At this point, everything goes red, because we aren't assigning a final field.  Next, we fix the compilation problem by allocating a new instance of the factory in the constructor body.

Next, we work our way back up the stack to the composition root.  There are a couple of possibilities: you can develop a completely new path from the root to the assignment so that the factory can be passed along, and then, with everything wired up, remove the duplication; or you can create new signatures, but play criss cross -- the existing implementation gets the new signature, and a new method appears which creates a default factory and then routes to the real implementation.

I prefer the latter course; you only really need to preserve the signatures that are part of the public API.  Anything that is private to the module you can simply change, watching for local compile errors.
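In outline, the criss cross looks like this (the factory and report names are invented):

    // The old public signature survives, but now it only supplies the
    // default factory and routes to the real implementation...
    public Report createReport(java.util.Properties config) {
        return createReport(config, new LegacyConfigurationFactory());
    }

    // ...while the existing implementation acquires the new signature.
    public Report createReport(java.util.Properties config, ConfigurationFactory factory) {
        // the original body, now drawing its configuration from the injected factory
        return new Report(factory.interpret(config));
    }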

At the end of the game, I have the starting position I've been waiting for: my legacy implementation, with the creation of the default factory at the composition root.

Now I'm ready to make the easy change - implement the new factory, clone the composition root, and replace the legacy factory with the new implementation.

Sunday, November 5, 2017

TDD: Mars Rover

I took some time this weekend to practice the Mars Rover kata again.

My focus in the exercise was duplication - can we proceed from hard coded outputs, and arrive at a viable implementation simply by removing the duplication that must be present?  Of course, the answer is yes -- duplication is duplication; you can remove it any time you like.

I shan't walk through the process here; it should be reasonably straightforward to follow along from the commit history.

With just the one example, I needed almost no investment in the test harness at all; all of my time was spent working in my production code.  Setting aside the one error that forced me to revert, practically all of the work was in the green bar.

My guess is that the process went as smoothly as it did because my previous experiments in the kata act as spike solutions -- I'm not discovering how to achieve the result at the same time that I'm discovering how to remove the duplication.  Removing the duplication is comfortable at each step, because I can see the code evolving towards a design that is already familiar.

This code is tightly coupled to the data representations that we were given for this problem.  Do we need to invest now in the abstractions that would make it easier to change data representations later?  YAGNI says, no, not really; and that's certainly true for a kata in a limited time box.  Making the next change easy is much easier after you know what the next change is going to be.

Kent Beck talked a bit about this in his discussion of triangulation
Triangulation feels funny to me.  I use it only when I am completely unsure of how to refactor.  If I can see how to eliminate duplication between code and tests and create the general solution, then I just do it.  Why would I need to write another test to give me permission to write what I probably could have written in the first place.
Smaller bites make sense, I think, when the way forward isn't clear.  But the Feynman algorithm also works.  So what does that teach us?

Horses for courses; but I find that answer unsatisfying.

I think it's closer to true to recognize that the tests aren't driving anything; they are instruments that measure whether or not our current implementation is satisfactory over some range of examples.  In particular, they can give us awareness of whether a change we've made has certain classes of unintended side effects.  They inform us, a little bit, about a possible interface.

With the instrumentation in place, we apply changes to the code in increments; not for the design, but because the short feedback cycle makes it easier to localize the errors we introduce along the way.

There are quite a few kata that progress toward a single implementation in increments, what I've come to think of as extending a particular set of requirements.  I think we need more examples of a different sort, where we have to make something new of code originally written for another purpose.

Sunday, October 29, 2017

Aggregate Semantics

First, we need an abstraction for encapsulating references within the model.
This past week, I had a brief, on-line discussion with Yves Reynhout about repositories.  He pointed out that there is a difference between persistence oriented repositories and collection oriented repositories.

A REPOSITORY represents all objects of a certain type as a conceptual set.  It acts like a collection....
(Repositories) present clients with a simple model for obtaining persistent objects and managing their life cycle.
The invocation of the repository might be as simple as
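(a sketch; the aggregate and command here are illustrative)

    TradeBook book = tradeBooks.get(tradeBookId);
    book.placeOrder(order);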



The key insight is that this is just a semantic; behind the interface, the domain model is free to choose its implementation.



We're not really -- at least, not necessarily -- "invoking a method on an entity in the model".  What we are actually doing in this design is capturing two key pieces of information

  • Which document in our data model are we trying to modify
  • Which behavior in our domain model are we modifying that document with
In the usual pattern, the underlying implementation looks like "an object"; we acquire some state from the data store, wrap it with domain methods, let the application invoke one, extract the state back out of the object, and write it to the store.

This pattern bothered me for a long time -- we shouldn't "need" getters and setters on the domain objects, yet somehow we need to get the current state into and out of them.  Furthermore, we've got this weird two phase commit thing going on, where we are mutating both an entity in memory and the book of record.

A different analogy to consider when updating the model is the idea that we are taking a command -- sent to the domain model -- and transforming it into a command sent to the data.  In other words, we pass the behavior that we want to the data model, saying "update your state according to these business rules".

Tell, don't ask.
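In code, the contrast looks something like this (a schematic; the store API is invented):

    // Ask: pull the state out, compute outside, push the state back.
    Document current = store.read(id);
    Document next = businessRule.apply(current);
    store.write(id, next);

    // Tell: hand the behavior to the data model.
    store.update(id, businessRule);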





Sunday, October 22, 2017

Test Driven Diamonds

I've been reviewing, and ruminating upon, a bunch of experiences folks have reported with the diamond kata.  A few ideas finally kicked free.

Overview

Seb noted this progression in the tests
The usual approach is to start with a test for the simple case where the diamond consists of just a single ‘A’
The next test is usually for a proper diamond consisting of ‘A’ and ‘B’; It’s easy enough to get this to pass by hardcoding the result.
We move on to the letter ‘C’: The code is now screaming for us to refactor it, but to keep all the tests passing most people try to solve the entire problem at once.
Alistair expressed the same conclusion more precisely
all the work that they did so far is of no use to them whatsoever when they hit ‘C’, because that’s where all the complexity is. In essence, they have to solve the entire problem, and throw away masses of code, when they hit ‘C’

George's answer is "back up and work in smaller steps", but I don't find that answer particularly satisfactory.  I believe that there's something deeper going on.

Background

Long ago, I was trying to demonstrate simplest thing that could possibly work on the testdrivendevelopment discussion group.  I had offered this demonstration:
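(a reconstruction of the genre -- one passing check, backed by a hard coded answer -- not the actual exchange)

    import static org.junit.Assert.assertEquals;

    @Test
    public void addition() {
        assertEquals(4, add(2, 2));
    }

    static int add(int augend, int addend) {
        return 4;   // the simplest thing that could possibly work
    }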



And Kent Beck in turn replied 
 Do you have some refactoring to do first?
Sadly, I wasn't ready for that particular insight that day.  But I come back to it from time to time, and am starting to feel like I'm beginning to appreciate the shape of it.

On Unit Tests

Unit tests are a check that our latest change didn't break anything -- the behavior specifications that were satisfied before the change are still satisfied after the change.

Big changes can be decomposed into many small changes.  So if we are working in small increments, we are going to be running the tests a lot.  Complex behaviors can be expressed as many simple behaviors; so we expect to have a lot of tests that we run a lot.

This is going to suck unless our test suite has properties that make it tolerable; we want the tests to be reliable, we want the interruption of running the tests to be as small as possible -- we're running the automated checks to support our real work.

One way to ensure that the interruption is brief is to run the tests in parallel.  Achieving reliability with concurrently executing tests introduces some constraints - the behaviors of the tests need to be isolated from one another.  Any assets that are shared by concurrently executing tests have to be immutable (otherwise you are vulnerable to races).

This constraint effectively means that the automated checks are pure functions: given-when-then describes the inputs -- a deterministic representation of the initial configuration of the system under test, a message, and a representation of the specification to be satisfied -- and the output is a simple boolean.

This test function is in turn composed of some test boilerplate and a call to some function provided by the system under test.
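Schematically (a shape, not a framework API):

    import java.util.function.Function;
    import java.util.function.Predicate;

    // The check is a pure function: deterministic inputs in, a boolean out.
    static <WHEN, THEN> boolean check(Function<WHEN, THEN> given,  // the configured system under test
                                      WHEN message,                // the input
                                      Predicate<THEN> then) {      // the specification
        return then.test(given.apply(message));
    }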

Implications

... since every piece of matter in the Universe is in some way affected by every other piece of matter in the Universe, it is in theory possible to extrapolate the whole of creation — every sun, every planet, their orbits, their composition and their economic and social history from, say, one small piece of fairy cake.
If we are, in fact, testing a function, then all of the variation in outputs must be encoded into the inputs.  That's the duplication Beck was describing.

Beck's ability to discover duplication has been remarked on elsewhere, and I suspect that this principle is a big part of it.  Much of the duplication is implicit -- you need to dig to make the implicit explicit.

Example

To show what I mean, let's take a look at the production code for the Moose team after the first passing test.
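Something like this, presumably (a sketch; the Moose team's actual listing may have differed):

    public static String diamond(char widest) {
        return "A";
    }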


We've got a passing test, so before we proceed: where is the duplication?  One bit of duplication is that the "A" that appears in the output is derived from the `A` in the input.  Also, the number of lines in the output is derived from the input,  as is the relative position of the A in that output.


There's nothing magic about the third test, or even the second, that removes the shackles - you can attack the duplication as soon as you have a green bar.  All of the essential complexity of the space is included in the first problem, so you can immediately engage the design dynamo and solve the entire problem.

So when do we need more than one test?  I think of the bowling game, where the nature of the problem space allows us to introduce the rules in a gradual way.  The sequencing of the tests in the bowling game allows us to introduce the complexity of the domain in a controlled fashion - marks, and the complexities of extra throws in the final frame are deferred.

But it doesn't have to be that way - the Mars Rover kata gives you a huge slice of navigation complexity in the first example; once you have eliminated the duplication between the input and the output, you are done with navigation, and can introduce some other complexity (wrapping, or collisions).

Of course, if the chunk of complexity is large, you can easily get stuck.   Uncle Bob touched on this in his write up of the word wrap kata; passing the next test can be really difficult if you introduce too many new ideas at once.  Using a test order that introduces new ideas one at a time is an effective way to build up a solution.  Being willing to suspend complex work to find a more manageable slice is an important skill.

And, in truth, I did the same thing in the rover; essentially pushing as far as I could with the acceptance test alone, then suspending that work in favor of a new test isolating the next behavior I needed, refactoring my solution into my acceptance work, and continuing on until I next got stuck.

In an exercise dedicated to demonstrating how TDD works, that's an important property for a problem to have: a rule set with multiple elements that can be introduced into the solution separately.

Of course, that's not the only desirable property; you also need an exercise that can be explained in a single page, one in which you can make satisfactory progress with useful lessons in a single session.  That introduces a pressure to keep things simple.

But there's a trap: if you strip down the problem to a single rule (like the diamond, or Fibonacci), then you lose the motivation for the increments - the number of test cases you introduce before actually solving the problem becomes arbitrary.

On the other hand, if simply returning a hard coded value is good enough to pass all of our acceptance tests, then why are we investing design capital in refactoring the method?  The details clearly aren't all that important to the success of the project yet.

My mental model right now is something along the ideas of a budget; each requirement for a new variation in outputs gives us more budget to actually solve the problem, rather than just half assing it.  For a problem like Fibonacci, teasing out the duplication is easy, so we don't need many iterations of capital before we just solve the damn thing.  Teasing the duplication out of diamond is harder, so we need a commensurately larger budget to justify the investment.

Increments vs Iterations



I found the approaches that used property based testing to be absolutely fascinating.  For quite a while I was convinced that they were wrong; but my current classification of them is that they are alien.

In the incremental approach, we target a specific example which is complete, but not general.  Because the example is complete, it necessarily satisfies all invariants that are common to all possible outputs.  Any acceptance test that only needs the invariant satisfied will be able to get by with this single solution indefinitely.

The property based approach seems to explore the invariant generally, rather than targeting a specific example.  For diamond
  • The output is a square
  • With sides of odd length
  • With the correct length
  • With top-bottom symmetry
  • With left-right symmetry
  • With the correct positions occupied by symbols
  • With the correct positions occupied by the correct symbols
This encourages a different way of investing in the solution - which properties do we need right now?  For instance, if we are working on layout, then getting the dimensions right is urgent (so that we can properly arrange to fit things together), but we may be able to use a placeholder for the contents.
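For instance, a hand rolled check of the left-right symmetry property might look like this sketch (diamond is the function under test; the "generator" is naively exhaustive):

    for (char widest = 'A'; widest <= 'Z'; widest++) {
        for (String row : diamond(widest).split("\n")) {
            String reversed = new StringBuilder(row).reverse().toString();
            assert row.equals(reversed) : "asymmetric row: " + row;
        }
    }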

For the mars rover kata, such an approach might look like paying attention first to the shape of the output; that each row of output describes a position and an orientation, perhaps a rule that there are no collisions in the grid, that all of the positions are located within the input grid, that there is an output row for each input row, and so on.

I don't think property based tests really help in this kata.  But I think they become very interesting if the requirements change.  To offer a naive example, suppose we complete the implementation, and the acceptance tests are passing, and then during the playback the customer recognizes that the requirements were created incorrectly: the correct representation of the diamonds should be rotated 90 degrees.



With a property driven test suite, you get to keep a lot of the tests; you retire the ones that conflict with the corrected requirements, and resume iterating until the new requirements are satisfied.  With the example based approach, the entire test suite gets invalidated in one go -- except for the one trivial example where the changed requirements didn't change the outcome.

Actually, that's kind of interesting on its own.  The trivial example allows you to fully express, by means of aggressively eliminating the duplication, a complete solution to the diamond problem in one go.  But what it doesn't give you is an artifact that documents for your successor that the original orientation of the diamond, rather than the rotated alternative, is the "correct" one.

The more complicated example makes it easier to distinguish which orientation is the correct one, but is more vulnerable to changing requirements.

I don't have an answer, just the realization that I have a question.

Wednesday, October 4, 2017

Value Objects, Events, and Representations

Nick Chamberlain, writing at BuildPlease:
Domain Models are meant to change, so changing the behavior of a Value Object in the Domain Model shouldn’t, then, affect the history of things that have happened in the business before this change.
 Absolutely right.

The Domain Model is mutable, therefore having Events take dependencies on the Domain Model means that Events must become mutable - which they are not.
Arrrg.

As your domain model evolves, you may add new invariants to be checked, or change existing ones on the Value Object that you’re serializing to the Event Store.
Fundamentally, what Chamberlain is suggesting here is that you may want to replace your existing model with another that enforces stricter post conditions.

That's a backwards incompatible change - in semantic versioning, that would call for a major version change.  You want to be really careful about how you manage that, and you want to be certain that you design your solution so that the costs of doing that are in the right place.
If the Domain Model was mutable, we’d also need versioning it - having classes like RoastDate_v2… this doesn’t match the Ubiquitous Language.
Right - so that's not the right way to version the domain model.  The right way is to version the namespace.  The new post condition introduces a new contract, both at the point of change, and also bubbling up the hierarchy as necessary.  New implementations for those new contracts are introduced.
The composition root chooses the new implementations as it wires everything together.
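In outline (RoastDate is borrowed from Chamberlain's example; the invariant here is invented):

    // model.v1.RoastDate keeps satisfying the old contract, and old events
    // continue to load against it.  The stricter post condition lives in a
    // new namespace:
    package model.v2;

    import java.time.LocalDate;

    public final class RoastDate {
        private final LocalDate date;

        public RoastDate(LocalDate date) {
            if (date.isAfter(LocalDate.now()))   // the new, stricter post condition
                throw new IllegalArgumentException("roast date cannot be in the future");
            this.date = date;
        }
    }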

"Events are immutable" is a design constraint, not an excuse to bathwater the baby.

Yes, we may need to evolve our understanding of the domain in the future in ways that are incompatible with the data artifacts that we created in the past.  From this, it follows that we need to be thinking about this as a failure mode; how do we want that failure to manifest? what options can we support for recovery?

Presumably we want the system to fail safely; that probably means that we want a fail on load, an alert to a human operator, and some kind of compensating action that the operator can take to correct the fault and restore service.

For instance, perhaps the right kind of compensating action is correcting entries.  If your history is tracked as an append only collection of immutable events, then the operator will need to append the compensating entries to the live end of the stream.  So your history processing will need to be aware of that requirement.

Another possibility would be to copy the existing history into a new stream, fixing the fault as you go.  This simplifies the processing of the events within the model, but introduces additional complexity in discovering the correct stream to load.

We might also want to give some thought to the impact of a failure; should it clobber all use cases that touch the broken stream?  That maximizes your chance of discovering the problem.  On the other hand, being more explicit about how data is loaded for each use case will allow you to continue to operate outside of the directly impacted areas.

My hunch is that investing early design capital to get recovery right will also ease the constraints on how we represent data within the domain model.  At the boundaries, the events are just bytes; but within the domain model, where we are describing the changes in the business, the interface of events is described in the ubiquitous language, not in primitives.

ps: Versioning in an Event Sourced System (Young, 2017) is an important read when you are thinking about messages that evolve as your requirements change.

Monday, October 2, 2017

A not so simple trick

Pawel Pacana
Event Sourcing is like having two methods when previously there was one.
 Me
Noooooooo
 In fairness, the literature is a mess.  Let's see what we can do about separating out the different ideas.

Data Models

Let's consider a naive trade book as an example; it's responsible for matching sell orders and buy orders when the order prices match.  So the "invariant", such as it is, is that we never are holding unmatched buy orders and sell orders at the same price.

Let's suppose we get three sell orders; two offering to sell 100 units at $200, and between them a third offer to sell 75 units at $201.  At that point in the action, the data model might be represented this way.


The next order wants to buy 150 units at $200, and our matching algorithm goes to work. The resulting position might have this representation.


And after another buy order arrives, we might see a representation like


After each order, we can represent the current state of the trade book as a document.

There is an alternative representation of the trade book; rather than documenting the outcome of the changes, we document the changes themselves. Our original document might be represented this way
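Perhaps (again a reconstruction, with invented event names):

    [
      { "event": "SellOrderPlaced", "quantity": 100, "price": 200 },
      { "event": "SellOrderPlaced", "quantity":  75, "price": 201 },
      { "event": "SellOrderPlaced", "quantity": 100, "price": 200 }
    ]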


Then, when the buy order arrives, you could represent the state this way


But in our imaginary trade book business, matches are important enough that they should be explicit, rather than implicit; so instead, we would more likely see this representation
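Assuming first in, first out matching -- the buy of 150 at $200 consumes the first sell completely and half of the third -- perhaps:

    [
      { "event": "SellOrderPlaced", "quantity": 100, "price": 200 },
      { "event": "SellOrderPlaced", "quantity":  75, "price": 201 },
      { "event": "SellOrderPlaced", "quantity": 100, "price": 200 },
      { "event": "BuyOrderPlaced",  "quantity": 150, "price": 200 },
      { "event": "OrdersMatched",   "quantity": 100, "price": 200 },
      { "event": "OrdersMatched",   "quantity":  50, "price": 200 }
    ]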


And then, after the second buy order arrives, we might see


The two different models both suffice to describe equivalent states of the same entity. There are different trade offs involved, but both approaches provide equivalent answers to the question "what is the state of the trade book right now".

Domain Models

We can wrap either of these representations into a domain model.

In either case, the core interface that the application interacts with is unchanged -- the application doesn't need to know anything about how the underlying state is represented.  It just needs to know how to communicate changes to the model.  Thus, the classes playing the role of the "aggregate root" have the same exposed surface.  It might look like
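(a sketch; the order type is assumed)

    interface TradeBook {
        void placeOrder(Order order);
    }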


The underlying implementation of the trade book entity is effectively the same. Using a document based representation, we would see an outline like
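(a reconstruction; the document type and its place method are assumed)

    class DocumentBackedTradeBook implements TradeBook {
        private TradeBookDocument document;

        @Override
        public void placeOrder(Order order) {
            // compute the next state of the document from the current one
            this.document = this.document.place(order);
        }
    }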


Each time an order is placed, the domain model updates the local copy of the document. We get the same shape if we use the event backed approach...
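(the same reconstruction, event style; match is an assumed helper)

    import java.util.ArrayList;
    import java.util.List;

    class EventBackedTradeBook implements TradeBook {
        private final List<Event> changes = new ArrayList<>();

        @Override
        public void placeOrder(Order order) {
            // compute the new events from the history accumulated so far
            this.changes.addAll(match(this.changes, order));
        }
    }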


Same pattern, same shape. When you introduce the idea of persistence -- repositories, and copying the in memory representation of the data to a durable store, the analogs between the two hold. The document representation fits well with writing the state to a document store, and can of course be used to update a relational model; perhaps with assistance of an ORM. But you could just as easily copy the event "document" into the store, or use the ORM to transform the collection of events into some relational form. It's just data at that point.

There are some advantages to the event representation when copying state to the durable store. Because the events are immutable, you don't need to even evaluate whether the original entries in the list have changed. You don't have to PUT the entire history; you can simply PATCH the durable store with the updates. These are optimizations, but they don't change the core of the patterns in any way.

Projections

Event histories have an important property, thanks to their append only nature -- updates are non-destructive.  You can create from the event history any document representation you like; you only need to have an understanding of how to represent a history of no events, and how each event type in turn causes the document representation to change.
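In outline (a sketch; the types are assumed):

    // A projection is a left fold over the history.
    TradeBookDocument document = TradeBookDocument.empty();   // a history of no events
    for (Event event : history) {
        document = document.apply(event);   // each event type knows its effect
    }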


What we have here is effectively a state machine; you load the start state and then replay each of the state transitions to determine the final state.

This is a natural approach to take when trying to produce "read models", optimized for a particular search pattern.  Load the empty document, replay the available events into it, cache the result, obtain more events, replay those, cache this new result, and so on.  If the output representation is lost or corrupted, just discard it and replay the complete history of the model.

There are three important facets of these projections to pay attention to

First, the motivation for the projections is that they serve queries much more efficiently than trying to work with the event history directly.

Second, that because replaying an entire event history can be time consuming, the ability to resume the projection from a previously cached state is a productivity win.

Third, that a bit of latency in the read use case is typically acceptable, because there is no risk that querying stale data will corrupt the domain.

The Tangle

Most non-trivial domain models require some queries when updating the model.  For instance, when we are processing an order, we need to know which previously unmatched orders were posted with a matching price.  If the domain requires first in first out processing, then you need the earliest unmatched order.

Since projections are much better for query use cases than the raw event stream, the actual implementation of our event backed model probably cheats by first creating a local copy of a suitable projection, and then using that to manage the queries
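Perhaps like this (a sketch; project and match are assumed helpers):

    @Override
    public void placeOrder(Order order) {
        // cheat: fold the history into a queryable document first...
        TradeBookDocument projection = project(this.changes);
        // ...then let the document answer the matching queries.
        this.changes.addAll(projection.match(order));
    }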


That "solves" the problem of trying to use the event history to support queries directly, but it leads directly into the second issue listed above; processing the entire event history on demand is relatively expensive. You'd much prefer to use a cached copy.

And while using a cached copy for a write is fine, using a stale copy for a write is not fine.  The domain model must be working with representations that reflect the entire history when enforcing invariants.  More particularly, if a transaction is going to be consistent, then the events calculated at the conclusion of the business logic must take into account earlier events in the same transaction.  In other words, the projection needs to be continuously updated by the work in progress.

This leads to a design where we are using two coordinated data models to support writes: the event backed representation that will eventually be used to update the durable store, and the document backed representation that is used to support the queries needed to enforce the invariant.  The trade book, in effect, becomes its own cache.



We could, of course, mutate the document directly, rather than projecting the new events into it. But that introduces a risk that the document representation we have now won't match the one we create later when we have only the events to work from. Projecting the events also ensures that any projections we make for supporting reads will have the same state that was used while performing the writes.

To add to the confusion: once the document representation of the model has been rehydrated, the previously committed events don't contribute; they aren't going to be changed, the document supports the queries, updating the event store is only going to append the new information.... Consequently, the existing history gets discarded, and the use case only tracks the new events that have been discovered locally in this update.

Saturday, July 29, 2017

Testing in Threes

One of the reasons that I find immutable tests intriguing as an idea: if you don't change them, you can't break them.

What does it mean for a test to break?

There are two failure modes; a test can fail even though the implementation satisfies the specification that the test is supposed to evaluate, or the test can pass even though the implementation does not satisfy the specification.

If we want to refactor tests safely, then we really need to have checks in place to protect against these failure modes.


My first thought was that we need to keep the old implementation around.  For instance, if we are trying to fix a bug in a released library, then we write a new test, verify that the new test fails when bound to the broken implementation, then bind the test to the current implementation, do the simplest thing that could work, and so on.

Kind of heavy, but we could make that work.  I don't think it holds up very well for the ephemeral versions of the code that live between releases.

What we really want are additional checks that are part of the specification of the test.  Turtles all the way down!  Except that we don't need to recurse very far, because the additional tests never need to be complicated.  Throughout their lifetime, they are "so simple that there are obviously no deficiencies."

Today's insight is that we are already creating those checks.  Red Green Refactor is a recipe for implicitly creating, in order
  1. The simplest thing that could possibly break
  2. The simplest thing that could possibly work
  3. The production thing.
So at the end, we have all three of these; but because they were the same mutable entity, the intermediate stages are no longer ready at hand.

I haven't finished untangling this snarl yet.  My best guess is that it is leading toward the idea that test driving the implementation is a spike for the specification, and that we later come back and make that robust.

Friday, July 14, 2017

On HTTP Status Codes

Originally written in response to a question on Stack Overflow; the community seemed to think the question wasn't appropriate for the site.

Overview of status codes


I'm designing a RESTful API and I have an endpoint responsible for product purchase. What HTTP status code should I return when user's balance is not enough to purchase the specified product (i.e. insufficient funds)?

The most important consideration is that you recognize who the audience of the status-code is: the participants in the document transport application.  In traditional web apis (which is to say, web sites), the audience would be the browser, and any intermediaries along the way.

For example, RFC 7231 uses status codes as a way to resolve implicit caching semantics
Responses with status codes that are defined as cacheable by default (e.g., 200, 203, 204, 206, 300, 301, 404, 405, 410, 414, and 501 in this specification) can be reused by a cache with heuristic expiration unless otherwise indicated by the method definition or explicit cache controls [RFC7234]; all other status codes are not cacheable by default.

If you think of the API consumer (aka the human being) and the API client (aka the web browser) as separate: then the semantics of the status codes are directed toward the API _client_.  This is what tells the client that it can just follow a redirect (the various 3xx headers), that it can simply reset the previous view (205), that it should throw up a dialog asking that the consumer identify herself (401) and so on.

The information for the consumer is embedded in the message-body.

402 Payment Required

402 Payment Required, alas, is reserved.  Which is a way of saying that it doesn't have a standard meaning.  So you can't deliver a 402 in the expectation that the API client will be able to do something clever -- it's probably just going to fall back to the 4xx behavior, as described by RFC 7231
a client MUST understand the class of any status code, as indicated by the first digit, and treat an unrecognized status code as being equivalent to the x00 status code of that class, with the exception that a recipient MUST NOT cache a response with an unrecognized status code.

I wouldn't bet hard on 402; it was also reserved in RFC 2616, and there's a big gap in RFC 1945 where it should have been.

My guess would be that a 402 specification would be analogous to the requirements for 401, with additional standard headers being required to inform the client of payment options.

But we don't know what headers those would be; Taler's approach, for instance, was to stick in a custom header.  If you control the client, wiring in your own understanding of what 402 might someday be could be a reasonable option.

Protocol alternatives


Another option of good pedigree is to consider that collecting a payment is just another step in the integration protocol.

So, from that perspective, it's perfectly reasonable to say that the request was processed successfully, but the returned representation, rather than providing a link to the cake, provides a link to the billing system.

This is the approach described by Jim Webber when he talks about RESTBucks.  Needing to make a payment is a perfectly normal thing to do in a purchasing protocol, so there's no need to throw an exception when money is due.  Thus, 2xx Success is still a reasonable choice:
The 2xx (Successful) class of status code indicates that the client's request was successfully received, understood, and accepted.
So the _client_ knows that everything went well; and the consumer needs to review the semantics of the message in the message-body to proceed toward her goal.  This is how hypermedia is intended to work -- the current application state is described by the message.
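For example (an invented representation; neither the body shape nor the link relation is standardized):

    HTTP/1.1 200 OK
    Content-Type: application/json

    {
      "status": "payment due",
      "links": {
        "payment": { "href": "https://shop.example.com/billing/12345" }
      }
    }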

Protocol violations

Now, if instead of proceeding to the payment system as directed, the consumer tries to skip past the purchasing system onto the good bits; that's not so much part of the protocol, so you needn't feel compelled to continue to provide a good experience.  400 Bad Request or 403 Forbidden are your go to choices here.

412 Precondition Failed is just wrong; it means that the preconditions provided in the request headers were not met when the server processed the request.  Unless you've got the client providing some extra headers, it's not a fit.
409 Conflict... I believe that one is wrong, but it's less clear.  From what I can see in the literature, 409 is primarily a remote editing response -- we tried to apply some change to a resource, but our edit lost some sort of conflict battle with other changes in the system.  For instance, Amazon associates that status-code with BucketAlreadyExists; the problem with the request to create a bucket with that name is that the name has already been taken (and it is a client error, because the client didn't check first).

Sunday, July 9, 2017

Observations on Repositories

During a long brain storming session, I finally had an important breakthrough on the role of repositories in Domain Driven Design.

In short, the repository is a seam between the application component (acting as the client) and the domain component (acting as the provider).  Persistence concerns and the business logic live within the implementation of the repository.

In the literature, I usually find examples where there is just a single implementation of "the aggregate" that is visible everywhere.  But if we think in terms of evolving the model -- in particular, of being able to replace the model with an improved version easily -- then we need to be thinking in terms of interfaces and service providers.

When Evans was first writing of aggregates, the lines between read and write were somewhat blurred; it wasn't unreasonable to expect that your repository could read state out of the aggregate interface.  With the introduction of CQRS, things get more complicated.  If the use case only calls for the application to modify some aggregate, then the interface that represents that aggregate should only have commands in it that are specific to that case.  In other words, the interface provided to the application doesn't need to have any affordances for reading the current state -- it can be tightly tailored to the specific use case.

Sidebar: this is what ensures that we end up with a "rich" domain model; because the application can't get at the state, it has no recourse except to invoke the provided command method and allow the aggregate to implement the change as it chooses.  The query/calculate/update protocol doesn't work if no queries are accessible.

For the repository to save the state of the object, it needs access to the state captured within it.  Which means that the repository needs more intimate familiarity with the aggregate than what was shared with the application.  We can achieve that in a strong typing system, using generics.

The application no longer knows the exact type of the TradeBook it has retrieved; however, the compiler can verify that the argument passed to the repository matches the implementation that was retrieved from the repository.
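A sketch of the idea (all names invented):

    interface TradeBookRepository<T extends TradeBook> {
        T get(TradeBookId id);
        void save(T book);
    }

    // The application code is generic in T: it can hand the book back to
    // save(), but it never learns which implementation it was holding.
    static <T extends TradeBook> void placeOrder(TradeBookRepository<T> repository,
                                                 TradeBookId id, Order order) {
        T book = repository.get(id);
        book.placeOrder(order);
        repository.save(book);
    }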

All of the domain logic -- what changes when we place an order, how is that change represented in memory, how is that change durably stored -- all of those decisions live within the model, somewhere behind the repository interface.  The repository understands how this model represents all of its data because the repository is of the model.  When we swap out the domain model, the repository is exchanged as well.

Most importantly, the question of whether current state is represented as a collection of events, or as an aggregate document, that decision is answered within the domain model, behind the repository facade.

Expressed another way, the composition root will wire up a persistent store, and then inject that store into a domain model that understands the store, and then will wire up the data model and its repositories with the application (as opposed to wiring the application to the persistence store and the domain model independently).


Friday, July 7, 2017

Demonstration: REST is spelling agnostic

An illustration of the power of REST.

I can search google for google
https://www.google.com/search?&q=gle&sourceid=firefox#q=google
To the surprise of absolutely no one, the top hit is google
https://www.google.com
If I click on, or copy, the link, it takes me to something like
https://www.google.com/url?sa=t&rct=j&q=&esrc=s&source=web&cd=1&cad=rja&uact=8&ved=0ahUKEwjr_4O62ffUAhXLXD4KHSi8BtMQFggkMAA&url=https%3A%2F%2Fwww.google.com%2F&usg=AFQjCNFePWT_Lkni-D9ikX7wC3eYuDMQYQ
Which in turn _redirects_ me to
https://www.google.com/
And my HTTP client, which knows absolutely nothing about Google, manages just fine.  Google can change their URI space any way they like, and the client just follows its nose.

My carbonware HTTP agent doesn't notice, because it's looking at the links, and the semantic cues, not at the spelling of the underlying identifiers.

As far as the client and the agent are concerned, all of those URIs are opaque.  The only thing that we can do with them is use them for cache lookups.  The meaning of anything that happens to be encoded in those sequences of bytes is private to the server.

https://www.google.com/search?&q=gle&sourceid=firefox#q=google
This one isn't quite opaque; this URI was constructed by the HTTP client from the data in the submitted form; the pair q=google is a representation of the data entered into the form by the agent, and the rest were provided by the server in its representation of the form.

The client and agent have a common understanding of form as a thing of images and UI affordances; the client and the server share a different understanding of form, derived from their common understanding of the HTML media type.

The agent and the server have a common understanding of semantics -- I understand the form from the labels; the client knows how to render the labels, and what to render in them (from parsing the HTML) but the client has no understanding that those labels _mean_ anything.

And it all "just works".

TDD and Immutable Tests

I was working through the bowling game kata again recently, and discovered that there are three distinct moves that we make when playing.  In any given move, we should see either a compile error or a test failure, but never both.

Legal TDD Moves

Refactoring

The most commonly discussed move is refactoring; we change the implementation of the production code, rerun the tests to ensure that the change hasn't broken any tests, and then commit/revert the change that we have made based on the outcome of the test.

The important property here is that refactoring has a pre-condition that all of the tests are passing, and a post condition that all of the tests are passing.  In other words, the bar is always GREEN when you enter or leave the protocol.

It's during the refactoring move that the production code tends to evolve toward the more generic.

Specifying

New behaviors are introduced by specifying.  What we are doing in this phase is documenting additional constraints on the outcome (by creating a new test), and then hacking the production code to pass this new test in addition to all of the others.

Make the test work quickly, committing whatever sins are necessary in the process.
 In this protocol, we start from a GREEN state, and proceed from there to RED and then to GREEN.  This has the same cadence as the usual "[RED, GREEN], REFACTOR" mantra, but there's an additional restriction -- the new test we introduce is restricted to using the existing public API of the system under test.

Extending

New interfaces are introduced by extending.  During this phase, no new constraints (no asserts) on behavior are introduced, only new affordances for working with the production code.  The key idea in extending is this: you get from the RED bar to the GREEN bar by auto generating code (by hand, if necessary).

Because of these restricted rules, we see only compile errors in this move, no runtime errors; there can't be any runtime errors, because (a) the new interface is not yet constrained by specifications and (b) the code required to reach the new interface already satisfies its own specification.

Extension is almost purely a discovery exercise -- just sit down and write code to the API you wish that you had, then pass the test with automatically generated code.

Immutable Tests

My discovery of extending came about because I was trying to find a protocol that would support the notion of an immutable test.  I've never been particularly comfortable with "refactoring tests", and Greg Young eventually convinced me to just go with it.

The high level argument goes something like this: the tests we write are messages to future developers about the intended behavior of the production code.  As such, the messages should have semantics that can be used in a consistent way over time.  So the versioning guidelines apply.

So "refactoring" a test really looks like adding a new test that specifies the same behavior as the old test, checking that both tests are passing, committing, and then in a separate step removing the unwanted, redundant check.

Note that removing the accumulated cruft is optional; duplicate legacy tests are only a problem in two circumstances, analogous to the two phases that created them above.  If you've learned that a specification is wrong, then you simply elide the tests that enforce that specification, and replace them with new tests that specify the correct behaviors. Alternatively, you've decided that the API itself is wrong -- which is a Great Big Deal in terms of backwards compatibility.

Experiences and Observations

Bootstrapping a new module begins with Extension.  Uncle Bob's preference seems to be discovering the API in small steps, implementing enough of the API to pass each time a compile error appears.  And that's fine, but in an immutable test world it produces a lot of cruft.

There are two reasonable answers; one is to follow the same habits, creating a _new_ test for each change to the API, and then remove the duplication in the test code later.  It feels kind of dumb, to be honest, but not much more dumb than stopping after each compiler error.  Personally, I'm OK with the idea of writing a single long test that demonstrates the API, then adding afterwards a bunch of checks to specify the behavior.

I'm less happy about trying to guess the API in advance, however.  Especially if we haven't done a spike first -- we're just guessing at the names of things, and the wrong guess means a bunch of test redaction later.

At this point, I'm a big fan of hiding design decisions, so I would rather be conservative about the API.  This means I tend to think about proceeding from supporting specific use cases to supporting a general API.

More precisely, unit tests are dominated by a single shape:
Given this function
When I provide this input
Then I expect the output to satisfy these constraints.
So to my mind, the seed that is in the middle of the first extension is a function.  Because we are operating at the boundary, the input and the constraints will typically be represented using primitives.  I don't want those primitives leaking into my API until I've made a specific choice to lift them there.  So rather than extending a specific type into the test, I'll drop in a function that describes the capability we are promising.  For the bowling game, that first test would look something like
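(a sketch, not the original gist)

    @Test
    public void score() {
        Bowling.score(new int[20]);   // twenty rolls, no constraint on the output yet
    }

    // ...and the extension passes with generated code:
    public static int score(int[] rolls) {
        return 0;
    }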


Simplest thing that could possibly work.  If this test passes, I've got code I can ship, that does exactly what it says on the tin.  Then I add a second test, that checks the result.  Then I can engage the design dynamo, and perhaps discover other abstractions that I want to lift into the API.

The same trick works when I have a requirement for a new behavior, but no clear path to get there with the existing API; just create a new function, hard code in the answer, and then start experimenting with ideas within the module boundary until the required interface becomes clear.  Behind the module, you can duplicate existing implementations and hack in the necessary changes under the green bar, and then run the dynamo from there.

And if the deadline comes crashing down on you before the refactoring is done?  SHIP IT.  All of the hacks are behind the module, behind the API, so you can clean up the code later without breaking the API.

Naming gets harder with immutable tests, because the extension needs a name separate from the specification, and you need two test names to refactor, and so on.

In the long run, extensions are just demonstrations -- here's an example of how the API can be invoked.  It's something that you can cut and paste into production code.  They are perhaps more of a thought construct than an actual artifact that should appear in the code base.

When using functions, the extension check becomes redundant as soon as you introduce the first constraint on its output.

There are some interesting similarities with TDD as if you meant it.  In my mind, green bar means you can _ship_, so putting implementation code in the test method does not appeal at all.  But if you apply those refactoring disciplines behind the module, being deliberately stingy about what you lift into the API, I think you've got something very interesting.

It shouldn't ever be necessary to lift an implementation into the API, other than the module itself.  Interfaces only.


Eventually, the interesting parts of the module get surfaced as an abstraction itself, so that you can apply the same checks to a family of implementations.

Friday, June 23, 2017

The Capability Economy

Pankowecki's essay Tracking dead code in Rails with Metrics kicked loose an idea that I've been meaning to write down.

We have programs that rely upon other programs; this service needs some capability provided by that service.

Riddle: what if we had a currency to express this relationship.

Part one: when A invokes a capability in B, accompanying that message is a bit of fictional currency to grease the wheels as it were.  B starts tracking and reporting its income, both in the aggregate, and on a feature by feature basis.

Part two: if A expects to need some capability in B, then that capability can be reserved.  The simplified version: the healthcheck that A uses to ensure that the capabilities it needs are available also includes a bit of grease, which can be tracked.

Rather than just flags that detect use and reservations, the fictional currency becomes a trackable representation of the business value flowing through the system.

Services can start publishing their own pricing - a landing page of capabilities advertises the associated transaction fees, which can scale up and down depending on demand.

In the large, your systems become an enormous bazaar of implementations competing for bids and haggling with one another.





Saturday, March 18, 2017

RPC vs REST

Inspired by Why should I prefer HTTP REST over HTTP RPC JSON-Messaging style in conjunction with CQRS?

As with most REST questions, the easy answers can be found by looking at the web through a browser.

In RPC, the client creates an application/x-www-form-urlencoded document and sends it to the server.
In REST, the client completes a form and submits it.

In both cases, the client-stateless-server architectural constraint still applies; the server has no way of knowing anything about the nature of the client.  The message looks the same regardless of whether the client walks through the guided path or skips to the end.

In 2008, Roy T. Fielding made the following observation of a solution he was outlining.
I should also note that the above is not yet fully RESTful, at least how I use the term. All I have done is described the service interfaces, which is no more than any RPC. In order to make it RESTful, I would need to add hypertext to introduce and define the service, describe how to perform the mapping using forms and/or link templates, and provide code to combine the visualizations in useful ways.
My translation: having identified the endpoint of the protocol, you work toward REST by relaxing the understanding of the endpoint required by the client.

The RPC approach to a form submission requires that the client know in advance
  1. The location of the resource that accepts the request
  2. The appropriate HTTP method to use
  3. The correct spellings of the keys in the key/value pairs
  4. The semantic meaning of the keys/value pairs
In the REST approach, this information is delivered on demand, in the hypermedia.  So the location, the method, the keys are all covered.  The client just loads the form, uses semantic cues to recognize which fields to change, and then submits the form.
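For example, this (invented) form carries the location, the method, and the spelling of the keys; only the semantics of the label is left for the agent:

    <form action="/search" method="get">
      <label for="q">Search</label>
      <input type="text" name="q" id="q" value=""/>
      <input type="submit" value="Go"/>
    </form>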

Of course, this also means that the client needs to understand forms and how to interpret them.  That's not free, but it shifts the problem from knowing in advance about the specific service to knowing in advance about a generic media type, and the conventions for semantic cues.

The riddle of finding the form gets pushed up to the bookmarked representation.  The client still needs a starting point, which is the bookmark.  It loads a representation of the bookmark provided by the server, and uses the available semantic cues to find a link to the required form.

The server can direct the client to a new representation of the form by changing the representation of the bookmark.  The new form representation can include additional fields with new semantic cues; the client, knowing only about the original semantics, simply ignores these fields when filling out the form; which in turn means that the values submitted will be those provided by the server in the form representation itself.

This isn't free, by any stretch -- we're buying the decoupling of the client and server by doing more upfront api design.
REST is intended for long-lived network-based applications that span multiple organizations. If you don’t see a need for the constraints, then don’t use them.
Stable APIs require a design investment.  The stability constraint, however, is optional; you should reject that constraint when the benefits are insufficient to offset the costs.

Thursday, March 2, 2017

DDD Repository Interfaces

Composed in response to Vladimir Khorikov.

One issue is that the above interface doesn’t constitute an actual abstraction. It just duplicates the concrete class’s functionality. The Principle of Reused Abstractions tells us that, in order for an interface to become one, it needs to have more than one implementation.

If we reboot the Wayback Machine, and take a look at the description provided by Eric Evans, we see at once the existence of other implementations.

Another transition that exposes technical complexity that can swamp the domain design is the transition to and from storage. This transition is the responsibility of another domain design construct, the REPOSITORY

The idea is to hide all the inner workings from the client, so that client code will be the same whether the data is stored in an object database, stored in a relational database, or simply held in memory.
I would certainly expect to see an in memory implementation, used by tests that protect me from errors in refactoring -- I'm going to burn the world as soon as the test passes anyway, so neither writing nor running integration boilerplate adds value.
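A minimal sketch of that in memory implementation (names invented):

    import java.util.HashMap;
    import java.util.Map;

    class InMemoryTradeBookRepository implements TradeBookRepository {
        private final Map<TradeBookId, TradeBook> books = new HashMap<>();

        @Override
        public TradeBook get(TradeBookId id) { return books.get(id); }

        @Override
        public void save(TradeBook book) { books.put(book.id(), book); }
    }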

But Vladimir raises an interesting point
Note that neither integration tests, nor unit tests would require seams that “abstract” the database out from the rest of the code. Unit tests just don’t involve anything other than isolated domain logic. Integration tests verify the database directly as part of the bounded context.
I love that -- it really shows that he has dug deeper into the question, to really think about the principles involved and whether or not they fit.
An application database (a database fully devoted to a single bounded context) is one of such systems. It belongs to your application only and not shared with anyone else.


An application with multiple writers is sharing. Your isolated domain logic doesn't share anything, so you can't check the behavior of conflicting writes that way. Trying to introduce conflicts, in all the paths that you need, during integration testing threatens many nightmares because of the combinatorial explosion of possibilities. If you are going to be refactoring your contingency pathways, you need a seam that discounts the overhead of checking the error to the point that you will actually pay the price. That requires a seam somewhere between the command handler and the process boundary, and the price drops as you get closer to the handler.

In addition, that seam is a natural place to introduce an in-memory cache; why reload an object from the book of record when the copy that you saved is still available? Why treat that optimization as an all-or-nothing affair when each composition root could be making its own choice on a case-by-case basis?
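Reusing the invented OrderRepository from the sketch above (and its imports), such a cache is just a decorator, and each composition root can decide for itself whether to apply it:

    // Keeps the saved copy at hand, so we don't reload from the book of record.
    class CachingOrderRepository implements OrderRepository {
        private final OrderRepository bookOfRecord;
        private final Map<OrderId, Order> cache = new HashMap<>();

        CachingOrderRepository(OrderRepository bookOfRecord) {
            this.bookOfRecord = bookOfRecord;
        }

        public Optional<Order> findById(OrderId id) {
            if (cache.containsKey(id)) {
                return Optional.of(cache.get(id));
            }
            Optional<Order> loaded = bookOfRecord.findById(id);
            loaded.ifPresent(order -> cache.put(id, order));
            return loaded;
        }

        public void save(Order order) {
            bookOfRecord.save(order);
            cache.put(order.id(), order);  // the saved copy stays available
        }
    }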

Vladimir is absolutely right that the repository (as written here) doesn't really align properly with boundaries. That thought is worth exploring in more detail.

Tuesday, February 28, 2017

Shedding

I haven't seen their API, but by all accounts it is terrible.
However, I have seen the bike shed they painted, and _it_ is terrific.

Sunday, February 19, 2017

TDD Probability Theory

I recently recommended that a friend review the probability kata, which got me back into it again.

The probability kata was published by Greg Young in June 2011.  The core of the problem statement was:

Write a probability value object. It should contain the following methods:
  • Probability CombinedWith(Probability)
  • Probability Either(Probability)
  • Probability InverseOf()
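Transliterated into a Java skeleton (spoiler-free -- the bodies are deliberately left as the exercise):

    public final class Probability {
        private final double value;

        public Probability(double value) {
            this.value = value;
        }

        // The kata is about test-driving these; no spoilers here.
        public Probability combinedWith(Probability other) {
            throw new UnsupportedOperationException("left as the exercise");
        }

        public Probability either(Probability other) {
            throw new UnsupportedOperationException("left as the exercise");
        }

        public Probability inverseOf() {
            throw new UnsupportedOperationException("left as the exercise");
        }
    }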


Well, that certainly seems straightforward enough....

WARNING: Spoilers ahead

Sunday, February 5, 2017

Aggregates and RFC 2119.


The language used to describe the relationship between aggregates and commands is a confusing one.

The usual language is that the aggregate protects the business invariant -- one can reasonably read such a thing, and come away with the idea that aggregates are going to be throwing some flavor of DomainException when a command would break the rules.

Fortunately, we have experts trying to offer guidance.
Commands should not fail in collaborative domain
I struggled with understanding this for quite a while, because it sounded to me as though he was talking about checking that the command is valid before dispatching it to the domain.

The language that turned my brain around was to consider that the aggregate is responsible for restoring the business invariant.

Udi Dahan, in Race Conditions Don't Exist, observed:
A microsecond difference in timing shouldn’t make a difference to core business behaviors. 
Reordering two commands should not change the behavior that you observe.  In particular, you shouldn't be rejecting a command that you would accept if the ordering were different.

This turned up recently in a calendar domain, where the aggregate is responsible for ensuring that events don't overlap.  So let us imagine an attempt to schedule two conflicting events, A and B.  In a naive implementation, the order used to process the commands will determine how the conflict is resolved.



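For instance, a naive implementation might simply reject the loser outright.  A sketch in Java, with all of the names invented and times flattened to integers for brevity:

    import java.util.ArrayList;
    import java.util.List;

    record Event(String name, int start, int end) {
        boolean overlaps(Event other) {
            return start < other.end() && other.start() < end;
        }
    }

    class DomainException extends RuntimeException {
        DomainException(String message) { super(message); }
    }

    // Whichever command arrives second is refused.
    class Schedule {
        private final List<Event> events = new ArrayList<>();

        void schedule(Event candidate) {
            for (Event scheduled : events) {
                if (scheduled.overlaps(candidate)) {
                    throw new DomainException(candidate.name() + " conflicts with " + scheduled.name());
                }
            }
            events.add(candidate);
        }
    }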
This is an interpretation of the invariant that is consistent with MUST or MUST NOT from RFC 2119.  And that's fine, if that's what the business really needs.  But it's not the only way to interpret a business invariant.


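An alternative sketch, reusing the Event type and imports from above, accepts the command and records the conflict instead:

    record Conflict(Event first, Event second) {}

    // The command does not fail; the conflict becomes part of the model.
    class ConflictAwareSchedule {
        private final List<Event> events = new ArrayList<>();
        private final List<Conflict> conflicts = new ArrayList<>();

        void schedule(Event candidate) {
            for (Event scheduled : events) {
                if (scheduled.overlaps(candidate)) {
                    conflicts.add(new Conflict(scheduled, candidate));
                }
            }
            events.add(candidate);  // accepted either way
        }

        List<Conflict> unresolvedConflicts() {
            return List.copyOf(conflicts);
        }
    }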
This model of the domain is analogous to SHOULD or SHOULD NOT. We don't want conflicts to happen, but if we do the aggregate has the responsibility of tracking the conflict and whether or not it has been resolved. In a sense, the responsibility of the aggregate is to detect and track conflicts -- maintaining the schedule is a side effect.

To reduce the rate of conflicts, the caller is expected to make a good faith effort to ensure that there are no conflicts before dispatching the command.  After all, if the aggregate is tracking events and conflicts, then the caller can check for both before dispatching the command.  When the data that the caller is working from is stale, that judgment may be off, but the aggregate has a fail safe available to cover that contingency.

A similar example might occur in banking, where instead of rejecting a transaction, the bank instead invokes an overdraft contingency.

If your requirements include a MUST NOT, then push back and check: what is the cost to the business of allowing the behavior to occur?  Especially in businesses that run on human input, the processes that run the business are designed to mitigate these kinds of problems.  We can do the same thing in code.

Because SHOULD NOT is going to have much nicer scaling properties.

Thursday, January 26, 2017

A RESTful Kitchen Oven

Some time back, I chatted with Asbjørn Ulsberg about an example of a REST api using an oven.  Shortly thereafter, he presented his conclusions at the Nordic APIs 2016 Platform Summit. [slides]
What follows is my own work, but clearly I was influenced by his point of view.

In my kitchen, there is a free-standing gas range.  On the inside, it's got various marvels that we no longer think about much: an ignition system, a thermostat, safety valves, etc.

But as a home cook, I really don't need to worry about those details at all.  I work with the touch pad at the top of the unit.  Bake, plus-plus-plus-plus, Start, and I get a beep and a display message letting me know that the oven is preheating.  Some time after that, another beep lets me know that the oven has reached the target temperature, and on good days the actual temperature stays near that target until the oven is shut off.

Let's explore, for a time, what it would look like to control the oven from a web browser.


For our first cut, we can look at an imperative approach.

HTTP/1.0 gave us everything we need.  GET allows us to retrieve the information identified by the request uri, and POST allows us to provide a block of data to a data handling process.

In a browser, it might look like this: we load the bookmark URI.  That would get some information resource -- perhaps a menu of the services available, perhaps a representation of the current state of the stove, maybe both.  One of the links would include a relation (as well as human readable cues) that communicates that this is the link to follow to set the oven temperature.  Following that link would GET another resource that is a web form; in this case it would just be an annotated field for temperature (set with the default value of 350F), semantic cues, and a submit button.  Submitting the form would post the contents to the server, which would in turn interpret the submitted document as instructions to present to the oven.  In other words, the web server reads the submitted data, and pushes the control buttons for us.

Having done that, the server would return a 200 status to us, with a representation of the action it just took, and links from there to perhaps the status page of the oven, so that we can read updates to know if the oven has reached temperature.  In a simple interface like the one on my stove, the updates will only announce that the oven is preheating.  A richer interface might include the current temperature, an estimate of the expected wait time, and so on.

Great, that gets us a website that allows a human being to control the stove from a web browser, but how do we turn that into a real API?  Answer: that is a real API.  Anybody can grab their favorite http client and their favorite html parser, write an agent that understands the link relations and form annotations (published as part of this API).  The agent uses the client to load the bookmark url, loads the result into the parser, searches the resulting DOM for the elements it needs, submits the form, and so on.  Furthermore, the agent can talk to any stove at all.

And -- bonus trick: if you want to test the agent, but don't have a stove handy, you can just point it at any web server with test cases represented as graphs of html documents.  After all, the only URI that the agent actually knows anything about is the start point.  From that point on, it's just following links.

That's a nice demonstration of hypermedia, and the flexibility that comes from using the uniform interface.  But it misses two key lessons, so let's try again.

This time, we'll go with a declarative approach.  We need another verb, PUT, defined in the HTTP/1.1 spec (although it had appeared in the quarantine of the earlier appendix D).  PUT's early definition got right to the heart of it:
The PUT method requests that the enclosed entity be stored under the supplied Request-URI.
What the heck does that mean for an oven?


To the oven, it means nothing -- but we aren't implementing an oven, we're implementing a web api for an oven.  To be a web-api for something interesting means to express the access to that bit of interesting as though it were a document store.

In other words, the real goal here is that we should be able to control the stove from any HTTP aware document editor.

Here's how that might work.  We start from the bookmark page as we did before.  This includes a hypermedia link to a representation of the current temperature settings of the oven.  For instance, that representation might be a JSON document which includes a temperature field somewhere in the schema.  So the editor can GET the current state represented as a JSON document.  (Important note: this representation does NOT include hyperlinks -- we're not going to allow the document editor to modify those.  Instead, transitions away from the editable representation of the document are described in the Link header field.)

We load the document of the current state into our editor, which then follows the "self" relation with an OPTIONS call to discover if PUT is supported.  Discovering that this is so, the editor enables the editing controls.  The human operator can now replace the representation of the current temperature of the oven with a representation of the desired temperature, and click save.  The document editor does a simple PUT of the saved representation to the web server.

PUT is analogous to replace, in this use case.  In the case of a document store, we would simply overwrite the existing copy of the document.  We need to give some thought to what it means for an oven to mimic that behavior.

One possibility is that the resource in question represents, not the current *state* of the oven, but the current *program*.  So when we initially connect to the oven, the state of the program would be "do nothing", and we would replace that with a representation of the program "set the temperature to 300 degrees", and the API would "store" that program by actually commanding the oven to heat to the target temperature.  PUT of a new program actually matches very well with the semantics of my oven, which treats entering the desired state as an idempotent operation.
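A sketch of what the document editor's save action reduces to, using Java's built-in HTTP client; the URI and the representation here are invented for the example:

    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;

    public class OvenProgramClient {
        public static void main(String[] args) throws Exception {
            // Replace the oven's current program with a new one.
            HttpRequest put = HttpRequest.newBuilder(URI.create("https://example.com/oven/program"))
                    .header("Content-Type", "application/json")
                    .PUT(HttpRequest.BodyPublishers.ofString("{\"temperature\": 300}"))
                    .build();

            HttpResponse<String> response = HttpClient.newHttpClient()
                    .send(put, HttpResponse.BodyHandlers.ofString());

            // Sending the same program twice leaves the oven in the same
            // state: PUT, like the keypad, is idempotent.
            System.out.println(response.statusCode());
        }
    }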

A separate, read-only, resource would be used to monitor the current state of the oven.

In this idiom, "turning off the oven" has a natural analog for our document editor -- deleting the document.  The editor can just as easily determine that DELETE is supported as PUT, and enable the appropriate controls for the user.

If we don't like the "update the program" approach, we can work with the current state document directly.  We enable PUT on the resource for the editable copy of the current oven state, enabling the edit controls in the document editor.  The agent can describe the document that they want, and submit the result.

The tricky bit, and the key to the declarative approach: the API needs to compare the current state with the target state, and determine for itself what commands need to be sent to the oven to bring about that change.  Just as the keypad insulates us from the details of the burners and valves, so too does the API insulate the consumers from the actual details of interacting with the oven.

Reality ensues: the latency for bringing the oven up to temperature isn't really suitable for an HTTP response SLA.  So we need to apply a bit of lateral thinking; the API reports to the document store that the proposed edit has been Accepted (202).  It's not committal, but it is standard.  The response would likely include a link to the status monitor, which the client could load to see that the oven was preheating.

Once again, automating this is easy -- you teach your agent how to speak oven (the standard link relations, the media types that describe the states and the programs).  You use any HTTP-aware document editor to publish the agent's changes to the server.  You test by directing the agent to a document store full of fake oven responses.
 
Do we need two versions of the API now?  One for an imperative approach, another for the declarative approach?  Not at all -- the key is the bookmark URL.  We require that the agents be forgiving about links they do not recognize -- those can simply be ignored.  So on the bookmark page, we have a link relation that says "this way to the imperative setOvenTemperature interface", and another that says "this way to the declarative setOvenTemperature interface", and those two links can sit next to each other on the page.  Clients can follow whichever link they recognize.

Document editing -- especially for small documents -- is reasonably straightforward in HTML as well; the basic idea is that we have a resource which renders the current representation of our document inside a text area in a form, which POSTs the modified version of the document to a resource that interprets it as a replacement state or a replacement program as before.

You can reasonably take two different approaches to enabling this protocol from the bookmark page.  One approach would be to add a third link (inventing a new link relation).  Another would be to use content negotiation -- the endpoint of the setOvenTemperature interface checks to see whether the Accept header asks for HTML (in which case the client is redirected to the entry point of that protocol, and otherwise directed back to the previously implemented PUT-based flow).

Using the HTML declarative flow also raises another interesting point about media types.  Text areas aren't very smart; they support free-form text, so you end up relying on the operator not making any data entry errors.  With a standardized media type, and a document editor that is schema aware, the editor can help the operator do the right thing.  For instance, the document editor may be able to assist the agent with navigation hints and a document object model, allowing the agent to seek directly to the document elements relevant to its goal.

Edit: go figure -- having written up my thoughts, I went back to look at Asbjørn Ulsberg's slides and realized that we had originally been talking about toasters, not ovens.





On Anemic Domain Models


The data model has the responsibility to store data, whereas the domain model has the responsibility to handle business logic. An application can be considered CRUD only if there is no business logic around the data model. Even in this (rare) case, your data model is not your domain model. It just means that, as no business logics is involved, we don’t need any abstraction to manage it, and thus we have no domain model.
 Ouarzy, It Is Not Your Domain.

I really like this observation, in that it gives me some language to distinguish CRUD implementations, where we don't have any invariant to enforce, and anemic domains, where there is an invariant, but we don't keep the invariant and the state changing behavior in the same logical unit.

We probably need more variations here:
  • There is no invariant (CRUD)
  • There is an invariant, but it is maintained by human beings -- the invariant is not present in the code
  • There is an invariant in the code, but the code is arranged such that modification of state can happen without maintaining the invariant (anemic)
  • There is an invariant in the code, and the code is arranged so that any change of state is checked against the invariant.
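A minimal Java sketch of the difference between those last two variations, using an invented overdraft rule:

    // Anemic: the state can be modified without consulting the invariant.
    class AnemicAccount {
        public long balance;  // anyone may write any value
    }

    // Not anemic: every change of state is checked against the invariant.
    class GuardedAccount {
        private long balance;

        void withdraw(long amount) {
            if (amount > balance) {
                throw new IllegalStateException("withdrawal would break the invariant");
            }
            balance -= amount;
        }
    }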
Having some anemic code paths isn't wrong -- it's a desirable property for the software to be able to get out of the way so that the operator in the present can veto the judgment of the implementer in the past. 

But if the software is any good, bypassing the domain model should be unusual enough that requiring additional steps to disable the safeties is acceptable.

Thursday, January 19, 2017

Whatever it takes

Whatever it takes is a large cash bonus, paid up front.

A RESTful supply closet

At Stack Overflow, another question was submitted about REST API design for non-CRUD actions.  In thinking about it, I found an analogy that may help explain the point.

Imagine a supply closet; an actual physical closet in the real world.  The stock includes boxes of pencils.  What does a web API for the supply closet look like?

As Jim Webber pointed out years ago, HTTP is a document management application.  So the first thing to realize is that we are trying to create an interface that supports the illusion that the supply closet is a document store.  If we want to know about the current state of the closet, we read the latest report out of the store.  To change the state of the closet, we propose changes to the document store.

How do we convert the current state of the closet to a document?  In the real world (think 1950s office), we would ask the quartermaster for the latest inventory document.  If a recent one is available, the quartermaster gives us a copy of that document.  Otherwise, he can look in the closet, count the boxes, and send us a copy of a fresh report.

That, fundamentally, is GET.  We ask the API for a copy of the inventory report.  Maybe the API just copies the report that's posted on the closet door, maybe the API goes inside the closet to count everything, maybe the closet isn't accessible, so we just get the last report the API saw.  Doesn't matter, we got a document.

Now, key in the next stage is to realize that the document is not the closet; when we edit the document, boxes of pencils don't magically appear in the closet.  What we need to implement is the illusion that the closet really is a document store.

In our real world model, we read the inventory report, and there aren't enough pencils.  So we create a new document -- a memo to the quartermaster that says "stock more pencils".  When we deliver the memo to the quartermaster, he decides how to get more pencils for the closet -- maybe he gets boxes out of storage, or buys some from the store next door.  Maybe he updates his todo list (another document)  and tells you he'll get back to you.


This is the basic idiom of HTML forms.  We create (POST) a new document to the API, and the API interprets that document as changes to be made to the closet.  The requisition document and the inventory document are different resources.  For that matter, the collection of requisition documents and the inventory document are different resources.  So you need a different namespace of identifiers to work with.
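In Java's built-in HTTP client, the memo to the quartermaster is just a POST of a new document to the requisitions resource; the URI and the schema are invented for the sketch:

    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;

    public class RequisitionClient {
        public static void main(String[] args) throws Exception {
            // A new document, in the requisitions namespace, not the inventory.
            HttpRequest memo = HttpRequest.newBuilder(URI.create("https://example.com/closet/requisitions"))
                    .header("Content-Type", "application/json")
                    .POST(HttpRequest.BodyPublishers.ofString("{\"item\":\"pencils\",\"boxes\":10}"))
                    .build();

            // The quartermaster (the API) decides how to act on the memo.
            HttpResponse<String> reply = HttpClient.newHttpClient()
                    .send(memo, HttpResponse.BodyHandlers.ofString());
            System.out.println(reply.statusCode());
        }
    }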

HTTP (but not HTML) also supports another approach.  Instead of interacting with the closet by submitting new documents, we could interact with the closet by proposing edits to the existing documents.

This, to my mind, feels a bit more declarative -- you describe in the edited document the state that you want the closet to be in, and it's up to the API to figure out the details of making that happen.  In our analogy, we've sent to the quartermaster a copy of the inventory with a bunch of corrections made to it, and he changes the state of the closet to match the document.

This is PUT -- specifically a PUT to the inventory resource.  Notice that it doesn't change what work the quartermaster needs to do to fill the closet; it doesn't change his schedule for doing that work, it doesn't change the artifacts that he generates while doing the work.  It just changes which document manipulation illusion we are choosing to support.

Now, HTTP is specific about the behavior of the imaginary document store we are mimicking: PUT is an upsert.  If we want fine-grained control of the contents of the closet ("more pencils, leave everything else alone"), then we need to upsert to a resource with a matching grain.

PATCH is another alternative to introducing finer grained resources; we send the patch to the server, it compares the patched version of the inventory document to the original version, and from there decides what changes need to be made to the closet.

These are all variations of the same fundamental idea -- the HTTP request describes the desired end state, and the implementation sitting behind the API figures out how to realize that end.
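A sketch of what "figures out how to realize that end" might look like on the server side, with the inventories flattened to simple item counts:

    import java.util.Map;

    public class ClosetReconciler {
        // Compare the desired inventory (from the PUT or PATCH) with the
        // current one, and derive the commands to issue; names are invented.
        public static void reconcile(Map<String, Integer> current,
                                     Map<String, Integer> desired) {
            desired.forEach((item, wanted) -> {
                int have = current.getOrDefault(item, 0);
                if (wanted > have) {
                    System.out.println("requisition " + (wanted - have) + " " + item);
                }
            });
        }

        public static void main(String[] args) {
            reconcile(Map.of("pencils", 2), Map.of("pencils", 10, "staplers", 1));
        }
    }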

Thursday, January 5, 2017

Backups?

Until you have performed a successful restore, it's not a backup.

Same idea, different spelling: nobody needs backups; what they need are restores.

Wednesday, January 4, 2017

TDD: A Tale of two Katas

Over the Holiday, I decided to re-examine Peter Seibel's Fischer Random Chess Kata.

I have, after all, a lot more laps under my belt than when I was first introduced to it, and I thought some of my recent studies would allow me to flesh things out more thoroughly.

Instead, I got a really educational train wreck out of it.  What I see now, having done the kata twice, is that the exercise (once you get the insight to separate the non determinable generator from the board production rules) is about applying specifications to a value type, rather than evaluating the side effects of behaviors.

You could incrementally develop a query object that verifies that some representation of a fair chess row satisfies the constraints -- the rules would come about as you add new tests to verify that some input satisfies/does-not-satisfy the specification, until the system under test correctly understands the rules.  Another way of saying the same thing: you could write a factory that accepts as input an unconstrained representation of a row, and produces a validated value type for only those inputs that satisfy the constraints.
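One possible shape for that query object, checking a row of pieces against the published Chess960 constraints; the string representation is my own choice:

    public class FairRowSpecification {
        // A row is a string like "RNBQKBNR", one character per square.
        public static boolean isSatisfiedBy(String row) {
            if (!sorted(row).equals("BBKNNQRR")) return false;  // the right pieces
            int king = row.indexOf('K');
            int rookA = row.indexOf('R');
            int rookB = row.lastIndexOf('R');
            if (king < rookA || king > rookB) return false;     // king between rooks
            int bishopA = row.indexOf('B');
            int bishopB = row.lastIndexOf('B');
            return bishopA % 2 != bishopB % 2;                  // opposite colors
        }

        private static String sorted(String s) {
            char[] chars = s.toCharArray();
            java.util.Arrays.sort(chars);
            return new String(chars);
        }
    }

So isSatisfiedBy("RNBQKBNR") holds, while a row with both bishops on the same color, or the king outside the rooks, does not.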

But, as RFC 1149.5 taught us, you can't push a stateless query out of a local minimum.

Realizing this -- that the battle I thought I was going to write about was doomed before I even reached the first green bar -- I decided to turn my attention to the bowling game kata.

Amusingly enough, I started from a first passing test, and then never moved off of the original green bar.

Part of the motivation for the exercise was my recent review of Uncle Bob's Dijkstra's Algorithm kata.  I wanted to play more with the boundaries in the test, and get a better feel for how they arise as a response to the iteration through the tests.

So I copied (less than perfectly) Uncle Bob's first green bar, and then started channeling my inner Kent Beck

Do you have some refactoring to do first?
With that idea in mind, I decided to put my attention on "maximizes clarity".  There's some tension in here -- the pattern that emerges is so obviously generic that one is inclined to suggest I was the victim of big design up front, and that I wasn't waiting for the duplication in tests to realize that pattern for me.  So on some level, one might argue that I've violated YAGNI.  On the other hand, if you can put a name on something, then it has already been realized -- you are simply choosing to acknowledge that realization, or not.

In doing that, I was surprised -- there are more boundaries in play than I had previously recognized.

There's a boundary between the programmer and the specification designer.  We can't think at the IDE and have it do the right thing, we actually need to type something; furthermore, that thing we type needs to satisfy a generic grammar (the programming language).

The specification designer is the code that is essentially responsible for "this is what the human being really meant."  It's the little DSL we write that makes introducing new specifications easy.

There's a boundary between specification design and the test harness -- we can certainly generate specifications for a test in more than one way, or re-use a specification for more than one test.  Broadly, the specification is a value type (describing input state and output state), whereas the test is behavior -- organized interactions with the system under test.

The interface between the test and the system under test is another boundary.  The specification describes state, but it is the responsibility of the test to choose when and how to share that state with the system under test.

Finally, there is the boundary within the system under test -- between the actual production code we are testing, and the adapter logic that aligns the interface exposed by our production code with that of the test harness.

This bothered me for a while -- I knew, logically, that this separation was necessary if the production code was to have the freedom to evolve.  But I couldn't shake the intuition that I could name that separation now, in which case it should be made explicit.

The following example is clearly pointless code
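(Something of this shape, presumably -- assuming a JUnit-style assertEquals and the kata's Game:)

    // Both arguments are produced by the very same computation;
    // this check cannot fail, so it cannot tell us anything.
    assertEquals(game.score(), game.score());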

And yet this is the code we write all the time
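(Again in the same assumed shape:)

    // One side was frozen when the test was written;
    // the other is computed by the code as it exists now.
    assertEquals(0, game.score());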

And that's not a bad idea -- we are checking that two outcomes, produced in different ways, match.
But the spellings are wrong: the names weren't telling me the whole story. In particular, I'm constantly having problems remembering the convention of which argument comes first.

It finally sank in: the boundary that I am trying to name is time. A better spelling of the above is:
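(In the same assumed vocabulary, with invented names:)

    // The arguments are separated by time, not by screen direction.
    assertEquals(scoreExpectedThen, scoreObservedNow);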

We write a failing test, then update an implementation to satisfy a check written in the past, and then we refactor the implementation, continuing to measure that the check is still satisfied after each change. If we've done it right, then we should be able to use our earliest checks until we get an actual change in the required behavior.

I also prefer a spelling like this, because it helps to break the symmetry that gives me trouble -- I don't need to worry any longer about whether or not I'm respecting screen direction, I just need to distinguish then from now.

The adapter lives in the same space; it's binding the past interface with the present interface.  The simplest thing that could possibly work has those two things exactly aligned.  But there's a trap there -- it's going to be a lot easier to make this seam explicit now, when there is only a single test, than later, when you have many tests using the wrong surface to communicate with your production code.

There's another interpretation of this sequence.  In many cases, the implementation we are writing is an internal element of a larger application.  So when we write tests specifically for that internal element, we are (implicitly) creating a miniature application that communicates more directly with that internal element.  The test we have written is communicating with the adapter application, not with the internal element.

This happens organically when you work from the outside in - the tests are always interfacing with the outer surface, while the rich behaviors are developed within.

The notion of the adapter as an application is a deliberate one -- the dependency arrow points from the adapter to the test harness.  The adapter is a service provider, implementing an interface defined by the test harness itself.  The adapter is also interfacing with the production code; so if you were breaking these out into separate modules, the adapter would end up in the composition root.

Key benefit of these separations; when you want to take a new interface out for a "test drive", you don't need to touch the tests in any way -- the adapter application serves as the first consumer of the new production interface.


Note that the checks were defined in the past, which is the heuristic that reminds you that checking is the responsibility of the test harness, not the adapter application.  The only check that the adapter can reliably perform is "is the model in an internally consistent state", which is nice, but the safety to refactor comes from having an independent confirmation that the application outputs are unchanged.

Another benefit to this exercise: it has given me a better understanding of primitive obsession.  Boundaries are about representations, and primitives are a natural language for describing representations.  Ergo, it makes sense that we describe our specifications with primitives, and we use primitives to communicate across the test boundary to the (implicit) adapter application, and from there to our proposed implementation.  If we aren't aware of the intermediate boundaries, or are deferring them, there's bound to be a lot of coupling between our specification design and the production implementation.