Wednesday, October 4, 2017

Value Objects, Events, and Representations

Nick Chamberlain, writing at BuildPlease:
Domain Models are meant to change, so changing the behavior of a Value Object in the Domain Model shouldn’t, then, affect the history of things that have happened in the business before this change.
Absolutely right.

The Domain Model is mutable, therefore having Events take dependencies on the Domain Model means that Events must become mutable - which they are not.

As your domain model evolves, you may add new invariants to be checked, or change existing ones on the Value Object that you’re serializing to the Event Store.
Fundamentally, what Chamberlain is suggesting here is that you may want to replace your existing model with one that enforces stricter postconditions.

That's a backwards-incompatible change; in semantic versioning, it would call for a major version bump. You want to be really careful about how you manage that, and you want to be certain that you design your solution so that the costs of doing that land in the right place.
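To make the hazard concrete, here is a minimal sketch (the `RoastDate` name is borrowed from the quote below; the date format and invariant are my own illustrative assumptions): a value object that gains a stricter postcondition can reject data that was perfectly valid when it was written.

```python
import re
from dataclasses import dataclass

# Hypothetical revision of a value object: suppose the first version
# accepted any string, and this version adds a stricter invariant.
@dataclass(frozen=True)
class RoastDate:
    value: str

    def __post_init__(self):
        # Stricter postcondition: dates must now be ISO-8601 (YYYY-MM-DD).
        if not re.fullmatch(r"\d{4}-\d{2}-\d{2}", self.value):
            raise ValueError(f"invalid roast date: {self.value!r}")

# A payload recorded before the change may be valid under the old rules
# but fail the new invariant when the history is replayed:
stored_payload = "Oct 4, 2017"
# RoastDate(stored_payload)  # raises ValueError on load
```

The stored event hasn't changed; the model's expectations have, and the replay path is where the incompatibility surfaces.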
If the Domain Model was mutable, we’d also need versioning it - having classes like RoastDate_v2… this doesn’t match the Ubiquitous Language.
Right - so that's not the right way to version the domain model.  The right way is to version the namespace.  The new post condition introduces a new contract, both at the point of change, and also bubbling up the hierarchy as necessary.  New implementations for those new contracts are introduced.
The composition root chooses the new implementations as it wires everything together.
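A sketch of that idea, with the two "versioned namespaces" stood in by classes (the `model.v1` / `model.v2` layout and the length check are illustrative assumptions, not a prescription):

```python
class RoastDateV1:
    """Stands in for model.v1.RoastDate — old contract: any string accepted."""
    def __init__(self, value: str):
        self.value = value

class RoastDateV2:
    """Stands in for model.v2.RoastDate — new contract, stricter postcondition."""
    def __init__(self, value: str):
        if len(value) != 10:  # stand-in for a real invariant check
            raise ValueError(f"invalid roast date: {value!r}")
        self.value = value

def composition_root():
    """The one place that chooses which implementation gets wired in."""
    return RoastDateV2  # flip here when the new contract goes live
```

The class the rest of the code sees is still "RoastDate" in the ubiquitous language; only the wiring point knows which versioned namespace it came from.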

"Events are immutable" is a design constraint, not an excuse to throw the baby out with the bathwater.

Yes, our understanding of the domain may evolve in ways that are incompatible with the data artifacts we created in the past.  From this, it follows that we need to treat this as a failure mode: how do we want that failure to manifest? What options can we support for recovery?

Presumably we want the system to fail safely; that probably means that we want a fail on load, an alert to a human operator, and some kind of compensating action that the operator can take to correct the fault and restore service.
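One way that fail-safe shape might look, as a sketch (the `parse` callback, exception type, and logger channel are all assumptions for illustration):

```python
import logging

logger = logging.getLogger("event-store")

class UnreadableEvent(Exception):
    """Raised when a stored event no longer satisfies current invariants."""
    def __init__(self, position):
        super().__init__(f"fail on load at position {position}")
        self.position = position

def load_stream(raw_events, parse):
    history = []
    for position, raw in enumerate(raw_events):
        try:
            history.append(parse(raw))
        except ValueError as err:
            # Fail safe: stop the replay, alert a human operator, and surface
            # enough context (the position) for a compensating action.
            logger.error("unreadable event at position %d: %s", position, err)
            raise UnreadableEvent(position) from err
    return history
```

The point is that the load fails loudly at the broken entry rather than silently admitting bad state, and the error carries what the operator needs to act.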

For instance, perhaps the right kind of compensating action is correcting entries.  If your history is tracked as an append-only collection of immutable events, then the operator will need to append the compensating entries to the live end of the stream.  So your history processing will need to be aware of that requirement.
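A minimal sketch of that requirement, assuming events are dicts and a correction carries the position it supersedes (the `Corrected` entry type and `supersedes` field are illustrative inventions):

```python
def append_correction(stream, position, corrected_payload):
    """Append a compensating entry to the live end; the faulty entry stays."""
    stream.append({"type": "Corrected",
                   "supersedes": position,
                   "payload": corrected_payload})

def effective_history(stream):
    """Replay that honours corrections appended later in the stream."""
    events = list(stream)
    for entry in stream:
        if entry.get("type") == "Corrected":
            events[entry["supersedes"]] = {"type": "Event",
                                           "payload": entry["payload"]}
    # The correction markers themselves are bookkeeping, not history.
    return [e for e in events if e.get("type") != "Corrected"]
```

Nothing is ever rewritten in place; the fold over the stream is what produces the corrected view.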

Another possibility would be to copy the existing history into a new stream, fixing the fault as you go.  This simplifies the processing of the events within the model, but introduces additional complexity in discovering the correct stream to load.
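Sketched under the assumption of an in-memory store keyed by stream id, with a separate redirect table carrying the new discovery complexity (all names here are illustrative):

```python
def migrate_stream(store, redirects, old_id, new_id, fix):
    """Copy the old stream into a new one, repairing events as we go."""
    store[new_id] = [fix(event) for event in store[old_id]]
    # The added complexity lives here: readers must be routed to the
    # corrected stream rather than the one they originally knew about.
    redirects[old_id] = new_id

def load(store, redirects, stream_id):
    return store[redirects.get(stream_id, stream_id)]
```

The model's replay logic stays simple because the events it sees are already clean; the price is paid once, at stream discovery.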

We might also want to give some thought to the impact of a failure; should it clobber all use cases that touch the broken stream?  That maximizes your chance of discovering the problem.  On the other hand, being more explicit about how data is loaded for each use case will allow you to continue to operate outside of the directly impacted areas.

My hunch is that investing early design capital to get recovery right will also ease the constraints on how we represent data within the domain model.  At the boundaries, the events are just bytes; but within the domain model, where we are describing the changes in the business, the interface of events is described in the ubiquitous language, not in primitives.
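That boundary can be sketched as a pair of translation functions: bytes on the outside, a named domain event on the inside (the `CoffeeRoasted` event and its JSON shape are illustrative assumptions):

```python
import json
from dataclasses import dataclass

# Inside the model, the event speaks the ubiquitous language...
@dataclass(frozen=True)
class CoffeeRoasted:
    roast_date: str

# ...and only at the boundary does it become bytes.
def to_bytes(event: CoffeeRoasted) -> bytes:
    return json.dumps({"type": "CoffeeRoasted",
                       "roast_date": event.roast_date}).encode("utf-8")

def from_bytes(raw: bytes) -> CoffeeRoasted:
    data = json.loads(raw.decode("utf-8"))
    return CoffeeRoasted(roast_date=data["roast_date"])
```

Keeping the serialization concern in these two functions is what leaves the domain model free to describe changes in business terms rather than primitives.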

ps: Versioning in an Event Sourced System (Young, 2017) is an important read when you are thinking about messages that evolve as your requirements change.
