Monday, December 26, 2016

Observations on Write Models

I was recently involved in a protracted discussion of write models, specifically within the context of CQRS and event sourcing.  Some of those observations I was working out on the fly, and I want to take some time to express them clearly.

For a write (which is to say, a change of state) to be useful, it must be visible -- without a way to retrieve the state written, the write operation itself might as well be implemented as a no-op.

Three useful ways of implementing a write

1) Write into a shared space.  Examples of this include I/O, writing to a database, and writing to shared memory.  The write model makes its writes publicly readable.

2) Write into a private space, and provide an interface to repeat those writes to some service provider.  Changes to state are locally private; the publishing responsibility lives with the service provider.

3) Write into a private space, and provide an interface to query those writes.  Changes to state are locally private, but we also provide an interface that supports reads.
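
In code, the three shapes might look something like this -- a sketch, with every name invented for illustration:

    // 1) write directly into shared space -- the write is public as soon as it happens
    interface SharedSpaceWriter {
        void write(Event event) throws IOException;    // coupled to the I/O boundary
    }

    // 2) write privately, and repeat the writes to a service provider on request
    interface ReplayingWriter {
        void write(Event event);                       // buffered locally
        void replayTo(ServiceProvider provider);       // the provider does the publishing
    }

    // 3) write privately, and support queries for what was written
    interface QueryableWriter {
        void write(Event event);                       // buffered locally
        List<Event> writtenSoFar();                    // reads supported directly
    }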

The middle of these options is just a punt -- "we can solve any problem by adding an extra layer of indirection."

The first approach couples the aggregate to the process boundary -- any write to the aggregate is necessarily tied to the write at the boundary.  This is especially visible in Java, where you are likely to have checked exceptions thrown at the boundary that bubble up through the interface of the aggregate.
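
In Java, that coupling might look something like this (again a sketch, with invented names):

    class Account {
        private Balance balance;
        private final EventStream journal;    // a resource at the process boundary

        void deposit(Amount amount) throws IOException {    // the boundary's failure mode
            balance = balance.plus(amount);                 // leaks into the aggregate's interface
            journal.append(new Deposited(amount));          // the write into shared space
        }
    }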

The third option leaves the aggregate decoupled from the boundary: the aggregate tracks changes to its local state, but some other service is responsible for making those changes durable.

In DDD, this other service is normally the "repository"; a simplified template of a command handler will typically look something like
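
    // a sketch -- Repository, Aggregate, and the method names all stand in
    // for whatever the domain actually provides
    void handle(Command command) {
        Aggregate aggregate = repository.getById(command.aggregateId());
        aggregate.execute(command);    // changes to state stay private to the aggregate
        repository.save(aggregate);    // the repository makes those changes durable
    }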

Of course, within Repository.save() there must be logic to query the aggregate for state, or to pass to the aggregate some state writer callback. For instance, almost all of the event sourcing frameworks I've seen have all aggregates inherit some common base class that tracks changes and supports a query to fetch those changes to write them to an event store.
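
The usual shape of that base class, sketched here without any particular framework in mind:

    abstract class AggregateRoot {
        private final List<Event> changes = new ArrayList<>();

        protected void recordThat(Event event) {
            changes.add(event);                 // the change is tracked privately
        }

        List<Event> uncommittedChanges() {
            return new ArrayList<>(changes);    // the query used by Repository.save()
        }
    }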

I have two concerns with this design -- the first problem is that the Repository, which is supposed to be an abstraction of the persistence layer, seems to know rather a lot about the implementation details of the model.  The second problem is that the repository is supposed to
    provide the illusion of an in-memory collection of all objects of that type...  provide methods to add and remove objects, which will encapsulate the actual insertion or removal of data in the data store.
That illusion, I find, is not a particularly satisfactory one -- sending data across a process boundary is not like storing an object in memory.  Furthermore, in this definition, we're palming a card; we've switched the vocabulary from objects to data (state).  The conversion of one to the other is implied.

What if we wanted to make this conversion explicit -- what might that look like?  At SCNA 2012, Gary Bernhardt spoke of boundaries, and the roles that value types play.  Mark Seemann touched on a similar theme in 2011 -- noting that applications are not object oriented at the process boundary.  So what happens if we adapt our example with that principle in mind?
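
    // the same template, with values rather than objects crossing the boundary;
    // State is a value type, and persistence and model are illustrative names
    void handle(Command command) {
        State current = persistence.read(command.aggregateId());    // data in
        State next = model.update(current, command);                // a computation on values
        persistence.write(command.aggregateId(), next);             // data out
    }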

State here is a value type -- it is part of the API satisfied by the model; in particular, we want that API to be stable between old versions of the model and new versions of the model. The aggregate -- the object which carries with it the business rules that constrain how the state can change -- is really an implementation detail within the model itself. Which is to say that in simple cases we may be able to write
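
    // one possible spelling -- the aggregate reduced to a function from value to value
    State update(State current, Command command) {
        return new Aggregate(current).execute(command).state();
    }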

That's a bit "cute" for my taste, but it does make it really obvious how the automated tests for the model are going to go.

Written out in this style, I tend to think of the model in a slightly different way; the model performs simulations of business use cases on demand, and the application chooses which outcomes to report to the persistence component.

A curiosity that falls out of this design is that one model could support both event persistence and snapshot persistence, leaving the choice up to the application and persistence components.  The model's contribution might look like
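
    // a sketch of the model's side of the contract; the names are illustrative
    interface Simulation {
        List<Event> events();    // the changes made, for event persistence
        State state();           // the state reached, for snapshot persistence
    }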

Another effect of this design that I like is that the tracking of the original state is no longer hidden. If we're going to support "compare and swap persistence", or merge, then we really need to maintain some sense of the starting point, and I'm much happier having that exposed as an immutable value.
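
For instance, extending the Simulation sketch above with an original() query, and assuming the persistence component offers a compare-and-swap operation:

    State original = simulation.original();    // the starting point, an immutable value
    State changed = simulation.state();
    boolean swapped = persistence.compareAndSwap(id, original, changed);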

Additionally, because the current state isn't lost within the repository abstraction, we can think about resolving merge conflicts in the model, rather than in the application or the persistence component.  In other words, resolving a merge becomes another model simulation, and we can test it without the need to create conflicts via test doubles.
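
In sketch form -- merge as just another query of the model, a function of values that we can exercise directly in a test:

    interface Model {
        State merge(State original, State yours, State theirs);
    }

    // in a test: given three values, assert on the value that comes back
    assertEquals(expected, model.merge(original, yours, theirs));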

What does all this mean for aggregates?  Well, the persistence component is responsible for writing changes to the book of record, so the aggregate doesn't need to be writing into a public space.  You could have the model pass a writer to the aggregate, where the writer supports a query interface; I don't yet see how that's a win over having the aggregate implementation support queries directly.  So I'm still leaning towards including an interface that returns a representation of the aggregate.

Which means that we could still use the repository illusion, and allow the repository implementation to query that same interface for that representation.  At the whiteboard, I don't like the coupling between the model and persistence; but if it's simpler, and if we understand how to back out the simplification when it no longer meets our needs, then I don't think there's a problem.  It's important not to get locked into a pattern that prevents future innovation.