Monday, October 2, 2017

A not so simple trick

Pawel Pacana: "Event Sourcing is like having two methods when previously there was one."

Me: "Noooooooo"

In fairness, the literature is a mess.  Let's see what we can do about separating out the different ideas.

Data Models

Let's consider a naive trade book as an example; it's responsible for matching sell orders and buy orders when the order prices match.  So the "invariant", such as it is, is that we are never holding unmatched buy orders and sell orders at the same price.

Let's suppose we get three sell orders; two offering to sell 100 units at $200, and between them a third offer to sell 75 units at $201.  At that point in the action, the data model might be represented this way.
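
A plausible sketch, assuming a JSON-style document; the field names are invented for illustration:

    // Unmatched orders are held in arrival order.
    const tradeBook = {
      sells: [
        { quantity: 100, price: 200 },
        { quantity: 75,  price: 201 },
        { quantity: 100, price: 200 },
      ],
      buys: [],
      matches: [],
    };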


The next order wants to buy 150 units at $200, and our matching algorithm goes to work. The resulting position might have this representation.
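
Assuming first-in, first-out matching at each price, the 150-unit buy consumes the first $200 sell entirely and half of the second; a sketch of the result:

    const tradeBook = {
      sells: [
        { quantity: 75, price: 201 },  // no buyer at this price yet
        { quantity: 50, price: 200 },  // what's left of the second $200 sell
      ],
      buys: [],                        // the buy was fully matched
      matches: [
        { quantity: 100, price: 200 },
        { quantity: 50,  price: 200 },
      ],
    };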


And after another buy order arrives, we might see a representation like
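
Assuming, purely for illustration, that the second order is a buy of 75 units at $201, clearing the remaining $201 sell:

    const tradeBook = {
      sells: [
        { quantity: 50, price: 200 },
      ],
      buys: [],
      matches: [
        { quantity: 100, price: 200 },
        { quantity: 50,  price: 200 },
        { quantity: 75,  price: 201 },
      ],
    };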


After each order, we can represent the current state of the trade book as a document.

There is an alternative representation of the trade book; rather than documenting the outcome of the changes, we document the changes themselves. Our original document might be represented this way
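
A sketch, with hypothetical event names:

    const history = [
      { type: "SellOrderPlaced", quantity: 100, price: 200 },
      { type: "SellOrderPlaced", quantity: 75,  price: 201 },
      { type: "SellOrderPlaced", quantity: 100, price: 200 },
    ];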


Then, when the buy order arrives, you could represent the state this way
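
Continuing the sketch, the buy is simply appended; the matches remain implicit in the data:

    const history = [
      { type: "SellOrderPlaced", quantity: 100, price: 200 },
      { type: "SellOrderPlaced", quantity: 75,  price: 201 },
      { type: "SellOrderPlaced", quantity: 100, price: 200 },
      { type: "BuyOrderPlaced",  quantity: 150, price: 200 },
    ];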


But in our imaginary trade book business, matches are important enough that they should be explicit, rather than implicit; so instead, we would more likely see this representation
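
A sketch, assuming a hypothetical OrdersMatched event:

    const history = [
      { type: "SellOrderPlaced", quantity: 100, price: 200 },
      { type: "SellOrderPlaced", quantity: 75,  price: 201 },
      { type: "SellOrderPlaced", quantity: 100, price: 200 },
      { type: "BuyOrderPlaced",  quantity: 150, price: 200 },
      { type: "OrdersMatched",   quantity: 100, price: 200 },
      { type: "OrdersMatched",   quantity: 50,  price: 200 },
    ];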


And then, after the second buy order arrives, we might see
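
Again assuming the illustrative 75-unit buy at $201:

    const history = [
      // ...the six events above, followed by:
      { type: "BuyOrderPlaced", quantity: 75, price: 201 },
      { type: "OrdersMatched",  quantity: 75, price: 201 },
    ];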


The two different models both suffice to describe equivalent states of the same entity. There are different trade-offs involved, but both approaches provide equivalent answers to the question "what is the state of the trade book right now?"

Domain Models

We can wrap either of these representations into a domain model.

In either case, the core interface that the application interacts with is unchanged -- the application doesn't need to know anything about how the underlying state is represented.  It just needs to know how to communicate changes to the model.  Thus, the classes playing the role of the "aggregate root" have the same exposed surface.  It might look like
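
A sketch of that surface; the method names are assumptions:

    // The application sees commands only; nothing here reveals the representation.
    interface TradeBook {
      placeSellOrder(quantity: number, price: number): void;
      placeBuyOrder(quantity: number, price: number): void;
    }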


The underlying implementation of the trade book entity is effectively the same in either case. Using a document based representation, we would see an outline like
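
A sketch, assuming the command methods mutate a private document:

    class DocumentBackedTradeBook implements TradeBook {
      private document = { sells: [], buys: [], matches: [] };

      placeSellOrder(quantity: number, price: number): void {
        // run the matching logic, then update the local document in place
      }

      placeBuyOrder(quantity: number, price: number): void {
        // same: match against open sells, then mutate the document
      }
    }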


Each time an order is placed, the domain model updates the local copy of the document. We get the same shape if we use the event backed approach...
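
A sketch, again with hypothetical names:

    class EventBackedTradeBook implements TradeBook {
      private history: Array<{ type: string; quantity: number; price: number }> = [];

      placeSellOrder(quantity: number, price: number): void {
        // decide what happened, then append the new events to the history
      }

      placeBuyOrder(quantity: number, price: number): void {
        // same shape: compute any matches, record them as events
      }
    }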


Same pattern, same shape. When you introduce the idea of persistence -- repositories, and copying the in-memory representation of the data to a durable store -- the analogs between the two hold. The document representation fits well with writing the state to a document store, and can of course be used to update a relational model, perhaps with the assistance of an ORM. But you could just as easily copy the event "document" into the store, or use the ORM to transform the collection of events into some relational form. It's just data at that point.

There are some advantages to the event representation when copying state to the durable store. Because the events are immutable, you don't even need to evaluate whether the original entries in the list have changed. You don't have to PUT the entire history; you can simply PATCH the durable store with the updates. These are optimizations, but they don't change the core of the patterns in any way.
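
A sketch of that optimization, assuming a store with an append operation and an aggregate that tracks its uncommitted events:

    // Persisting appends only the new events; the entries already in the
    // store are immutable and never revisited.
    function save(
      store: { append(streamId: string, events: object[]): void },
      streamId: string,
      uncommitted: object[],
    ): void {
      store.append(streamId, uncommitted); // a PATCH of new entries, not a PUT of the history
    }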

Projections

Event histories have an important property, thanks to their append only nature -- updates are non-destructive.  You can create from the event history any document representation you like; you only need to have an understanding of how to represent a history of no events, and how each event type in turn causes the document representation to change.
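
A sketch of that understanding: an empty document plus one transition per event type, using the names assumed earlier (the deduction logic here is simplified):

    type Order = { quantity: number; price: number };
    type Event = { type: "SellOrderPlaced" | "BuyOrderPlaced" | "OrdersMatched" } & Order;
    type Doc = { sells: Order[]; buys: Order[]; matches: Order[] };

    // how to represent a history of no events
    const empty: Doc = { sells: [], buys: [], matches: [] };

    // deduct a matched quantity from the oldest open order at that price
    function deduct(orders: Order[], match: Order): Order[] {
      const result = [...orders];
      const i = result.findIndex(o => o.price === match.price);
      if (i < 0) return result;
      const remaining = result[i].quantity - match.quantity;
      if (remaining > 0) result[i] = { price: match.price, quantity: remaining };
      else result.splice(i, 1);
      return result;
    }

    // how each event type changes the document
    function apply(doc: Doc, event: Event): Doc {
      const order = { quantity: event.quantity, price: event.price };
      switch (event.type) {
        case "SellOrderPlaced":
          return { ...doc, sells: [...doc.sells, order] };
        case "BuyOrderPlaced":
          return { ...doc, buys: [...doc.buys, order] };
        case "OrdersMatched":
          return {
            sells: deduct(doc.sells, order),
            buys: deduct(doc.buys, order),
            matches: [...doc.matches, order],
          };
      }
    }

    function project(history: Event[]): Doc {
      return history.reduce(apply, empty);
    }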


What we have here is effectively a state machine; you load the start state and then replay each of the state transitions to determine the final state.

This is a natural approach to take when trying to produce "read models", optimized for a particular search pattern.  Load the empty document, replay the available events into it, cache the result, obtain more events, replay those, cache this new result, and so on.  If the output representation is lost or corrupted, just discard it and replay the complete history of the model.
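
A sketch of that loop, reusing the projection machinery above and assuming a cache that remembers the last document and its position in the history:

    function refresh(
      store: { readFrom(position: number): Event[] },  // hypothetical event store
      cache: {                                         // hypothetical snapshot cache
        load(): { doc: Doc; position: number } | null;
        save(snapshot: { doc: Doc; position: number }): void;
      },
    ): Doc {
      const { doc, position } = cache.load() ?? { doc: empty, position: 0 };
      const newEvents = store.readFrom(position);
      const updated = newEvents.reduce(apply, doc);
      cache.save({ doc: updated, position: position + newEvents.length });
      return updated;
    }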

There are three important facets of these projections to pay attention to:

First, the motivation for the projections is that they serve queries much more efficiently than trying to work with the event history directly.

Second, because replaying an entire event history can be time-consuming, the ability to resume the projection from a previously cached state is a productivity win.

Third, a bit of latency in the read use case is typically acceptable, because there is no risk that querying stale data will corrupt the domain.

The Tangle

Most non-trivial domain models require some queries when updating the model.  For instance, when we are processing an order, we need to know which previously unmatched orders were posted with a matching price.  If the domain requires first in first out processing, then you need the earliest unmatched order.
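
With a document in hand, that query is a simple scan; a sketch, reusing the Doc shape above:

    // Hypothetical query: the earliest unmatched sell posted at this price.
    // sells is kept in arrival order, so find() gives first-in, first-out.
    function earliestMatchingSell(doc: Doc, buyPrice: number): Order | undefined {
      return doc.sells.find(sell => sell.price === buyPrice);
    }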

Since projections are much better for query use cases than the raw event stream, the actual implementation of our event backed model probably cheats by first creating a local copy of a suitable projection, and then using that to manage the queries
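
A sketch of the cheat, reusing the projection machinery above:

    class EventBackedTradeBook {
      private document: Doc;

      constructor(history: Event[]) {
        // the cheat: fold the history into a local projection up front...
        this.document = project(history);
      }

      // ...so queries run against the document rather than the raw events
      private openSellsAt(price: number): Order[] {
        return this.document.sells.filter(sell => sell.price === price);
      }
    }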


That "solves" the problem of trying to use the event history to support queries directly, but it leads directly into the second issue listed above; processing the entire event history on demand is relatively expensive. You'd much prefer to use a cached copy.

And while using a cached copy for a write is fine, using a stale copy for a write is not fine.  The domain model must be working with representations that reflect the entire history when enforcing invariants.  More particularly, if a transaction is going to be consistent, then the events calculated at the conclusion of the business logic must take into account earlier events in the same transaction.  In other words, the projection needs to be continuously updated by the work in progress.

This leads to a design where we are using two coordinated data models to support writes: the event backed representation that will eventually be used to update the durable store, and the document backed representation that is used to support the queries needed to enforce the invariant.  The trade book, in effect, becomes its own cache.
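
A sketch of the coordinated pair; changes is a hypothetical list of the events discovered in this update, and decide is simplified to a single match:

    class EventBackedTradeBook {
      private document: Doc = empty;  // the query model: the book's own cache
      private changes: Event[] = [];  // the new events, destined for the store

      placeBuyOrder(quantity: number, price: number): void {
        for (const event of this.decide(quantity, price)) {
          this.changes.push(event);                     // record the change...
          this.document = apply(this.document, event);  // ...and feed the work
        }                                               // in progress back in
      }

      // Hypothetical: compute this command's events by querying the
      // continuously updated document.
      private decide(quantity: number, price: number): Event[] {
        const events: Event[] = [{ type: "BuyOrderPlaced", quantity, price }];
        const open = earliestMatchingSell(this.document, price);
        if (open) {
          events.push({
            type: "OrdersMatched",
            quantity: Math.min(quantity, open.quantity),
            price,
          });
        }
        return events;
      }
    }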



We could, of course, mutate the document directly, rather than projecting the new events into it. But that introduces a risk that the document representation we have now won't match the one we create later, when we have only the events to work from. Projecting the events as they occur also ensures that any projections we make to support reads will have the same state that was used while performing the writes.

To add to the confusion: once the document representation of the model has been rehydrated, the previously committed events don't contribute; they aren't going to change, the document supports the queries, and updating the event store will only append the new information. Consequently, the existing history gets discarded, and the use case only tracks the new events that have been discovered locally in this update.
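
A sketch of that shape: the committed history is folded into the document and then dropped; only the new events are kept for the store:

    // Hypothetical use-case setup, reusing apply and empty from above.
    function load(store: { read(streamId: string): Event[] }, streamId: string) {
      const document = store.read(streamId).reduce(apply, empty);
      const changes: Event[] = [];  // starts empty; only these get written back
      return { document, changes };
    }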
