The client sends commands to the write model. If the write model doesn't understand the messages sent by the client, then (as far as that client is concerned) the model is effectively immutable. The effective lifetime of the command itself is very brief - we need only momentary agreement.
The read model shares projections with the client. If the client doesn't understand the messages it receives, then (again, from the perspective of this client), the model is write only. The effective lifetime of the projection is again short; once the appropriate view has been updated in the client, the projection can be discarded - we need momentary agreement.
The write model shares events with the read model, but the pattern doesn't hold.
The distinction is simply this: events persist.
You might need to save commands off into a queue, to ensure that they don't stomp on each other, or to schedule them for later. But we know that it has to be OK for commands to evaporate, because failing fast is a correct expression of congestion control when the application will not be able to meet its SLA.
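As a minimal sketch of that fail-fast posture (the gateway and the queue capacity are hypothetical), a bounded queue that refuses new commands under congestion looks something like this:

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

// A marker for commands; the real type would come from the domain.
interface Command {}

class CommandGateway {
    // Capacity is the congestion threshold; 64 is an invented number.
    private final BlockingQueue<Command> pending = new ArrayBlockingQueue<>(64);

    /** Returns false immediately (fail fast) when the system is congested. */
    boolean submit(Command command) {
        // offer() never blocks: a full queue means we are already behind
        // the SLA, so the command is allowed to evaporate.
        return pending.offer(command);
    }
}
```

The point of `offer()` over `put()` is that the caller learns immediately that the command evaporated, instead of waiting in line for an answer that will arrive too late.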
Similarly, you might persist projections; but that's primarily a performance optimization -- when the cache expires the projection, it will be rebuilt. The client might want to insulate the user from the dynamic nature of the model for a time, but an eventually consistent view will eventually change. That's its nature.
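A projection, in this view, is just a fold over the event history; when the cache drops it, you replay. A sketch, with invented event and view types:

```java
import java.util.List;

interface Event {}
record ItemAdded(String sku) implements Event {}

// A read-model view that can always be rebuilt from the stream.
class CartSummary {
    private int itemCount = 0;

    void apply(Event e) {
        if (e instanceof ItemAdded) itemCount++;
    }

    static CartSummary rebuild(List<Event> history) {
        CartSummary view = new CartSummary();
        history.forEach(view::apply);
        return view;
    }
}
```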
Events are more than just a representation of change pushed across the boundary between the write model and the read model. They also cross the boundary between the write model of today and the write model of the future.
In particular, that means that putting domain objects directly into the representation of the event is dangerous, because we expect to be aggressively and continuously refining the domain model as we learn more about it. In other words, the instability of the domain model on the scale of the product's lifetime cautions us against mapping our persistent messages too closely to the domain.
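One way to keep that distance (the class names here are hypothetical): the persistent event records only stable, primitive facts, and the domain object maps to it explicitly rather than being serialized directly.

```java
import java.math.BigDecimal;
import java.time.Instant;

// The persistent contract: flat, primitive, deliberately boring.
record OrderPlaced(String orderId, BigDecimal total, Instant placedAt) {}

// The domain object is free to change shape release after release...
class Order {
    private final String id;
    private final BigDecimal total;

    Order(String id, BigDecimal total) {
        this.id = id;
        this.total = total;
    }

    // ...because the mapping to the message is explicit, not structural.
    OrderPlaced placed(Instant when) {
        return new OrderPlaced(id, total, when);
    }
}
```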
We need to prepare for event streams that include multiple instances of the same event emitted by different versions of the model. Which suggests that, for each message in the stream, we'll need a hint in the metadata that indicates the proper recipe for restoring the domain event -- the model in the past could not have written the event knowing which version of the model would be reading it in the future.
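To make the hint concrete (the envelope shape here is an assumption, not a standard): each stored message carries the identity of the schema it was written with, and the reader dispatches on that hint to pick a recipe.

```java
import java.util.Map;

// Every stored message travels with its writer's schema identity.
record Envelope(String eventType, int schemaVersion, byte[] payload) {}

interface Recipe {
    Object restore(byte[] payload);
}

class EventReader {
    private final Map<String, Recipe> recipes;

    EventReader(Map<String, Recipe> recipes) {
        this.recipes = recipes;
    }

    Object read(Envelope envelope) {
        // The writer could not know which reader it would meet,
        // so the hint ("OrderPlaced:2") travels with the message.
        String key = envelope.eventType() + ":" + envelope.schemaVersion();
        Recipe recipe = recipes.get(key);
        if (recipe == null) {
            throw new IllegalStateException("no recipe for " + key);
        }
        return recipe.restore(envelope.payload());
    }
}
```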
Where does that hint come from? Avro, and tag every event in the history of the model with the writer schema of that time? Thrift/Protocol Buffers, and hope that the evolution of the events can be supported entirely by non-destructive schema changes? JSON, because you get the easy part of the answer for free? Or take the hit to upgrade the immutable events in your store, so that all events conform to the same version of the API?
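There is also a middle path: upcast old versions at read time, so handlers only ever see the current shape and the store stays immutable. A sketch, with invented version shapes:

```java
// Two generations of the same fact.
record OrderPlacedV1(String orderId) {}
record OrderPlacedV2(String orderId, String currency) {}

class Upcaster {
    static OrderPlacedV2 upcast(OrderPlacedV1 old) {
        // The v1 writer never recorded a currency; supply the
        // historical default rather than touching the immutable event.
        return new OrderPlacedV2(old.orderId(), "USD");
    }
}
```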
My best guess today? You are going to need a schema eventually - this seems obvious to me as soon as other domains start subscribing to these events.
So the early guess is about how much value you can deliver before you take the plunge, and how expensive the first schema migration will be.