Vladimir Khorikov wrote recently about enforcing uniqueness constraints, which is the canonical example of set validation. His essay got me to thinking about validation more generally.
Uniqueness is relatively straightforward in a relational database; you include in your schema a constraint that prevents the introduction of a duplicate entry, and the constraint acts as a guard to protect the invariant in the book of record itself -- which is, after all, where it matters.
But how does it work? The constraint is effective because it blocks the write to the book of record. In the abstract, the constraint gets tested within the database while the write lock is held; the writes themselves have been serialized and each write in turn needs to be consistent with its predecessors.
If you try to check the constraint before obtaining the write lock, then you have a race; the book of record can be changed by another transaction that is in flight.
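To make that concrete, here's a minimal sketch in Python against SQLite (the table and column names are mine, not anything canonical): both clients pass a pre-check, because neither write has landed yet, but the schema constraint still blocks the duplicate at write time.

```python
import sqlite3

# In-memory book of record with a uniqueness constraint in the schema.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE users (email TEXT PRIMARY KEY)")

def is_taken(email):
    # The racy pre-check: it only sees writes already in the book of record.
    row = db.execute("SELECT 1 FROM users WHERE email = ?", (email,)).fetchone()
    return row is not None

# Two in-flight registrations both pass the pre-check...
assert not is_taken("alice@example.com")   # client A checks
assert not is_taken("alice@example.com")   # client B checks

db.execute("INSERT INTO users VALUES (?)", ("alice@example.com",))  # A wins

# ...but the constraint, tested while the write lock is held, blocks B.
try:
    db.execute("INSERT INTO users VALUES (?)", ("alice@example.com",))
except sqlite3.IntegrityError as e:
    print("duplicate rejected by the book of record:", e)
```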
Single writer sidesteps this issue by effectively making the write lock private.
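A toy illustration of the idea, with all of the names invented: one thread owns the model, every command funnels through its queue, and so the check and the write can never interleave.

```python
import queue
import threading

commands = queue.Queue()
emails = set()   # the single writer's private view of the book of record

def writer():
    # The only thread that ever touches the model; the "write lock" is implicit.
    while True:
        email, reply = commands.get()
        if email is None:
            break
        if email in emails:
            reply.put("rejected: duplicate")
        else:
            emails.add(email)
            reply.put("accepted")

threading.Thread(target=writer, daemon=True).start()

def register(email):
    reply = queue.Queue()
    commands.put((email, reply))
    return reply.get()

print(register("alice@example.com"))  # accepted
print(register("alice@example.com"))  # rejected: duplicate
commands.put((None, None))            # shut the writer down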
With multiple writers, each can check the constraint locally, but you can't prove that two changes in flight don't conflict with each other. The good news is that you don't need to - it's enough to know that the book of record hasn't changed since you checked it. Logically, each write becomes a compare and swap on the tail pointer of the model history.
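Sketched in code (the store and its API are hypothetical, and a real store would make the swap itself atomic), the pattern looks something like this:

```python
class ConcurrencyError(Exception):
    pass

class History:
    """An append-only model history; the 'tail pointer' is len(events)."""
    def __init__(self):
        self.events = []

    def append(self, expected_version, new_events):
        # Compare and swap: the write lands only if the tail is still where
        # the writer left it when the constraint was checked. (A real store
        # performs this test-and-append atomically.)
        if len(self.events) != expected_version:
            raise ConcurrencyError("history changed since the check")
        self.events.extend(new_events)

history = History()

def register(email):
    while True:  # retry loop: re-check against the latest history
        version = len(history.events)
        taken = {e["email"] for e in history.events}
        if email in taken:
            return "rejected: duplicate"
        try:
            history.append(version, [{"email": email}])
            return "accepted"
        except ConcurrencyError:
            continue  # someone else won the swap; check again

print(register("alice@example.com"))  # accepted
print(register("alice@example.com"))  # rejected: duplicate
```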
Of course, the book of record has to trust that the model actually performed the check before attempting the write.
And implementing the check this way isn't particularly satisfactory. There's not generally a lot of competition for email addresses; unless your problem space is actually the assignment of mailboxes, the constraint has usually been taken care of elsewhere. Introducing write contention (by locking the entire model) just to ensure that no duplicate email addresses exist in the book of record is a poor trade.
It's telling that this problem usually arises only after the model has been chopped into aggregates; an aggregate, after all, is an arbitrary boundary drawn within the model in an attempt to avoid unnecessary conflict checks.
But to ensure that the aggregates you checked haven't changed while you wait for your write to happen? That requires locking those aggregates for the duration.
To enforce a check across all email addresses, you also have to lock against the creation of new aggregates that might include an address you haven't checked. Effectively, you have to lock membership in the set.
If you are going to lock the entire set, you might as well take all of those entities and make them part of a single large aggregate.
Greg Young correctly pointed out long ago that there's not a lot of business value at the bottom of this rabbit hole. If the business will admit that mitigation, rather than prevention, is a cost-effective solution, the relaxed constraint will be a lot easier to manage.
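What mitigation might look like, very roughly (the read model and the follow-up action are stand-ins): detect duplicates out of band and raise a task for a human, rather than locking the world to prevent them.

```python
from collections import Counter

def detect_duplicates(registrations):
    # Runs out of band, over a read model; no locks, no write contention.
    counts = Counter(r["email"] for r in registrations)
    return [email for email, n in counts.items() if n > 1]

registrations = [
    {"user": "u1", "email": "alice@example.com"},
    {"user": "u2", "email": "bob@example.com"},
    {"user": "u3", "email": "alice@example.com"},  # slipped through
]

for email in detect_duplicates(registrations):
    # Mitigation: open a support case, notify the users, merge the accounts...
    print("duplicate detected, raise a task:", email)
```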