Thursday, March 2, 2017

DDD Repository Interfaces

Composed in response to Vladimir Khorikov.

One issue is that the above interface doesn’t constitute an actual abstraction. It just duplicates the concrete class’s functionality. The Principle of Reused Abstractions tells us that, in order for an interface to become one, it needs to have more than one implementation.

If we reboot the Wayback Machine, and take a look at the description provided by Eric Evans, we see at once the existence of other implementations.

Another transition that exposes technical complexity that can swamp the domain design is the transition to and from storage. This transition is the responsibility of another domain design construct, the REPOSITORY.

The idea is to hide all the inner workings from the client, so that client code will be the same whether the data is stored in an object database, stored in a relational database, or simply held in memory.
I would certainly expect to see an in-memory implementation, used by tests that protect me from errors in refactoring -- I'm going to burn the world as soon as the test passes anyway, so neither writing nor running integration boilerplate adds value.
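A minimal sketch of what such a pair might look like, assuming a hypothetical `Order` aggregate and `OrderRepository` interface (the names are illustrative, not from either post):

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Optional;

// Hypothetical aggregate, simplified to an identifier for illustration.
class Order {
    final String id;
    Order(String id) { this.id = id; }
}

// The interface the domain code depends on.
interface OrderRepository {
    Optional<Order> findById(String id);
    void save(Order order);
}

// The in-memory implementation: enough for refactoring tests, where the
// "world" is discarded as soon as the test passes.
class InMemoryOrderRepository implements OrderRepository {
    private final Map<String, Order> store = new HashMap<>();

    public Optional<Order> findById(String id) {
        return Optional.ofNullable(store.get(id));
    }

    public void save(Order order) {
        store.put(order.id, order);
    }
}
```

A test wires up `InMemoryOrderRepository` where production code would wire up the database-backed implementation; nothing else in the test changes.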

But Vladimir raises an interesting point:
Note that neither integration tests, nor unit tests would require seams that “abstract” the database out from the rest of the code. Unit tests just don’t involve anything other than isolated domain logic. Integration tests verify the database directly as part of the bounded context.
I love that -- it shows that he has dug deeper into the question, really thinking about the principles involved and whether or not they fit.
An application database (a database fully devoted to a single bounded context) is one of such systems. It belongs to your application only and not shared with anyone else.

An application with multiple writers is sharing. Your isolated domain logic doesn't share anything, so you can't check the behavior of conflicting writes that way. Trying to introduce conflicts, along all the paths that need them, during integration testing threatens many nightmares because of the combinatorial explosion of possibilities. If you are going to be refactoring your contingency pathways, you need a seam that reduces the overhead of checking the error handling to the point that you will actually pay the price. That requires a seam somewhere between the command handler and the process boundary, and the cost drops as the seam moves closer to the handler.
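One way to picture that seam, with hypothetical names throughout: a stub repository behind the interface forces a concurrency conflict on demand, so the handler's contingency path can be exercised without real concurrent writers or a real database.

```java
// Hypothetical conflict signal for illustration.
class ConcurrencyConflict extends RuntimeException {}

// The seam: the handler depends on this interface, not on the database.
interface AccountRepository {
    void save(String id, long balance);
}

// A stub that simulates a conflicting write -- no second writer needed.
class ConflictingAccountRepository implements AccountRepository {
    public void save(String id, long balance) {
        throw new ConcurrencyConflict();
    }
}

// A command handler with a contingency path for conflicts.
class DepositHandler {
    private final AccountRepository repo;

    DepositHandler(AccountRepository repo) { this.repo = repo; }

    // Returns false when the write conflicts, so callers can react.
    boolean handle(String id, long amount) {
        try {
            repo.save(id, amount);
            return true;
        } catch (ConcurrencyConflict e) {
            return false; // the contingency path under test
        }
    }
}
```

Each conflict scenario becomes one cheap stub, rather than one carefully choreographed integration test.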

In addition, that seam is a natural place to introduce an in-memory cache; why reload an object from the book of record when the copy that you saved is still available? Why treat that optimization as an all-or-nothing affair when each composition root could be making its own choice on a case-by-case basis?

Vladimir is absolutely right that the repository (as written here) doesn't really align properly with boundaries. That thought is worth exploring in more detail.
