Monday, November 9, 2015

Domain Driven Design vs REST

For a couple of weeks now, I've been banging my head against Domain Driven Design (DDD), Command Query Responsibility Segregation (CQRS), and Representational State Transfer (REST).

I had been making a big, and probably common, mistake: I started looking for nouns.  The ubiquitous language gives me lovely nouns, and they seemed a natural fit for resources.

But I couldn't get the same natural feeling from the verbs.  In the ubiquitous language, I've got all of these lovely expressive verbs to motivate change in my business model.  In HTTP, I've got GET, PUT, POST, DELETE.

I finally tracked down Jim Webber's DDD in the Large presentation.  Yeah, that helped.

Application vs Domain

Taking it very slowly: the key idea underlying REST is "Hypermedia as the Engine of Application State".

Application State.

Why am I thinking about trying to represent my aggregate roots as resources?  Those are two completely different layers!  The application layer talks to the domain layer, there's an interface between the two, but there's no particular reason to expect a one to one correspondence.

All of the RESTful bits are going to be over there, with the anti-corruption logic.

Commands as Resources

Resources were the second bit that I had flat-out gotten wrong.  It had occurred to me that I could just cheat, turn the problem around, and use my commands as resources.

What Jim's talk clarified for me: that's not cheating, it's the whole damn point.

The hand-wavy argument is that we are looking for nouns, and the commands, as messages, are the nouns that we want.  No kidding, our resources are representations of little pieces of paper, a ToDo on a post-it note, that the client is passing to the server.

It still feels like a cheat to me.

But Jim in his talks points out, rightly, that if you are communicating over HTTP, then you are using a document management system to communicate your application's state.  So if you aren't passing documents, you're clearly Doing It Wrong.

That's getting closer, but I needed to make one more connection to sell myself.

Stepping back from the problem: ignore the REST constraint completely.  The client, over there, needs to communicate with the server here.  We're crossing a boundary; if we watch the wire, we're not going to see objects in the transfer, but data.  We see the same thing when the client queries a projection - Data Transfer Objects are being exchanged.

Data Transfer Object is a synonym for document.  Oh.

I find it a little bit easier to sneak up on the idea by looking at the interaction with the read model.  The client sends us a query, and we send back some projection of the model.  It doesn't make sense to think about caching a model (it changes in time), and caching the projection (which also changes in time) is similarly dubious, but caching a snapshot of the projection at some point in time -- that does make sense.  "Get me the report as of" sure sounds like a document to me.
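A minimal sketch of that distinction, with an invented event history and field names: the live projection keeps moving, but the report "as of" a point in time is a fixed document, so caching the snapshot is safe.

```python
from datetime import datetime, timezone

# Hypothetical event history for one account: (timestamp, amount deposited).
EVENTS = [
    (datetime(2015, 11, 1, tzinfo=timezone.utc), 100),
    (datetime(2015, 11, 5, tzinfo=timezone.utc), 50),
    (datetime(2015, 11, 9, tzinfo=timezone.utc), 25),
]

_snapshot_cache = {}

def report_as_of(as_of):
    """Project the history up to `as_of`.  The result is fixed for a
    given `as_of`, so caching this snapshot makes sense even though the
    live projection keeps changing."""
    if as_of not in _snapshot_cache:
        balance = sum(amount for ts, amount in EVENTS if ts <= as_of)
        _snapshot_cache[as_of] = {"as_of": as_of.isoformat(), "balance": balance}
    return _snapshot_cache[as_of]
```

Asking twice for the same "as of" yields the very same document; asking for a later point in time yields a different one.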

Something similar happens with domain events, and the communication between the read model and the write model.  "Stream of Events" might not sound like a document, but journal, ledger, log -- those certainly are.
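The journal idea can be sketched in a few lines (the class and method names here are mine, not from any particular event store): entries are only ever appended, never modified, which is what lets the stream be read as a stable document.

```python
class EventJournal:
    """An append-only journal of events: a ledger, not a mutable object."""

    def __init__(self):
        self._entries = []

    def append(self, event):
        self._entries.append(event)
        return len(self._entries)  # the event's position in the journal

    def read(self, since=0):
        # Reading is safe: it returns a copy of the history recorded so far.
        return list(self._entries[since:])
```

A reader that remembers its position can come back later and read only the entries appended since, exactly as it would page through a log file.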

Commands as documents?  I mentioned the ToDo analog earlier, but another good fit would be orders.
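What an order-as-document might look like, under assumed field names (none of these come from a real schema): a self-contained record that the client builds, serializes, and hands to the server.

```python
import json
import uuid

def place_order_command(customer_id, lines):
    """Build a command as a document the client can pass to the server.
    All field names here are illustrative."""
    return {
        "command": "PlaceOrder",
        "command_id": str(uuid.uuid4()),  # lets the server recognize retries
        "customer_id": customer_id,
        "lines": lines,
    }

# What actually crosses the wire is the document, not an object.
document = json.dumps(place_order_command("alice", [{"sku": "widget", "qty": 2}]))
```

Note the `command_id`: because the document names itself, the server can tell a retried copy from a fresh command, which matters below.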

Model Change

Of course, the whole point to this mess is to have an application that can interact with the model, so something needs to connect the two.

The read model, that's easily managed -- the queries arrive, the appropriate event history is loaded into the projection, and a report is generated and delivered.  All of these steps are idempotent and safe - we might change some state in memory, like caching the projection data for a time in case we are about to need it, but the model and the event history are not changed at all.

The write model is more difficult - changing the model is a side effect of the arrival of the command, and the command may arrive more than once.

For instance, the client puts a command, the command is received and executed, but the acknowledgement of the command is lost in transit.  As PUT is supposed to be idempotent, the client may send the command document a second time.

The model should simply run the commands given to it.  So either the model needs to handle commands idempotently, or the anti-corruption layer needs to do the right thing when a duplicated command arrives.

An example scenario: Alice puts to the server.  The server executes the command, updating the history of the model, and publishes a reply.  That reply is lost in transit.  Bob puts to the server, updating the model further.  Alice times out waiting for the acknowledgment that her command arrived, and resends it.  Charlie puts to the server, but his data is stale because he was working from a state prior to Alice's first command.

What should the responses look like?

Bob, clearly, successfully delivered a command that was executed; he should see a 201 Created response.  Charlie's command should be rejected, because the preconditions under which he submitted the command were not met, which probably means a 409 or 412 response.  Alice's first command should, like Bob's, get a 201.  When she resubmits the same command a second time, it is still supposed to be an idempotent operation, so she should still be getting the 201 response (and not the error code seen by Charlie).

That probably means a message store: to cache the response in the application layer for a time so that it can be replayed without interacting with the model.
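The whole scenario can be sketched with a toy application layer (the class, the bare version counter standing in for the model, and the tuple responses are all my inventions): responses are cached by command id, so a retried PUT replays the original response without touching the model again, while a stale precondition is rejected.

```python
class Application:
    """Toy application layer: a message store in front of a write model."""

    def __init__(self):
        self.version = 0      # stand-in for the write model's state
        self._responses = {}  # message store: command_id -> cached response

    def put_command(self, command_id, expected_version):
        # Replay: a duplicated command gets its original response back,
        # without the model being consulted at all.
        if command_id in self._responses:
            return self._responses[command_id]
        # Precondition check: commands built against stale state are rejected.
        if expected_version != self.version:
            return (409, "Conflict")
        self.version += 1  # the side effect happens exactly once
        response = (201, "Created")
        self._responses[command_id] = response
        return response
```

Running the Alice/Bob/Charlie scenario through this sketch gives the responses argued for above: two fresh commands each get a 201, Alice's retry replays her 201 from the store, and Charlie's stale command draws the conflict.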

My feeling is that the command should be considered immutable by the client; a second put that doesn't agree with the first should be rejected.  A DELETE by the client might be a way to incorporate an acknowledgement, and expire the data in the message store.
