Thursday, May 24, 2018

enforce failed: Could not match item

The Symptom

[ERROR] Failed to execute goal org.apache.maven.plugins:maven-enforcer-plugin:3.0.0-M1:enforce (enforce-pedantic-rules) on project devjoy-security-kata: Execution enforce-pedantic-rules of goal org.apache.maven.plugins:maven-enforcer-plugin:3.0.0-M1:enforce failed: Could not match item :maven-deploy-plugin:2.8.2 with superset -> [Help 1]

The Stack Trace

org.apache.maven.lifecycle.LifecycleExecutionException: Failed to execute goal org.apache.maven.plugins:maven-enforcer-plugin:3.0.0-M1:enforce (enforce-pedantic-rules) on project devjoy-security-kata: Execution enforce-pedantic-rules of goal org.apache.maven.plugins:maven-enforcer-plugin:3.0.0-M1:enforce failed: Could not match item :maven-deploy-plugin:2.8.2 with superset
 at org.apache.maven.lifecycle.internal.MojoExecutor.execute(
 at org.apache.maven.lifecycle.internal.MojoExecutor.execute(
 at org.apache.maven.lifecycle.internal.MojoExecutor.execute(
 at org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject(
 at org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject(
 at org.apache.maven.lifecycle.internal.LifecycleStarter.execute(
 at org.apache.maven.DefaultMaven.doExecute(
 at org.apache.maven.DefaultMaven.doExecute(
 at org.apache.maven.DefaultMaven.execute(
 at org.apache.maven.cli.MavenCli.execute(
 at org.apache.maven.cli.MavenCli.doMain(
 at org.apache.maven.cli.MavenCli.main(
 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
 at sun.reflect.NativeMethodAccessorImpl.invoke(
 at sun.reflect.DelegatingMethodAccessorImpl.invoke(
 at java.lang.reflect.Method.invoke(
 at org.codehaus.plexus.classworlds.launcher.Launcher.launchEnhanced(
 at org.codehaus.plexus.classworlds.launcher.Launcher.launch(
 at org.codehaus.plexus.classworlds.launcher.Launcher.mainWithExitCode(
 at org.codehaus.plexus.classworlds.launcher.Launcher.main(
Caused by: org.apache.maven.plugin.PluginExecutionException: Execution enforce-pedantic-rules of goal org.apache.maven.plugins:maven-enforcer-plugin:3.0.0-M1:enforce failed: Could not match item :maven-deploy-plugin:2.8.2 with superset
 at org.apache.maven.plugin.DefaultBuildPluginManager.executeMojo(
 at org.apache.maven.lifecycle.internal.MojoExecutor.execute(
 ... 20 more
Caused by: java.lang.IllegalArgumentException: Could not match item :maven-deploy-plugin:2.8.2 with superset
 at com.github.ferstl.maven.pomenforcers.model.functions.AbstractOneToOneMatcher.handleUnmatchedItem(
 at com.github.ferstl.maven.pomenforcers.model.functions.AbstractOneToOneMatcher.match(
 at com.github.ferstl.maven.pomenforcers.PedanticPluginManagementOrderEnforcer.matchPlugins(
 at com.github.ferstl.maven.pomenforcers.PedanticPluginManagementOrderEnforcer.doEnforce(
 at com.github.ferstl.maven.pomenforcers.CompoundPedanticEnforcer.doEnforce(
 at com.github.ferstl.maven.pomenforcers.AbstractPedanticEnforcer.execute(
 at org.apache.maven.plugins.enforcer.EnforceMojo.execute(
 at org.apache.maven.plugin.DefaultBuildPluginManager.executeMojo(
 ... 21 more

The Cause


The Cure


The Circumstances

I had created a new Maven project in IntelliJ IDEA 2018.1. That produced a build/pluginManagement/plugins section where the plugin entries did not include a groupId element. Maven (3.3.9), as best I could tell, didn't have any trouble with that; but the PedanticPluginManagementOrderEnforcer did.
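
For illustration, the generated section looked roughly like this (a reconstruction, not the actual pom); declaring <groupId>org.apache.maven.plugins</groupId> explicitly on each entry would be the first thing I'd try:

    <build>
      <pluginManagement>
        <plugins>
          <plugin>
            <!-- no groupId element; Maven quietly assumes org.apache.maven.plugins -->
            <artifactId>maven-deploy-plugin</artifactId>
            <version>2.8.2</version>
          </plugin>
        </plugins>
      </pluginManagement>
    </build>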

My best guess at this point is that there is some confusion about the plugin search order.

But I'm afraid I'm at the end of my time box. Good luck, DenverCoder9.

Friday, May 11, 2018

Tests, Adapters, and the lifecycle of an API Contract

The problem that I faced today was preparing for a change of an API; the goal was to introduce new interfaces with new spellings that produce the same observable behaviors as the existing code.

Superficially, it looks a bit like paint by numbers.  I'm preparing for a world where I have two different implementations to exercise with the same behavior, ensuring that the same side effects are measured at the conclusion of the automated check.

But the pattern of the setup phase is just a little bit different.  Rather than wiring the automated check directly to the production code, we're going to wire the check to an adapter.

The basic principles at work are those of a dependency injection friendly framework, as described by Mark Seemann.
  • The client owns the interface
  • The framework is the client
In this case, the role of the framework is played by the scenario, which sets up all of the mocks, and verifies the results at the conclusion of the test.  The interface is a factory, which takes a description of the test environment and returns an instance of the system under test.
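
A minimal sketch of that arrangement (all of the names here are invented for illustration; none of them come from the actual code base):

    // The contract for the thing being exercised.
    public interface SystemUnderTest {
        void run();
    }

    // The client-owned interface: a description of the test environment
    // goes in, an instance of the system under test comes out.
    public interface SystemUnderTestFactory {
        SystemUnderTest create(TestEnvironment environment);
    }

    // The scenario plays the role of the framework: it sets up the mocks,
    // invokes the factory, and verifies the results at the end.
    public abstract class Scenario {
        protected abstract SystemUnderTestFactory factory();

        @org.junit.Test
        public void satisfiesTheSpecification() {
            TestEnvironment environment = TestEnvironment.withMocks(); // hypothetical helper
            SystemUnderTest subject = factory().create(environment);

            subject.run();

            org.junit.Assert.assertEquals(
                environment.expectedResult(),
                environment.observedResult());
        }
    }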

That returned instance is then evaluated for correctness, as described by the specification.

Of course, if the client owns the interface, then the production code doesn't implement it -- the dependency arrow points the wrong direction.

We beat this with an adapter; the automated check serves as a bridge between the scenario specification and a version of the production code.  In other words, the check stands as a demonstration that the production code can be shaped into something that satisfies the specification.
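
In code, the adapter might be as small as this (continuing with the invented names from the sketch above):

    // The production code knows nothing about SystemUnderTestFactory;
    // the check supplies the glue that shapes it to the contract.
    public class ProductionCodeAdapter implements SystemUnderTestFactory {
        @Override
        public SystemUnderTest create(TestEnvironment environment) {
            ProductionService service = new ProductionService(environment.dependencies());
            return service::execute; // shape the production entry point to the contract
        }
    }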

This pattern gives us a reasonably straightforward way to push two different implementations through the same scenario, allowing us to ensure that the implementation of the new API provides equivalent capabilities to its predecessor.

But I didn't discover this pattern trying to solve that problem...

The problem that I faced was that I had two similar scenarios, where the observable outcome was different -- the observable behavior of the system was a consequence of some configuration settings. Most of my clients were blindly accepting the default hints, and producing the normal result. But in a few edge cases, a deviation from the default hints produced a different result.

The existing test suite was generally soft on this scenario. My desired outcome was twofold -- I wanted tests in place to capture the behavior specification now, and I wanted artifacts that would demonstrate that the edge case behavior needed to be covered in new designs.

We wouldn't normally group these two cases together like this. We're more likely to have a suite of tests ensuring that the default configuration satisfies its cases, and that the edge case configuration satisfies a different suite of results.

We can probably get closer to the desired outcome by separating the scenario and its understanding of ExpectedResult from the specifications.
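
For instance (still re-using the invented names from the sketches above), the shared scenario can declare what varies, and each specification binds its own answers:

    // The shared scenario knows the steps; the specifications supply
    // the configuration, the expected result, and the implementation.
    public abstract class ConfiguredScenario {
        protected abstract SystemUnderTestFactory factory();
        protected abstract Configuration configuration();   // hypothetical hints type
        protected abstract ExpectedResult expectedResult();

        @org.junit.Test
        public void producesTheConfiguredResult() {
            TestEnvironment environment = TestEnvironment.with(configuration());
            SystemUnderTest subject = factory().create(environment);

            subject.run();

            org.junit.Assert.assertEquals(expectedResult(), environment.observedResult());
        }
    }

    // The default hints produce the normal result...
    public class DefaultHintsSpec extends ConfiguredScenario {
        protected SystemUnderTestFactory factory() { return new ProductionCodeAdapter(); }
        protected Configuration configuration() { return Configuration.defaultHints(); }
        protected ExpectedResult expectedResult() { return ExpectedResult.NORMAL; }
    }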

And likewise for the edge case.

In short, parallel suites with different expected results in shared scenarios, with factory implementations that are bound to a specific implementation.

The promise (actually, more of a hope) is that as we start moving the API contracts through their life cycles -- from stable to legacy/deprecated to retired -- we will along the way catch that there are these edge cases that will need resolution in the new contracts.  Choose to support them, or not, but that choice should be deliberate, and not a surprise to the participants.

Tuesday, May 1, 2018

Ruminations on State

Over the weekend, I took another swing at trying to understand how boundaries should work in our domain models.

Let's start with some assumptions.

First, we capture information in our system because we think it is going to have value to us in the future.  We think there is profit available from the information, and therefore we capture it.  Write-only databases aren't very interesting; we expect that we will want to read the data later.

Second, that for any project successful enough to justify additional investment, we are going to know more later than we do today.

Software architecture is those decisions which are both important and hard to change.
We would like to defer hard-to-change decisions as late as possible in the game.

One example of such a hard decision would be carving up information into different storage locations.  So long as our state is ultimately guarded by a single lock, we can experiment freely with different logical arrangements of that data and the boundaries within the model.  But separating two bounded sets of information into separate storage areas with different locks, and then discovering that the logical boundaries are faulty, makes a big mess.

Vertical scaling allows us to concentrate on the complexity of the model first, with the aim of carving out the isolated, autonomous bits only after we've accumulated several nines of evidence that it is going to work out as a long term solution.

Put another way, we shouldn't be voluntarily entering a condition where change is expensive until we are driven there by the catastrophic success of our earlier efforts.

With that in mind, let's think about services.  State without change is fundamentally just a cache.   "Change" without state is fundamentally just a function.  The interesting work begins when we start combining new information with information that we have previously captured.

I find that Rich Hickey's language helps me to keep the various pieces separate.  In easy cases, state can be thought of as a mutable reference to a value S.  The evolution of state over time looks like a sequence of updates to the mutable reference, where some pure function calculates new values from their predecessors and the new information we have obtained, like so
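
The snippet that belonged here didn't survive; a minimal reconstruction in Java, assuming op is the pure function just described:

    import java.util.concurrent.atomic.AtomicReference;
    import java.util.function.BiFunction;

    // State: a mutable reference to an immutable value of type S.
    class State<S, M> {
        private final AtomicReference<S> ref;
        private final BiFunction<S, M, S> op; // pure: (previous value, message) -> next value

        State(S initial, BiFunction<S, M, S> op) {
            this.ref = new AtomicReference<>(initial);
            this.op = op;
        }

        void onMessage(M message) {
            // the only mutation in the design: swap in the newly calculated value
            ref.updateAndGet(current -> op.apply(current, message));
        }
    }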

Now, this is logically correct, but it is very complicated to work with. op(), as shown here, is made needlessly complicated by the fact that it is managing all of the state S. Complete generality is more power than we usually need. It's more likely that we can achieve correct results by limiting the size of the working set. Generalizing that idea could look something like
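
this sketch (again a reconstruction, not the original snippet):

    // Limit the working set: decompose extracts the slice of state that
    // matters, op works only on that slice, and compose puts the result back.
    abstract class FocusedState<S, A, M> {
        abstract A decompose(S current);
        abstract A op(A relevant, M message);
        abstract S compose(S current, A updated);

        final S next(S current, M message) {
            return compose(current, op(decompose(current), message));
        }
    }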

The function decompose and its inverse compose allow us to focus our attention exclusively on those parts of current state that are significant for this operation.

However, I find it more enlightening to consider that it will be convenient for maintenance if we can re-use elements of a decomposition for different kinds of messages. In other words, we might instead have a definition like
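
Something like this, perhaps; the decomposition is fixed, and the calculation varies with the kind of message:

    // Different kinds of messages re-use the same decomposition; what
    // varies is the calculation applied to the extracted aggregate.
    abstract class SharedAggregate<S, A> {
        abstract A decompose(S current);
        abstract S compose(S current, A updated);

        final <M> S next(S current, M message,
                         java.util.function.BiFunction<A, M, A> calculation) {
            return compose(current, calculation.apply(decompose(current), message));
        }
    }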

In the language of Domain Driven Design, we've identified re-usable "aggregates" within the current state that will be needed to correctly calculate the next change, while masking away the irrelevant details. New values are calculated for these aggregates, and from them a new value for the state is calculated.

In an object oriented domain model, we normally see at least one more level of indirection - wrapping the state into objects that manage the isolated elements while the calculation is in progress.
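
In Java, that spelling might look like this sketch, where Aggregate stands in for whatever domain object wraps the decomposed state:

    // One more level of indirection: a short-lived mutable object wraps
    // the aggregate while the calculation is in progress.
    interface Aggregate<M> {
        void handle(M message); // mutates the object's internal state
    }

    abstract class ObjectOrientedState<S, M> {
        abstract Aggregate<M> wrap(S current);
        abstract S unwrap(S current, Aggregate<M> finished);

        final S next(S current, M message) {
            Aggregate<M> aggregate = wrap(current);
            aggregate.handle(message); // the mutation never escapes this method
            return unwrap(current, aggregate);
        }
    }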

In this spelling, the objects are mutable (we lose referential transparency), but so long as their visibility is limited by the current function the risks are manageable.

Sunday, April 22, 2018

Spotbugs and the DDD Sample application.

As an experiment, I decided to run spotbugs against my fork of the DDDSample application.

For the short term, rather than fixing anything in the first pass, I decided to simply suppress the warnings to address them later.

   9 @SuppressFBWarnings("EI_EXPOSE_REP")
   8 @SuppressFBWarnings("EI_EXPOSE_REP2")
These refer to the fact that java.util.Date isn't an immutable type; exchanging a reference to a Date means that the state of the domain model could be changed from the outside.  Primitive values and immutable types like java.lang.String don't have this problem.
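
The usual fix is a defensive copy at the boundary; a sketch with hypothetical names, not taken from the sample itself:

    import java.util.Date;

    public class Schedule {
        private final Date arrivalDeadline;

        public Schedule(Date arrivalDeadline) {
            this.arrivalDeadline = new Date(arrivalDeadline.getTime()); // copy on the way in
        }

        public Date arrivalDeadline() {
            return new Date(arrivalDeadline.getTime()); // copy on the way out
        }
    }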

   6 @SuppressFBWarnings("SE_BAD_FIELD")
This warning caught my attention, because it flags a pattern that I was already unhappy with in the implementation: value types that hold a reference to an entity.  Here, the ValueObject abstraction is tagged as Serializable, but the entities are not, and so it gets flagged.

That the serialization gets flagged is just a lucky accident.  The bigger question to my mind is whether or not nesting an entity within a value is an anti-pattern.  In many cases, you can do as well by capturing the identifier of the entity, rather than the entity itself.
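
For example (hypothetical names again), the value can capture the entity's identity, which is itself a value:

    // Instead of holding a Voyage (an entity), the value holds its identifier.
    public final class ScheduledActivity {
        private final VoyageNumber voyage; // identifies the entity without referencing it
        private final Date completionTime;

        public ScheduledActivity(VoyageNumber voyage, Date completionTime) {
            this.voyage = voyage;
            this.completionTime = new Date(completionTime.getTime());
        }
    }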

Those two issues alone cover about 75% of the issues flagged by spotbugs.

Friday, March 30, 2018

TDD: Tales of the Fischer King

Chess960 is a variant of chess introduced by Bobby Fischer in the mid 1990s. The key idea is that each player's pieces, rather than being assigned to fixed initial positions on the home rank, are instead randomized, subject to the following constraints:
  • The bishops shall be placed on squares of opposite colors
  • The king shall be positioned between the two rooks.
There are 960 different positions that satisfy these constraints.
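
That count is easy to verify by brute force; a small sketch (mine, not part of the original exercise):

    import java.util.HashSet;
    import java.util.Set;

    public class Chess960Count {
        static final char[] PIECES = {'R', 'N', 'B', 'Q', 'K', 'B', 'N', 'R'};

        public static void main(String[] args) {
            Set<String> legal = new HashSet<>();
            permute(PIECES.clone(), 0, legal);
            System.out.println(legal.size()); // prints 960
        }

        static void permute(char[] a, int k, Set<String> out) {
            if (k == a.length) {
                if (isLegal(a)) out.add(new String(a)); // the set removes duplicates
                return;
            }
            for (int i = k; i < a.length; i++) {
                swap(a, k, i);
                permute(a, k + 1, out);
                swap(a, k, i);
            }
        }

        static boolean isLegal(char[] a) {
            int b1 = -1, b2 = -1, r1 = -1, r2 = -1, king = -1;
            for (int i = 0; i < a.length; i++) {
                switch (a[i]) {
                    case 'B': if (b1 < 0) b1 = i; else b2 = i; break;
                    case 'R': if (r1 < 0) r1 = i; else r2 = i; break;
                    case 'K': king = i; break;
                }
            }
            return (b1 + b2) % 2 == 1        // bishops on squares of opposite colors
                && r1 < king && king < r2;   // king between the two rooks
        }

        static void swap(char[] a, int i, int j) {
            char t = a[i]; a[i] = a[j]; a[j] = t;
        }
    }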

In December of 2002, Peter Seibel proposed:
Here's an interesting little problem that I tried to tackle with TDD as an exercise....  Write a function (method, procedure, whatever) that returns a randomly generated legal arrangement of the eight white pieces.
Looking back at the discussion, what surprises me is that we were really close to a number of ideas that would become popular later:
  • We discuss property based tests (without really discovering them)
  • We discuss mocking the random number generator (without really discovering that)
  • We get really very close to the idea that random numbers, like time, are inputs
I decided to retry the exercise this year, to explore some of the ideas that I have been playing with of late.

Back in the day, Red Green Refactor was understood as a cycle; but these days, I tend to think of it as a protocol.  Extending an API is not the same flavor of Red as putting a new constraint on the system under test, the Green in the test calibration phase is not the same as the green after refactoring, and so on.

I tend to lean more toward modules and functions than I did back then; our automated check configures a specification that the system under test needs to satisfy.  We tend to separate out the different parts so that they can be run in parallel, but fundamentally it is one function that accepts a module as an argument and returns a TestResult.

An important idea is that the system under test can be passed to the check as an argument.  I don't feel like I have the whole shape of that yet, but the rough idea is that once you are inside the check, you are only talking to an API.  In other words, the checks should be re-usable to test multiple implementations of the contract.  Of course, you have to avoid a naming collision between the check and the test.

One consequence of that approach is that the test suite can serve the role of an acceptance test if you decide to throw away the existing implementation and start from scratch.  You'll need a new set of scaffolding tests for the new implementation to drive the dynamo.  Deleting tests that are over-fitting a particular implementation is a good idea anyway.

One of the traps I fell into in this particular iteration of the experiment: using the ordering bishops-queen-knights is much easier to work with than attacking the rook-king-rook problem.  I decided to push through it this time, but I didn't feel like it progressed nearly as smoothly as it did the first time through.
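
That ordering also makes "random numbers are inputs" concrete: pass the generator in as an argument, place the bishops, queen, and knights, and the rook-king-rook constraint resolves itself.  A sketch (mine, not the code from the exercise):

    import java.util.Random;

    class HomeRank {
        // The generator is an input; a seeded Random makes the check deterministic.
        static char[] arrange(Random rng) {
            char[] row = new char[8];
            row[2 * rng.nextInt(4)] = 'B';     // one bishop on an even square,
            row[2 * rng.nextInt(4) + 1] = 'B'; // the other on an odd square: opposite colors
            place(row, 'Q', rng.nextInt(6));   // queen on one of the 6 free squares
            place(row, 'N', rng.nextInt(5));   // first knight
            place(row, 'N', rng.nextInt(4));   // second knight
            place(row, 'R', 0);                // the three remaining squares, left to right,
            place(row, 'K', 0);                // take rook, king, rook -- the king lands
            place(row, 'R', 0);                // between the rooks by construction
            return row;
        }

        // Put the piece on the nth (0-based) empty square.
        static void place(char[] row, char piece, int n) {
            for (int i = 0; i < row.length; i++) {
                if (row[i] == 0 && n-- == 0) {
                    row[i] = piece;
                    return;
                }
            }
        }
    }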

There's hidden duplication in the final iteration of the problem; the strategies that you use for placing the pieces are tightly coupled to the home row.  In this exercise, I didn't even go so far as encapsulating the strategies.

Where's the domain model?  One issue with writing test first is that you are typically crossing the boundary between the tests and the implementation; primitives are the simplest thing that could possibly work, as far as the interface is concerned.

Rediscovering that "the simplest thing that could possibly work" was originally a remedy for writer's block was a big help in this exercise.  If you look closely at the commits, you'll see a big gap in the dates as I looked for a natural way to implement the code that I already had in mind.  A more honest exercise might have just wandered off the rails at that point.

I deliberately put down my pencil before trying to address the duplication in either the test or the implementation.  Joe Rainsberger talks about the power of the TDD Dynamo; but Sandi Metz rightly points out that keeping things simple leaves you extremely well placed to make the next change easy.

During the exercise, I discovered that writing the check is a test, in the sense described by James Bach and Michael Bolton.
Testing is the process of evaluating a product by learning about it through exploration and experimentation
This is precisely what we are doing during the "Red" phase; we are exploring our candidate API, trying to evaluate how nice it is to work with.  The implementation of the checks can later stand in as an example of how to consume the API.

The code artifacts from this exercise, and the running diary, are available on GitHub.

Sunday, March 11, 2018

TDD in a sound bite

Test Driven Development is a discipline that asserts that you should not implement functionality until you have demonstrated that you can document the requirements in code.

Monday, February 5, 2018

Events in Evolution

During a discussion today on the DDD/CQRS slack channel, I remembered Rinat Abdullin's discussion of evolving process managers, and it led to the following thinking on events and boundaries.

Let us begin with our simplified matching service; the company profits by matching buyers and sellers.  To begin, the scenario we will use is that Alice, Bob, and Charlie place sell orders, and David places a buy order.  When our domain expert Madhav receives this information, he uses his knowledge of the domain and recognizes that David's order should be matched with Bob's.

Let's rephrase this, using the ideas provided by Rinat, and focusing on the design of Madhav's interface to the matching process.  Alice, Bob, Charlie, and David have placed their orders.  Madhav's interface at this point shows the four unmatched orders; he reviews them, decides that Bob and David's orders match, and sends a message describing this decision to the system.  When that decision reaches the projection, it updates the representation, and now shows two unmatched orders from Alice and Charlie, and the matched orders from Bob and David.

Repeating the exercise: if we consider the inputs of the system, then we see that Madhav's decision comes after four inputs: the sell orders from Alice, Bob, and Charlie, and the buy order from David.

So the system uses that information to build Madhav's view of the system. When Madhav reports his decision to the system, it rebuilds his view from five inputs: the four orders, plus the record of his matching decision.

With all five inputs represented in the view, Madhav can see his earlier decision to match the orders has been captured and persisted, so he doesn't need to repeat that work.

At this point in the demonstration, we don't have any intelligence built into the model. It's just capturing data; Udi Dahan might say that all we have here is a database. The database collects inputs; Madhav's interface is built by a simple function - given this collection of inputs, show that on the screen.
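
In code, that simple function might be no more than a fold (a sketch with invented names; Input and View are hypothetical types):

    import java.util.List;

    class MatchingInterface {
        // The view is a pure function of the inputs collected so far.
        static View project(List<Input> inputs) {
            View view = View.empty();
            for (Input input : inputs) {
                view = view.apply(input); // fold each input into the representation
            }
            return view;
        }
    }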

Now we start a new exercise, following the program suggested by Rinat; we learn from Madhav that within the business of matching, there are a lot of easy cases where we might be able to automate the decision. We're not trying to solve the whole problem yet; in this stage our ambitions are very small: we just want the interface to provide recommendations to Madhav for review. Ignore all the hard problems, don't do anything real, just highlight a possibility. We aren't changing the process at all - the inputs are the same that we saw before.

We might iterate a number of times on this, getting feedback from Madhav on the quality of the recommendations, until he announces that the recommendations are sufficiently reliable that they could be used to reduce his workload.

Now we start a new exercise, where we introduce time as an element. If there is a recommended match available, and Madhav does not override the recommendation within 10 seconds, then the system should automatically match the order.

We're introducing time into the model, and we want to do that with some care. In 1998, John Carmack told us
If you don't consider time an input value, think about it until you do -- it is an important concept

This teaches us that we should be thinking about introducing a new input into our process flow.

Let's review the example, with this in mind. The orders arrive as before, and there are no significant changes from what we have already seen.

But treating time as an input introduces an extra row -- the passage of the ten seconds -- before the match is made.

While that seems simple enough, something arbitrary has crept in. For example, why would time be an input in only 10-second bursts?

Or perhaps it's a better separation of concerns to use a scheduler.

And we start to notice that things are getting complicated; what's worse, they are getting complicated in a way that Madhav didn't care about when he was doing the work by hand.

What's happening here is that we've confused "inputs" with "things that we should remember". We need to remember orders, we need to remember matches -- we saw that when Madhav was doing the work. But we don't need to remember time, or scheduling; those are just plumbing constructs we introduced to allow Madhav to intercede before the automation made an error.

Inputs and things we should remember were the same when our system was just a database. No, that's not quite right; they weren't the same, they were different things that looked the same. They were different things that happened to have the same representation because all of the complicated stuff was outside of the system. They diverged when we started making the system more intelligent.

With this revised idea, we have two different ways of thinking about the after situation. We can consider its inputs: the orders, plus the time and scheduling messages introduced by the plumbing.

Or instead we can think about its things to remember: just the orders and the matches.

"Inputs" and "Things to Remember" are really really lousy names, and those spellings don't really lend themselves well to the ubiquitous language of domain modeling. To remain in consistent alignment with the literate, we should use the more common terminology: commands and events.

In the design described so far, we happen to have an alignment of commands and events that is one to one. To illustrate, we can think of the work thus far as an enumeration of database transactions that look something like:
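
    tx1: OrderPlaced(Alice)
    tx2: OrderPlaced(Bob)
    tx3: OrderPlaced(Charlie)
    tx4: OrderPlaced(David)
    tx5: OrdersMatched(Bob, David)

(This listing is a reconstruction of the lost illustration: one command arrives, one event is stored.)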

In the next exercise, consider what would happen if we tried to introduce as an invariant that there should be no easily matched items (following the rules we were previously taught by Madhav) left unmatched. In other words, when David's order arrives, it should be immediately matched with Bob's, rather than waiting 10 seconds. That means that one command (the input of David's order) produces two different events: one to capture the observation of David's order, the other to capture the decision made by the automation as a proxy for Madhav.
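
The enumeration of transactions for the same scenario now changes shape at the fourth step (again, a reconstruction):

    tx1: OrderPlaced(Alice)
    tx2: OrderPlaced(Bob)
    tx3: OrderPlaced(Charlie)
    tx4: OrderPlaced(David), OrdersMatched(Bob, David)  -- one command, two events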

The motivation for treating these as two separate events is this: it most closely aligns with the events we were generating when Madhav was making all of the decisions himself. Whether we have Madhav in the loop making the decisions, or simply reviewing scheduled decisions, or leaving all of the decisions to the automation, the captured list of events is the same. That in turn means that these variations in implementation do not impact the rest of the system at all. We're limiting the impact of the change by ensuring that the observable results are consistent in all three cases.