Decision provenance is a requirement for “two-way door” decision-making

“What should we build next?” is a question that most product teams ask themselves every two weeks. Usually, the answer is focused on outputs: some variation on “the next item at the top of the backlog.”

But the product is not merely shipped code. Our outputs are governed by product bets — the belief that the code we are shipping will help us move a critical metric and achieve an outcome that is meaningful to our customers and our business. What makes it a bet is the uncertainty about whether this particular output will actually move the lever we want.

The uncertainty of these bets is why the concept of a two-way door decision is so useful. In an ambiguous situation, the best move becomes the one that creates the most clarity: we make the best choice we can and watch for signals.

This means that “what should we build?” is inextricably linked to “what do we need to learn?” To plot where we’re going we need to know where we’ve already been: the provenance behind our past decisions.

Unfortunately, when a decision is wrapped up in a deliverable for hand-off, the “why” behind it is usually lost. Sometimes teams are tasked with inventing hypotheses that would justify the decision, but usually you just hear: the VP thought it was the right thing to do, so now we are doing it.

But all decisions have a lifespan. Even if the VP’s decision was informed by the best available data, its quality will degrade over time: what’s true today is not always true tomorrow. Gradually, the data that the decision was based on will transform into mesofacts. Decision-makers will keep citing the same artifacts and objectives — “mesodecisions,” if you will — even though they now reflect a world that only exists in the past.

Cartoon: a caveman sitting in a boardroom, speaking into a phone: “But we have always done it this way!” (Source: Barron’s)

Front-line teams — whether customer-facing employees or product teams with a focused view of one metrics dashboard — are usually the first to come into contact with the new reality. These teams are faced with the classic Chesterton’s Fence dilemma: a past decision that is not safe to challenge precisely because they do not know the reasoning behind it.

Mesodecisions that were justified by authority — whether an individual’s or a group’s — effectively become irreversible because we now have no way of evaluating whether or not we should turn around and walk back through the two-way door.

In this environment, it is much harder to practice one of the most important principles of effective teams: “Disagree and Commit.” If you don’t know when — or whether — a decision will get revisited, it becomes much more painful to commit to a decision that doesn’t feel 100% right.

This creates a vicious cycle: if it becomes harder and harder to agree on a course of action, the mesodecision remains unchanged longer and longer. Mesofacts become entrenched as unbreakable tenets, just “the way we do things around here.”

The tenacity of mesodecisions rests on one property: the lack of transparency behind the “why.” When the VP is explicit that Team Widget’s work is expected to increase click-through rates, and the first release of the Widget leads to a 10% drop, it is clear to everyone that Team Widget’s framing decisions need to be reviewed, and anyone is empowered to call that out to the VP.

This complete reversal of outcomes is made possible by one simple change: maintaining the provenance of the decision to form Team Widget in the first place.
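The post doesn’t prescribe any format for decision provenance, but the ingredients above (a bet, the metric it is expected to move, the evidence behind it, and a lifespan) can be sketched as a minimal decision record. Everything here is a hypothetical illustration: the class, field names, and the `is_stale` check are assumptions for the sake of the sketch, not an established practice or tool.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class DecisionRecord:
    """One entry in a hypothetical decision log (all fields illustrative)."""
    decision: str            # what was decided, e.g. "form Team Widget"
    hypothesis: str          # the bet: why we believe it will work
    metric: str              # the lever the output is expected to move
    evidence: list[str]      # dated sources the bet was based on
    review_by: date          # the decision's lifespan: when to re-evaluate

    def is_stale(self, today: date) -> bool:
        # Past its lifespan, the supporting data are mesofacts:
        # the decision must be re-validated before being cited again.
        return today >= self.review_by

# The Team Widget example from the text, as a record (dates are made up):
widget_bet = DecisionRecord(
    decision="Form Team Widget",
    hypothesis="A widget on the landing page will raise engagement",
    metric="click-through rate",
    evidence=["2024-01 user interviews", "2024-02 funnel analysis"],
    review_by=date(2024, 6, 1),
)
```

With a record like this, “the VP thought it was the right thing to do” is replaced by a citable hypothesis and an explicit expiry, so anyone can check whether the decision is still within its lifespan before re-litigating it.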

The concept of provenance appears across many fields, but to me it is most salient in data governance. The usefulness of data relies on its provenance: if it is out of date or from an unreliable source, the data may not be fit for purpose.

Sadly, the concept rarely appears in software development practice. Across teams at all stages of product maturity, I see data gathering treated as a brief phase: “we did one week of validation a year ago, and defined our annual roadmap based on that.” Their developers may ship releases on a two-week sprint cycle (or even continuously!) but because they are acting on months-old information, their true sprint length is also measured in months.

Research cannot be treated as a phase, because the world never stops changing. Continuous deployment is worthless without continuous learning to tell you when your decisions have exceeded their lifespan. Without the ability to immediately re-evaluate the decision, we turn every two-way door we find into a one-way door.

Photo: an elegant wooden holder for three cat bowls in a row. But when the cats eat from it, they do not line up; they pile on top of one another in a weird and awkward way.
Field research is the only way to compare our assumptions to real user workflows (original post)

Thus, a team working within the framework of product bets needs to intentionally curate the data that informs its decisions. It’s not enough to ship ideas that seem good or get “validated” and then wait for NPS scores or other trailing indicators to reflect the impact. The team needs to identify the most important gaps in its knowledge and then define its next step — whether it involves writing code or not — based on the quickest or cheapest way to close that knowledge gap.

In orgs that maintain decision provenance, it is safe for teams to commit to a decision — even one that some disagree with — because they know that the decision has a lifespan.

And because the team is building to learn, that lifespan is completely within their control because it is defined by the data that their work generates. And since they are curating their data with intention, they can move beyond simply validating ideas and instead dig into critical nuances. In what circumstances might their operating assumptions not be true? How will they find those circumstances?

When this team gathers new data — through the most appropriate method, whether high-fidelity work like shipping an MVP or rapid research methods like speed dating — they are able to follow the provenance of their decisions to identify which ones need to be revisited. If the new data shows that the decision was wrong, they can walk back through the two-way door. If not, they can close that door for good and prevent stakeholders from “circling back” to appealing but disproven ideas.
