Something doesn't add up.
My dashboards look fine. Tasks completed per week, hours logged against projects, shipped features, closed tickets, initiatives moving from "in progress" to "done." The numbers say the operation is productive, but it doesn't always feel productive. That gap has been bothering me for months, and I think I finally figured out what's wrong.
The output is there. The work is getting done. But the choices that shape the work are taking forever to get made. A vendor contract that needs a yes or no idles for nine days because it requires input from two people who are both productive by every measure we track. A product question that should take an afternoon circles through three Teams threads and a meeting before someone calls it. The machine is running. The steering wheel is stuck.
I've been staring at this pattern long enough that I have a hypothesis: productivity is the wrong thing to measure. What I should be measuring is how long decisions take.
The Concept: Decision Latency
Decision latency is the elapsed time between when an organization has enough information to make a call and when someone actually makes it. Not the speed of work, but the speed of choice.
Productivity is a lagging indicator. It tells you what already happened. Decision latency, I think, is a leading one. It tells you what's about to happen. When decisions move fast, work flows. When decisions stall, even the most productive team starts building on sand because they're guessing at direction instead of executing against a confirmed one.
I suspect most operators confuse activity with momentum. A team can be extremely busy and completely stuck if the five decisions that determine what they should be busy on are sitting in someone's inbox. That's what I think is happening in my operation, and I want to prove it.
The Experiment
I'm going to run this for six weeks and see what happens. Here's the design.
Step one: classify every decision along two dimensions: type and reversibility.
Operational decisions keep the engine running. Vendor approvals, budget allocations under a threshold, process changes, hiring calls. My hypothesis is these should resolve in one to two business days.
Strategic decisions carry longer consequences. New product direction, partnership commitments, or organizational changes. I'll give these a week, maybe two depending on the scope.
The second dimension is reversibility. I have a strong suspicion that most of the decisions taking a week are, for all practical purposes, reversible. We're treating every choice like it's permanent, adding review layers and consensus loops to calls that could be made by one person and unwound the next day if they turned out badly. If the data confirms that, well, it's a system design failure, not a people failure.
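To make the bookkeeping concrete, here's roughly what I plan to log for each decision. A minimal sketch in Python; the field names and the SLA table are my working assumptions, not a finished tool.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta
from typing import Optional

# Working time standards from the buckets above: I take the upper
# bound of each (two business days operational, two weeks strategic)
# and approximate business days as calendar days to keep this simple.
SLA = {
    "operational": timedelta(days=2),
    "strategic": timedelta(days=14),
}

@dataclass
class Decision:
    title: str
    kind: str            # "operational" or "strategic"
    reversible: bool     # could one person unwind this the next day?
    ready_at: datetime   # information complete; only the call is missing
    decided_at: Optional[datetime] = None

    def over_sla(self, now: datetime) -> bool:
        # Measured against the clock defined in step two below.
        end = self.decided_at or now
        return end - self.ready_at > SLA[self.kind]
```

Keeping reversibility as its own flag, separate from type, is deliberate: the whole point of step one is to see whether that flag actually changes how long a call takes.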
Step two: start the clock at the right moment.
Not when a decision is first raised, because sometimes you genuinely need more data. The clock starts when the information is in hand and the only thing missing is the call. That's the honest measurement. Everything before that point is research; everything after it is latency, which is to say wasted time.
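In logging terms, that means two timestamps, and only the second one starts the meter. Same hypothetical sketch, same caveat: raised_at and ready_at are my names for those two moments, and the dates are made up.

```python
from datetime import datetime, timedelta

def decision_latency(raised_at: datetime,
                     ready_at: datetime,
                     decided_at: datetime) -> timedelta:
    # raised_at -> ready_at is research: we were still gathering
    # information, and that time counts against no one.
    # ready_at -> decided_at is latency: the call was makeable
    # and nobody made it.
    assert raised_at <= ready_at <= decided_at
    return decided_at - ready_at

# Raised on a Monday, information complete Wednesday, decided the
# following Tuesday: six days of latency, even though the item was
# "open" for eight.
latency = decision_latency(
    raised_at=datetime(2024, 3, 4),
    ready_at=datetime(2024, 3, 6),
    decided_at=datetime(2024, 3, 12),
)
print(latency.days)  # 6
```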
Step three: build a decision rights map.
Every recurring decision type gets an explicit owner and a time standard. Operational calls under a certain dollar threshold get pushed down to the people closest to the work, no escalation required. Strategic decisions get a defined window and a default: if no decision is made by the deadline, the recommendation on the table stands.
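Here's roughly how I picture the default clause executing, with placeholder owners, thresholds, and windows. The part that matters is the middle branch: past the deadline, silence resolves to the recommendation on the table instead of to limbo.

```python
from datetime import datetime, timedelta
from typing import Optional

# Hypothetical decision rights map. Owners, thresholds, and windows
# are placeholders, not our real ones.
RIGHTS = {
    "vendor_under_5k": {"owner": "team_lead", "window": timedelta(days=2)},
    "vendor_over_5k":  {"owner": "ops_lead",  "window": timedelta(days=5)},
    "partnership":     {"owner": "founder",   "window": timedelta(days=14)},
}

def resolve(kind: str, ready_at: datetime, now: datetime,
            explicit_call: Optional[str], recommendation: str) -> str:
    """Return the decision that stands right now."""
    rule = RIGHTS[kind]
    if explicit_call is not None:
        return explicit_call  # the owner made the call in time
    if now - ready_at > rule["window"]:
        # The default clause: past the window, silence is a choice,
        # and the recommendation on the table stands.
        return recommendation
    return "pending"          # still inside the window

# The nine-day vendor contract from earlier: under this map it
# resolves to the recommendation on day two instead of idling.
print(resolve("vendor_under_5k",
              ready_at=datetime(2024, 3, 1),
              now=datetime(2024, 3, 10),
              explicit_call=None,
              recommendation="approve"))  # -> approve
```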
That default clause is the part I'm most curious about. In theory, it reverses the incentive. Without it, delay is free. Nobody pays a visible cost for sitting on a decision. With a default in place, inaction becomes a choice with consequences, and people should start engaging faster because they'd rather make the call than have it made for them.
In theory. I'll find out if it works in practice.
What I Expect to Find
Three things I'm watching for.
First, I think most of the latency won't be at the top. I expect to find myself as a bottleneck on some items, but I'm betting the bigger pattern is middle-layer decisions where ownership is ambiguous. Nobody is sure whether they have the authority to decide, so they escalate or wait for implicit permission. The org chart says one thing. The actual decision flow says something else. If that's what shows up, the fix isn't about speed, but about clarity.
Second, I'm expecting the reversible decisions to take almost as long as the irreversible ones. If they do, that's another system problem, not a problem with the people involved. If our process treats a $2,000 software purchase the same as a $200,000 partnership commitment, then we've built friction into the wrong places.
Third, and this is the one that might sting, I suspect the decisions I'm slowest on are the ones I'm least excited about. Not the hardest ones... the boring ones. Contract renewals, vendor switches, operational tweaks that need attention but don't offer any intellectual reward. If I'm unconsciously deprioritizing them, the downstream cost could be significant because teams are waiting on me for things I haven't even realized are in my queue. I'm betting others fall into this bucket too.
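When the six weeks are up, testing all three of these comes down to grouping the log a few different ways: by owner, by reversibility, by who's sitting on what. A sketch of that analysis with made-up rows; plain Python, no libraries.

```python
from collections import defaultdict
from statistics import median

# Made-up log rows: (owner, kind, reversible, latency_days).
log = [
    ("ops_lead", "operational", True,  6.0),
    ("ops_lead", "operational", True,  4.5),
    ("founder",  "strategic",   False, 3.0),
    ("me",       "operational", True,  9.0),
]

def median_latency_by(rows, key_index):
    groups = defaultdict(list)
    for row in rows:
        groups[row[key_index]].append(row[3])
    return {key: median(vals) for key, vals in groups.items()}

# Expectation one: does latency cluster around particular owners?
print(median_latency_by(log, 0))
# Expectation two: do reversible calls really take as long as
# irreversible ones?
print(median_latency_by(log, 2))
# Expectation three: my own queue, slowest first.
print(sorted((r for r in log if r[0] == "me"), key=lambda r: -r[3]))
```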
Why I'm Sharing This Now
I could have waited until I had clean results and a tidy framework. But I think the experiment itself is worth sharing because the problem it's trying to solve is universal. Most businesses measure what's easy to measure. Tasks completed, revenue booked, tickets closed. Those numbers have the advantage of being clean and unambiguous. Decision latency is messier. It requires you to define when a decision was ready to be made and who owns it, and that means looking at your organization as it actually works rather than the way your org chart says it should.
The picture might be unflattering. Actually, I'm counting on it. That's the whole point.
I'll report back on what the numbers say. Right now all I have is a hypothesis and a clock that just started ticking.
Keep building,
-- JW