Most teams inherit their metrics the way families inherit furniture. A few pieces are sturdy, some are sentimental, and several make no sense in a modern apartment. Yet the monthly review marches on, with slides full of color and very little signal. The cost of weak KPIs is quiet and compounding. Teams drift. Projects stretch. Leaders spend time debating definitions instead of deciding. After fifteen years of building operating rhythms for product, revenue, and operations groups, I learned that the cure is not more metrics, it is better logic. That is where the (un)Common Logic framework comes in.
(un)Common Logic is a practical way to design KPIs that actually drive decisions. It borrows from control theory, managerial accounting, and a little field anthropology. The promise is simple: if a KPI does not change a decision, it is decoration. If it changes the wrong decision, it is sabotage. Everything in the framework focuses on avoiding those two outcomes.
Why so many KPIs fail on contact with reality
When a dashboard looks crisp and yet the business does not move, you can usually trace the failure to one of five places. I have seen each of these issues burn quarters and sour teams that otherwise had the talent to win.
First, there is metric theater. Someone optimizes for the number that is easy to show, not the reality you need to change. Customer success teams celebrate average satisfaction while churn rises silently in a cohort. Growth teams trumpet a lower cost per lead, yet conversion quality drops, so new customers take twice as long to recover acquisition costs.
Second, there is the aggregation trap. An average compresses the story across segments where behavior diverges. At one client, a single country contributed 40 percent of sales, and its low margin dragged down the global average. We celebrated an uptick in “global” gross margin for two months before discovering that the lift came from that one country while every other region was deteriorating. Averages invite lies of omission.
Third, there is sampling vanity. A funnel looks smooth because the numerator and denominator do not share the same base. We compared a weekly trial-start cohort to a monthly activation event, then pretended that conversion had improved because the time windows overlapped. Once we aligned the cohorts, the truth surfaced fast and it was not rosy.
Fourth, there is false precision. Reporting to two decimals sells the illusion of control. In reality, the metric is noisy. A weekly NPS that swings from 58.2 to 62.1 feels like momentum. It might be only sampling variance. People make bets based on random noise, then lose confidence when the next swing reverses the story.
Finally, there is misaligned cadence. A KPI can be directionally right but timed wrong. If engineering capacity can only shift every quarter and your feature release cadence is monthly, then a weekly feature velocity KPI does not inform a decision the team can take. The mismatch breeds cynicism.
These failure modes are common, yet each can be prevented with consistent logic. That is where (un)Common Logic earns its name. The elements are not exotic. They are just applied with discipline.

The backbone: decision first, then measure
Good metrics trace to a decision. In workshops, I draw a line on a whiteboard. On the left, the decision you need to make at a predictable cadence. On the right, the action you will take when the signal crosses a threshold. Only when those two ends are clear do we fill in the measure, the method, and the math.
Here is a concrete example. A marketplace team discussed “take rate” at every meeting. It fluctuated between 12 and 14 percent from week to week. Heated debates followed. When we wrote the decision on the wall, the room fell quiet. The real decision was whether to adjust pricing for new categories each quarter. Weekly take rate did not inform that decision. Category-level contribution margin over a 6-week cohort did. We replaced the KPI. Arguments dropped, and the team made two pricing adjustments in a quarter that lifted contribution margin by 3 points without harming growth.
The discipline is simple. If you cannot name the decision and the associated action, the KPI is a vanity mirror. Park it in an appendix.
The (un)Common Logic framework at a glance
The framework rests on six linked parts: unit, boundary, cohort, lag, denominator, and countermeasure. The exact words matter less than the order. When you walk through them in sequence, ambiguity dies early.
Unit asks what precisely you are measuring. A booking, a session, a transaction, a user, a dollar. Vague units create slippery math. If you measure revenue per user, define user. Signed in this month, active in the last 28 days, or anyone in the database? A sales leader once insisted repeat rate had improved by 7 percent. It turned out “repeat” counted anyone who had ever bought twice in history, not those who bought twice in the relevant period. Once we fixed the unit, the trend reversed.
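To see how much the definition moves the math, here is a minimal Python sketch, assuming a hypothetical users table with a last_seen_at column; the revenue figure is invented. The same numerator over two definitions of “user” produces two very different KPIs.

```python
import pandas as pd

# Hypothetical users table: one row per user, with a last-activity timestamp.
users = pd.DataFrame({
    "user_id": [1, 2, 3, 4],
    "last_seen_at": pd.to_datetime(
        ["2024-05-30", "2024-04-02", "2024-05-20", "2023-11-15"]),
})
as_of = pd.Timestamp("2024-06-01")
monthly_revenue = 12_000  # illustrative numerator

# Two competing definitions of "user" -- each changes the denominator.
everyone = len(users)  # anyone in the database
active_28d = (users["last_seen_at"] >= as_of - pd.Timedelta(days=28)).sum()

print(f"revenue per user, all-time base:  {monthly_revenue / everyone:,.0f}")
print(f"revenue per user, 28-day active:  {monthly_revenue / active_28d:,.0f}")
```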
Boundary sets inclusion and exclusion. Good boundaries reveal real trade-offs. In a B2B SaaS business, do you include promotional credits in revenue? In retail, do you include staff purchases in average order value? Boundaries feel tedious. They are the difference between signal and mythology.
Cohort groups things by a shared start. Without a cohort, time-based KPIs lie. Marketing loves to report activation rates for all-time users. That hides the decay in newer cohorts and the drag of old behavior. Define cohorts by signup week, acquisition channel, product tier, or geography. Then track the KPI within cohort over a fixed time horizon. Cohorts turn fog into shape.
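As a minimal pandas sketch of that idea, with made-up signup and activation timestamps: group by signup week, and count an activation only if it lands inside a fixed window from each user’s own start.

```python
import pandas as pd

# Made-up signups; activated_at is NaT when the user never activated.
df = pd.DataFrame({
    "user_id": [1, 2, 3, 4, 5, 6],
    "signup_at": pd.to_datetime(["2024-01-01", "2024-01-03", "2024-01-09",
                                 "2024-01-10", "2024-01-17", "2024-01-18"]),
    "activated_at": pd.to_datetime(["2024-01-05", pd.NaT, "2024-01-20",
                                    pd.NaT, "2024-01-25", pd.NaT]),
})

HORIZON = pd.Timedelta(days=14)  # fixed observation window per cohort

df["cohort"] = df["signup_at"].dt.to_period("W")  # cohort = signup week
df["activated_in_window"] = (df["activated_at"] - df["signup_at"]) <= HORIZON

# Activation rate within cohort, over the same horizon for every cohort.
print(df.groupby("cohort")["activated_in_window"].mean())
```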
Lag names the honest delay between cause and effect. Many KPIs are acted on too fast or too slow because the lag is ignored. If you launch a pricing change, looking at same-week revenue per user is not sensible in a high-consideration purchase path. You need to observe over the period where customers actually make the decision. Ignore lag and you will oscillate, changing course right before the effect would have become visible.
Denominator choices are where most manipulation hides. Always write the denominator clearly. Conversion is not “orders divided by visitors,” it is “paid orders this week divided by unique visitors who arrived this week and viewed a product page at least once.” Longhand math keeps people honest.
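Written as code instead of prose, the longhand denominator might look like the sketch below; the event names and the tiny table are illustrative, not any particular analytics schema.

```python
import pandas as pd

# Illustrative one-week event log.
events = pd.DataFrame({
    "user_id": [1, 1, 2, 3, 3, 4],
    "event": ["visit", "product_view", "visit",
              "product_view", "paid_order", "visit"],
})

# Denominator, longhand: unique visitors this week who viewed a product
# page at least once. Users 2 and 4 visited but never viewed a product,
# so they are out of the base.
viewers = events.loc[events["event"] == "product_view", "user_id"].nunique()

# Numerator: paid orders this week.
paid_orders = (events["event"] == "paid_order").sum()

print(f"conversion = {paid_orders / viewers:.0%}")  # 1 / 2 = 50%
```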
Countermeasure is the piece most dashboards skip. If the KPI crosses a boundary, what happens? Who does it, by when, with which budget? A KPI without a named countermeasure is a score, not a control. Over time, teams that practice countermeasures build a playbook. They learn which actions move which levers at what cost, which is the point of measuring in the first place.
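One way to keep the six parts from being skipped is to write every KPI as a structured record before it earns a chart. Here is a sketch using only the Python standard library; the filled-in example paraphrases the marketplace pricing case from earlier, so the field values are illustrative.

```python
from dataclasses import dataclass

@dataclass
class KpiSpec:
    """One KPI, written out in (un)Common Logic order."""
    name: str
    unit: str            # what exactly one observation is
    boundary: str        # explicit inclusions and exclusions
    cohort: str          # the shared start that groups observations
    lag: str             # honest delay between cause and effect
    denominator: str     # the longhand base of the ratio
    countermeasure: str  # who does what, by when, if a threshold is crossed

take_rate_replacement = KpiSpec(
    name="Category-level contribution margin",
    unit="Completed transaction, net of refunds",
    boundary="Excludes promotional credits and internal test accounts",
    cohort="Transactions grouped by category and 6-week cohort",
    lag="One full cohort observed before judging a pricing change",
    denominator="Net revenue for the category over the same 6-week cohort",
    countermeasure="Pricing owner adjusts category fees at the quarterly review",
)
```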
A short field guide: five tests for a useful KPI
Use this quick set of tests when someone proposes or revamps a KPI. If it passes all five, it likely belongs on a leadership dashboard.
- Decision anchored: There is a named decision and a default action if the KPI crosses a threshold.
- Unit and denominator explicit: No one can confuse the math by switching definitions midstream.
- Cohort aware: Trends are shown by cohort where relevant, not “all time” blends.
- Lag matched to the lever: Observation windows fit the underlying behavior, so you are not chasing noise.
- Countermeasure owned: A specific team is on the hook to respond within an agreed window.
I have walked into companies with 50 metrics on a page and used only these tests to cut the set to 12. Within two cycles, meetings ended 20 minutes earlier and more decisions stuck.
From scorekeeping to steering: aligning cadence and control
An elegant KPI can still fail if the operating rhythm does not support it. The cadence of review must match the cadence of control. A pricing team deciding quarterly should not spend precious weekly time rehashing lagging indicators they cannot influence before quarter end. Instead, they should watch early indicators that actually respond within one or two weeks, like win rate in price-sensitive segments or competitive quote deltas gathered by sales.
Conversely, a security team cannot wait a month to act on intrusion indicators. Their cadences need hours, not weeks. Put the weekly KPI on a wall for pattern recognition, but route the controls through real-time monitoring and on-call practices. One fintech client used to escalate every anomaly to a weekly steering committee, which meant obvious fraud patterns ran for days. We rewired the rhythm so that on-call analysts had thresholds and authority to act within minutes. The weekly KPI moved from incident count to mean time to containment by category, which allowed the committee to invest in preventative fixes rather than adjudicate live fires. The difference was visible in the next quarter’s loss rate.
The general rule is straightforward. If a KPI cannot trigger a decision at the cadence of review, treat it as context, not a primary control.
Avoiding the averages that lie
There is a reason statisticians wince at global averages. They collapse structure, then people project stories onto what is left. Regional mixes, channel contributions, customer profiles, purchase cycles, and seasonality all push averages around. In one retail network, conversion rate by store bounced between 12 and 20 percent. The global average looked like it improved when a few small high converting stores had a promotion. Leadership celebrated a new training program. It was noise.
To counter this, guardrails matter. Every KPI should have at least two relevant cuts: one by a structural factor like region, segment, or tier, and one by temporal cohort. Keep the number of cuts manageable, but do not rely on a single global line. Use a small multiples view that shows the same axis and range for each segment, so people see relative movement without the eye being tricked by scale shifts.
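A minimal matplotlib sketch of that small multiples view, with synthetic data and invented segment names: every panel shares axes and a fixed range, so the eye compares movement, not scale.

```python
import matplotlib.pyplot as plt
import numpy as np

rng = np.random.default_rng(0)
weeks = np.arange(26)
segments = ["North", "South", "East", "West"]  # invented structural cuts

fig, axes = plt.subplots(1, len(segments), sharex=True, sharey=True,
                         figsize=(12, 3))
for ax, seg in zip(axes, segments):
    # Synthetic conversion series: a noisy random walk around 14 percent.
    series = 0.14 + 0.002 * rng.standard_normal(len(weeks)).cumsum()
    ax.plot(weeks, series)
    ax.set_title(seg)
    ax.set_ylim(0.05, 0.25)  # identical range on every panel, on purpose
axes[0].set_ylabel("conversion rate")
fig.suptitle("Same metric, same scale, one panel per segment")
plt.tight_layout()
plt.show()
```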
When you must report a single number, report a weighted measure that matches the decision. For example, report cost per acquired dollar of gross profit rather than cost per acquired customer when contribution varies widely. Weighted measures align incentives, and in practice they quiet silly debates.
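A toy calculation makes the difference visible; the spend and profit numbers are invented. Two channels with identical cost per customer can differ fivefold on cost per dollar of gross profit.

```python
# channel: (spend, customers acquired, gross profit they generated)
channels = {
    "search":  (10_000, 100, 40_000),
    "display": (10_000, 100, 8_000),
}

for name, (spend, customers, gross_profit) in channels.items():
    cpa = spend / customers      # cost per acquired customer: same for both
    cpgp = spend / gross_profit  # cost per acquired dollar of gross profit
    print(f"{name}: CPA ${cpa:.0f}, cost per GP dollar ${cpgp:.2f}")

# search:  CPA $100, cost per GP dollar $0.25
# display: CPA $100, cost per GP dollar $1.25
```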
Handling lag, noise, and the twitchy executive
No one I know likes sitting on their hands while a chart drifts sideways. The instinct to tweak something early is human. This is where the discipline of lag and noise must be anchored in math, not faith. Two simple devices help.
First, pre-commit to observation windows. If you know that a pricing change takes 6 to 8 weeks to propagate because of billing cycles and the evaluation period, write that down and publish it when you lock the decision. During that window, you will monitor two early indicators for smoke - for instance, win rate on highly price-sensitive deals and anecdotal objections logged in the CRM - but you will not reverse course based on week-two revenue per account.
Second, use control limits, not trend lines. If a metric is noisy, calculate its expected variance and plot control limits that reflect normal fluctuation. Only act when the observation escapes those limits. The math is elementary and, once done, protects teams from overreacting. I watched a support org ride a weekly “tickets reopened” rate like a roller coaster. We ran a simple variance model, set control limits, and discovered that half of the last year’s “urgent fixes” were reactions to normal noise.
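For a rate like tickets reopened, a classic p-chart is one straightforward way to do this: center the chart on the pooled rate and set limits three standard errors out, scaled to each week’s volume. A sketch with made-up counts; note that every week below lands inside the limits, which is exactly the normal noise that support org had been reacting to.

```python
import numpy as np

# Made-up weekly counts: tickets reopened out of tickets handled.
reopened = np.array([12, 15, 9, 14, 11, 18, 10, 13])
handled = np.array([200, 210, 190, 205, 198, 220, 195, 202])

rates = reopened / handled
p_bar = reopened.sum() / handled.sum()  # center line: pooled rate

# p-chart limits: p_bar +/- 3 * sqrt(p_bar * (1 - p_bar) / n) per week.
sigma = np.sqrt(p_bar * (1 - p_bar) / handled)
upper = p_bar + 3 * sigma
lower = np.clip(p_bar - 3 * sigma, 0, None)

for week, (rate, lo, hi) in enumerate(zip(rates, lower, upper), start=1):
    status = "ACT" if (rate < lo or rate > hi) else "ok"
    print(f"week {week}: {rate:.3f}  limits [{lo:.3f}, {hi:.3f}]  {status}")
```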
Executives often worry that control limits slow them down. In practice, they do the opposite. They free attention for the places where the signal is real and urgent. The twitchiness fades when leaders see that measured patience compounds.
The messy middle: when KPIs create perverse incentives
Every KPI shapes behavior. That is a feature, not a bug, as long as you anticipate the edge cases. Two patterns repeat across companies.
One, target myopia. Sales teams push for lower prices near the end of the quarter to hit a revenue KPI, eroding long-term margin. If your KPI is pure revenue, expect this behavior. A better design brings in quality - gross profit or lifetime value - and pairs it with a bounded incentive. For example, allow tactical discounting within a tier that protects contribution margin. Track the percent of deals with emergency discounts as a health KPI, and cap commission accelerators when that percent crosses a threshold for two cycles.
Two, gaming via classification. Support teams reclassify tickets to meet a time to resolution target. Product teams hide work in discovery columns to keep cycle time low. To counter this, do two things. Use companion KPIs that make the cost of gaming visible, like customer reopened rate or age of backlog items. Then run spot audits with shared definitions. People respect measures that stand up under scrutiny and resist measures that feel like traps.
Incentives should fit the leverage a team holds. If a team cannot materially move a KPI within their span of control, do not tether compensation to it. Instead, make the KPI visible as context and tie rewards to the specific actions that, over time, drive that KPI.
A practical walk through: designing one KPI end to end
Let’s design an onboarding activation KPI for a subscription product. The goal is to help the growth and product teams steer weekly and decide quarterly. Here is how a team I worked with built it.
We began with the decision. Each quarter, the team decides where to invest onboarding effort across three flows. Each week, they decide whether to run an experiment that interrupts the default path. The action thresholds are investment allocation by flow and a temporary fail fast switch if an experiment harms activation by more than a set amount.
We defined the unit as “new account created by a unique email that confirms via link.” We excluded test accounts that use internal domains, and we flagged refund accounts that still complete activation so we could study their patterns separately.
Cohorts were weekly by signup date. For each cohort, we measured activation as “completed at least one core action” within 14 days. That window matched observed behavior, with 80 percent of active users completing within 10 days and the tail finishing within 14. This avoids comparing week one signups to week four activations, which confuses the base.
We chose the denominator as “signups in cohort that completed email confirmation.” Raw signups included a nontrivial bot element. Including confirmation did two things. It cleaned the base and made the countermeasures clearer: if confirmation rate dipped, a separate action was needed before we judged onboarding.
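Pulling the unit, boundary, cohort, and denominator together, the weekly computation might look like the sketch below. The column names and the internal domain are hypothetical, but the filters mirror the definitions above: confirmed signups only, test accounts out, activation judged within 14 days of each user’s own signup.

```python
import pandas as pd

# Hypothetical signups table; first_core_action_at is NaT if never activated.
signups = pd.DataFrame({
    "email": ["a@x.com", "b@x.com", "qa@internal.co", "c@x.com"],
    "signup_at": pd.to_datetime(
        ["2024-03-04", "2024-03-05", "2024-03-05", "2024-03-06"]),
    "confirmed": [True, True, True, False],
    "first_core_action_at": pd.to_datetime(
        ["2024-03-10", pd.NaT, "2024-03-06", "2024-03-07"]),
})

WINDOW = pd.Timedelta(days=14)

base = signups[
    signups["confirmed"]                             # denominator: confirmed only
    & ~signups["email"].str.endswith("internal.co")  # boundary: drop test accounts
].copy()

base["cohort"] = base["signup_at"].dt.to_period("W")  # weekly cohort by signup date
base["activated"] = (base["first_core_action_at"] - base["signup_at"]) <= WINDOW

print(base.groupby("cohort")["activated"].mean())     # activation rate per cohort
```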
We documented the lag explicitly. Kicking off an onboarding experiment on Monday would meaningfully show up in cohort activation by Friday of the following week, with most of the effect visible by day 10. We agreed not to kill experiments before that, unless the early smoke indicators lit up - error rates in the flow or sharp spikes in abandon at known friction points.
Countermeasures were written before launch. If activation rate dropped below the lower control limit for two consecutive cohorts, the experiment would be rolled back within 48 hours and the previous best flow restored. If activation rate improved above the upper control limit for three cohorts, the candidate became the default and the team invested in polishing the edges. Ownership sat with the PM and the engineering lead, with a named data partner responsible for weekly reporting.
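Those rules are mechanical enough to express directly. A sketch with illustrative limits and history; the function names are mine, not the team’s.

```python
def should_roll_back(cohort_rates: list[float], lower_limit: float) -> bool:
    """True when the two most recent cohorts both fall below the lower limit."""
    recent = cohort_rates[-2:]
    return len(recent) == 2 and all(r < lower_limit for r in recent)

def should_promote(cohort_rates: list[float], upper_limit: float) -> bool:
    """True when the three most recent cohorts all clear the upper limit."""
    recent = cohort_rates[-3:]
    return len(recent) == 3 and all(r > upper_limit for r in recent)

history = [0.41, 0.39, 0.33, 0.31]  # weekly cohort activation rates
print(should_roll_back(history, lower_limit=0.35))  # True -> revert within 48 hours
print(should_promote(history, upper_limit=0.45))    # False -> keep observing
```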
The KPI did not live alone. We tracked two companions that helped us see side effects: early retention at week four and ticket volume per new cohort. In one case, an experiment pushed activation sharply up by encouraging users to connect a bank account early. Ticket volume doubled, and week four retention cratered because the connection failed for a specific bank. Without the companions, the activation KPI would have misled us. With them, we reverted in a day and targeted the fix to the failing integration.
This kind of measured design sounds heavy, yet the team built the muscle quickly. After a month, the process ran in under two hours per week. More to the point, activation rose from 38 to 49 percent over two quarters, and week eight retention improved by 6 points, which showed up in revenue within a quarter.
How to retire a KPI without drama
Metrics stick around long after their utility fades. No one wants to be the person who deletes a chart that a VP used two jobs ago. Untended, dashboards swell and signal declines. The solution is to build a graceful off ramp.
Every quarter, run a short review of the KPI set. For each KPI, ask whether the decision it anchors still exists and whether the countermeasure remains valid. If not, mark it as archive pending. Then run it for another cycle only as context in the appendix. If no one uses it to make a decision, remove it. If objections surface, require a sponsor to reintroduce it with a decision and countermeasure written down. This small governance ritual keeps the dashboard lean and the culture honest.
A practical example: a logistics team tracked “dock to stock” time as a top KPI long after they had invested heavily in process and automation. The number bounced inside a tight control band for a year. The real bottleneck had shifted to supplier readiness. We archived dock to stock, promoted “supplier ASN correctness at receipt” to the main dashboard, and saw a 30 percent reduction in receiving exceptions in the next quarter.
When a KPI should be a narrative instead
Numbers persuade, but some truths hide in narrative. Customer trust, team cohesion, brand sentiment in a niche, partner confidence in roadmap predictability - all can be proxied with numbers, and all benefit from short, disciplined narrative. I ask leaders to carry one narrative KPI per quarter. It is a one page memo with three sections: what we are observing, what we think the causes are, and what we will do. The write up is published with the rest of the KPIs, discussed briefly, and revisited in the next cycle. These narrative KPIs prevent the error of false quantification and keep textured realities in the room.
For example, a payments team noticed a rise in “abandon at 3DS challenge.” The numbers told us where and how much. They did not tell us why. A narrative KPI captured user stories from support tickets, partner feedback from the issuer network, and browser level quirks. The resulting countermeasure was not a feature tweak, it was a cross party workshop with two issuers that changed their challenge flow. Abandon fell by half in the following month. No dashboard would have produced that on its own.
A compact workflow for building KPIs that work
When you need to stand up or refactor a KPI set fast, follow this short sequence. It is the tactical spine of the (un)Common Logic framework.
- Write the decision and the default action first, in plain language.
- Define unit, boundary, cohort, lag, denominator, and countermeasure.
- Choose companions that reveal side effects and gameability.
- Set control limits and observation windows, then freeze them for a full cycle.
- Assign ownership for action, with response times and budget clarity.
If the process feels deliberate, that is by design. Speed comes from clarity on the front end, not from skimming the steps.
Case notes from the field
A B2B platform saw net revenue retention slide from 116 percent to 103 percent over six months. Three teams owned pieces of the puzzle. Sales chased gross adds. Success chased logo retention. Product chased feature adoption. We reframed the KPI to expansion per retained customer by cohort, with a 90 day lag window and a clean denominator that excluded upsells tied to mandatory migration. Once the math was clear, we ran a three month focus on two actions: packaging a new add on for a specific tier and arming success with a simple playbook at day 45. Expansion climbed back to 9 percent within two quarters. The old KPI had been technically correct and strategically useless.
In a consumer subscription app, weekly active users became a folk religion. The graph moved, but no one agreed why. We designed a north star that better matched the value story: weekly completions of a core outcome per user. It fell at first, which sparked anxiety. Then the product team killed three surface level notifications that inflated opening behavior and added friction to remind users to finish the outcome. Revenue trailed by a month, then grew steadily. The new KPI forced the team to confront what the product was for, not just how often it blinked.
A hardware operations team spent months debating forecast accuracy. It hovered around 60 percent at the SKU level, which made everyone feel incompetent. The reality was simple. Lead times varied by supplier from 6 to 22 weeks and demand was lumpy. We swapped to a KPI that better matched controllable levers: percent of stockouts prevented by safety stock rule compliance in the 8 to 12 week horizon. Accuracy remained a useful analytic, but it stopped being a whipping post. Stockouts dropped by 35 percent after we found two rules that were routinely ignored.

What makes this approach feel different
The name (un)Common Logic carries a small joke. None of the parts are novel. The difference is applying them without exception, even under pressure. The habit of naming the decision first is uncomfortable for teams used to decorating decks. The insistence on cohort math feels pedantic to those who have skated by on big numbers. The idea of writing countermeasures before launch looks like bureaucracy until a crisis hits and the action happens in hours instead of days.
The reward is a culture where dashboards are not theater. Meetings get shorter. Debates get sharper and kinder, because they are grounded in shared definitions. New hires find their footing faster. And leaders can spend more energy on choices that shift the trajectory, not on generating plausible stories about noise.
If you adopt one practice this quarter, make it the unit, boundary, cohort, lag, denominator, countermeasure walk through for each KPI on your main page. Do it once, in writing, and publish it next to the chart. Disagreements will surface early. That is healthy. From there, the rest of the framework clicks into place with practice.
The point of measurement is not to admire performance. It is to change it. With coherent logic and a few sturdy habits, KPIs stop being wallpaper and start being levers. That is the heart of (un)Common Logic.