It is the QBR for your top three distributors. The CFO has flown in from corporate. The VP of Sales is presenting deck slides he has clearly rehearsed. Slide nine comes up. The headline reads, in confident sans-serif: "Distributor X — strong performer, growing 8% YoY." There is a green arrow. There is a smiling stock photo of a warehouse worker. There is a logo.
Halfway through the slide, the CFO leans forward. "Strong on what? Compared to whom? Are they actually growing the end market, or just buying inventory ahead of the price increase you announced in March?"
The room gets quiet. The VP of Sales does not have a clean answer. He has a relationship with this distributor. He has been to their sales kickoff. He knows the owner's kids' names. But he cannot defend the rating with numbers because the numbers he has are sell-in numbers — what the distributor purchased from him — and the CFO is asking about sell-out, which is what the distributor sold to the end market.
Every senior channel executive has been in that room. The relationship is real. The number is hollow. And the CFO knows it.
The fundamental measurement problem
Most manufacturers measure sell-in. It is the easiest thing in the world to measure because you have the data — it is your own invoice file. You know exactly what each distributor bought from you, when they paid, and how the trend looks quarter over quarter.
The problem is that sell-in tells you almost nothing about whether the distributor is healthy or whether they are actually growing your business in their territory. A distributor can post a 12% sell-in growth number for a quarter while their actual end-customer demand is shrinking by 5%. They are simply buying ahead — loading inventory before a price increase, taking advantage of a stocking program, or pulling forward demand to hit an annual rebate threshold. Three months later, the bullwhip arrives. Their orders go to zero. You scramble to explain to the board why the channel "softened."
The distributors that look the best on a sell-in dashboard are often the ones that are quietly going underwater. Their warehouse fills up. Their cash gets tied up in your inventory. Their willingness to take a stocking position on the new product launch evaporates. By the time you notice, you have lost a year.
Real performance requires sell-out visibility — sometimes called sell-through. You need to know, at a SKU level, what the distributor actually shipped to end customers in the last 30 days. You get this through a channel inventory and POS feed, contractually required as part of the distributor agreement, ingested into your portal, and reconciled against the sell-in record. Without it, you are flying on instruments you cannot read.
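Once the POS feed is in place, the reconciliation itself is mechanical. A minimal Python sketch of the sell-through ratio check, with a hypothetical `QuarterVolumes` record and a threshold chosen for illustration rather than taken from any real program:

```python
from dataclasses import dataclass

@dataclass
class QuarterVolumes:
    sell_in_units: int   # units the distributor bought from you (your invoice file)
    sell_out_units: int  # units the distributor shipped to end customers (POS feed)

def sell_through_ratio(q: QuarterVolumes) -> float:
    """Sell-out divided by sell-in. Sustained values well below 1.0
    mean inventory is piling up in the distributor's warehouse."""
    return q.sell_out_units / q.sell_in_units

def looks_like_buy_ahead(q: QuarterVolumes, threshold: float = 0.85) -> bool:
    # A single low quarter can be a legitimate stocking position;
    # this flags the quarter for review rather than proving loading.
    return sell_through_ratio(q) < threshold

q = QuarterVolumes(sell_in_units=11_200, sell_out_units=8_900)
print(round(sell_through_ratio(q), 2))  # 0.79
print(looks_like_buy_ahead(q))          # True
```

The point of the flag is to trigger a conversation, not a verdict: a ratio of 0.79 ahead of a price increase is expected, while the same ratio for three straight quarters is a warehouse filling up.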
The metrics that matter
A real distributor scorecard has nine metrics. Not fifteen. Not three. Nine, because each one tells you something the others cannot, and any fewer leaves a blind spot.
| Metric | Definition | Target range | What it tells you |
|---|---|---|---|
| Sell-through velocity | Units sold to end customers per month, by SKU, by distributor | Tier-A: top quartile of peer set | The single most important number. End-market demand. |
| OTIF (you to them) | Manufacturer's on-time, in-full delivery rate to the distributor | 95%+ | Your performance to the distributor. You owe them this. |
| Line fill rate | % of order lines shipped complete on first attempt | 97%+ | SKU-level fulfillment quality |
| Case fill rate | % of cases shipped against cases ordered | 98%+ | Volume-level fulfillment quality |
| Payment health (DSO) | Days sales outstanding by distributor | Net-30 terms: DSO under 35 | Financial health. Early-warning indicator. |
| Returns rate | RMA dollars / sales dollars, broken out by root cause | Under 2% total; under 0.5% defect | Product quality and ordering accuracy |
| MDF utilization | Accrued vs claimed vs spent, in dollars | 70%–90% claim rate | Whether they are actually marketing your product |
| Training certification | % of distributor's sales reps current on your product training | 80%+ for tier-A partners | Whether they can sell what you make |
| Deal-reg win rate | % of registered deals that close within 90 days | 35%–50% | Quality of pipeline and rep effectiveness |
A few of these need a closer look.
Sell-through velocity is the metric. Everything else is supporting evidence. Velocity is what proves the distributor is moving the product into the end market and not just stocking up. Measure it weekly, trend it 13 weeks rolling, and benchmark it against peer-tier distributors in similar territories. A tier-A distributor in the Midwest selling 220 units per month of a SKU when the tier-A median is 340 has a problem — possibly an outside-rep coverage gap, possibly a competitive issue, possibly a pricing problem. You will not know which until you ask, but the number tells you to ask.
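The weekly measurement and the peer benchmark can both be expressed in a few lines. This is an illustrative Python sketch, not a production implementation; the function names and the peer numbers are hypothetical, though the 220-versus-340 gap is the example from the paragraph above:

```python
from statistics import median

def rolling_velocity(weekly_units: list[int], window: int = 13) -> list[float]:
    """Trailing average weekly sell-out over a rolling 13-week window."""
    return [
        sum(weekly_units[i - window + 1 : i + 1]) / window
        for i in range(window - 1, len(weekly_units))
    ]

def benchmark_gap(distributor_monthly: float, peer_monthlies: list[float]) -> float:
    """How far the distributor sits from the tier median, as a fraction."""
    med = median(peer_monthlies)
    return (distributor_monthly - med) / med

# 220 units/month against a hypothetical tier-A peer set with a 340 median:
print(round(benchmark_gap(220, [310, 340, 360, 295, 340]), 2))  # -0.35
```

A 35% gap below the tier median is the number that tells you to ask the question; the answer still comes from the field.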
OTIF is the metric you owe them. OTIF measures how well you delivered to them — on time, in full, no shortages, no substitutions, no late ASNs. If your OTIF to a distributor is sitting at 84%, you do not get to complain about their sell-through. You broke their forecast. Their counter sales rep promised a customer a Tuesday delivery and you missed it. They had to source the unit from a competitor. The customer learned the competitor's product also works. You lost the next reorder. OTIF is in the scorecard because the relationship is bidirectional and the scorecard has to show that.
Line fill versus case fill tell different stories. A 98% case fill with an 88% line fill means you are getting most of the volume out the door but missing scattered SKUs — usually low-velocity items that are not stocked deep enough. The distributor's counter team feels every line miss because every line miss is a customer phone call. Track both.
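The 98%-versus-88% split above falls out of the arithmetic directly. A small illustrative example (the order sizes are made up) showing how cutting a handful of low-velocity lines barely dents case fill while wrecking line fill:

```python
def line_fill(lines_shipped_complete: int, lines_ordered: int) -> float:
    """Share of order lines shipped complete on the first attempt."""
    return lines_shipped_complete / lines_ordered

def case_fill(cases_shipped: int, cases_ordered: int) -> float:
    """Share of total cases shipped against cases ordered."""
    return cases_shipped / cases_ordered

# A 50-line, 400-case order where 6 low-velocity lines (8 cases total) are cut:
print(round(line_fill(44, 50), 2))    # 0.88
print(round(case_fill(392, 400), 2))  # 0.98
```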
Returns rate has to be split by root cause. RMA dollars rolled up into a single number hide everything. Break it into: defective product, wrong item shipped, ordering error (distributor's mistake), customer remorse, end-of-life sell-back. The first two are on you. The third is on them. The last two are noise. Without the breakout, every quarter is a finger-pointing exercise.
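The breakout is a grouping exercise. A hedged Python sketch, assuming hypothetical root-cause codes (your RMA system's codes will differ):

```python
from collections import defaultdict

# Hypothetical codes; "defective" and "wrong_item_shipped" are on the manufacturer.
MANUFACTURER_FAULT = {"defective", "wrong_item_shipped"}

def returns_breakout(rmas: list[tuple[str, float]], sales_dollars: float) -> dict[str, float]:
    """RMA dollars grouped by root cause, expressed as a share of sales."""
    by_cause: dict[str, float] = defaultdict(float)
    for cause, dollars in rmas:
        by_cause[cause] += dollars
    return {cause: dollars / sales_dollars for cause, dollars in by_cause.items()}

rmas = [("defective", 4000), ("wrong_item_shipped", 1000), ("ordering_error", 6000)]
rates = returns_breakout(rmas, sales_dollars=1_000_000)
manufacturer_rate = sum(r for c, r in rates.items() if c in MANUFACTURER_FAULT)
print(round(manufacturer_rate, 3))  # 0.005
```

With the breakout, a 1.1% headline returns rate resolves into a 0.5% manufacturer problem and a 0.6% distributor ordering problem, and the QBR conversation changes accordingly.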
MDF utilization is a three-part number. Accrued is what they earned based on purchases. Claimed is what they actually submitted paperwork for. Spent is what cleared after audit. A distributor accruing $180,000 in MDF and claiming only $40,000 is leaving money on the table — and almost certainly not running the co-marketing programs they promised in the partner plan. The full mechanics of how this works through the portal are covered in the deal registration, lead routing, and MDF claims breakdown.
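The claim-rate check against the 70%–90% target band from the table can be sketched in a few lines of Python. The function names and the flag wording are illustrative; the $180,000/$40,000 figures are the example from the paragraph above:

```python
def mdf_claim_rate(accrued: float, claimed: float) -> float:
    """Claimed dollars as a share of accrued dollars."""
    return claimed / accrued if accrued else 0.0

def mdf_flag(accrued: float, claimed: float,
             floor: float = 0.70, ceiling: float = 0.90) -> str:
    # Target band from the scorecard: 70%-90% claim rate.
    rate = mdf_claim_rate(accrued, claimed)
    if rate < floor:
        return "under-claiming: funds earned, co-marketing likely not happening"
    if rate > ceiling:
        return "near ceiling: accrual rate may be set too low"
    return "healthy"

print(round(mdf_claim_rate(180_000, 40_000), 2))  # 0.22
print(mdf_flag(180_000, 40_000))
```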
Training certification matters more than most manufacturers admit. If 30% of a distributor's outside reps have not completed the product training in the last 12 months, those reps are selling the version they remember from three product cycles ago. They are positioning your product against a competitor that no longer exists. The fix is not pleading with the distributor to "encourage their team to take the training." The fix is making certification a tier criterion, exposing the percentage in the scorecard, and showing the distributor where they stand against peers. A separate piece on reframing field reps as channel coaches walks through how to operationalize this.
How the scorecard surfaces in the portal
Two views. They are not the same.
The distributor-facing view shows the distributor their own numbers, with a tier benchmark overlay. They see their sell-through velocity, their OTIF (received from you), their MDF utilization, their training certification status. They see their position within their tier — "you are 3rd of 12 in your tier on sell-through velocity, 9th of 12 on training certification." Anonymized. No competitor distributor names. Just rank.
The manufacturer-facing view shows you all distributors at once, sortable by any column, filterable by tier, region, product line. You can rank tier-A distributors in the Northeast by sell-through velocity. You can find the bottom three tier-B distributors on training certification. You can pull the list of distributors with DSO over 50 days for the credit team's collections call. The view is the operating system for channel management. Channel directors live in it.
The portal is the source. The portal pulls sell-in from your ERP, sell-out from the distributor's POS or channel inventory feed, OTIF from your shipping system, AR aging from finance, MDF data from the partner program module, training data from the LMS, and deal-reg data from the CRM. Pulling those into one view is the job of the distributor network portal — not the job of a CSV macro that breaks every time a sales analyst leaves the company.
Sharing the scorecard back
The cadence is monthly. Not quarterly. Quarterly is too slow to course-correct, and the quarterly review becomes a tribunal instead of a working meeting.
The mechanics are simple. On the first business day of the month, the portal generates a PDF scorecard for each distributor and emails it to the principal owner, the sales manager, and the assigned channel manager from your side. The PDF has a deep link back into the portal. The email subject line is the distributor's tier rank: "April scorecard — you are 3rd of 12 in tier-A Midwest." People open emails about their rank.
The tier-comparison view does most of the work. Telling a distributor "your sell-through is 220 units per month" is information. Telling them "your sell-through is 220 units per month and the tier median is 340" is a conversation. Telling them "you are 9th of 12 on training certification at 54%, the tier median is 78%" is an action item their own team will surface to them.
The format matters. Anonymize the peer set. Show ranks, not names. Show direction of travel — "up two ranks since last month." Avoid commentary in the PDF; let the numbers speak. Save the commentary for the QBR.
The shift this drives is not punitive. It is accountability without humiliation. Distributors are competitive by nature. The scorecard taps that competitiveness without making the manufacturer the bad guy.
Tier promotion and demotion
Tiers cannot be a popularity contest. If they are, the entire channel knows it within two cycles, and the program loses credibility.
The criteria are public. The criteria are based on rolling 12-month numbers. The criteria are reviewed annually, not in the middle of a year when one partner happens to have a bad quarter.
Promotion to tier-A typically requires: top quartile on sell-through velocity in the partner's tier, 100% on-time payment record over the trailing 12 months, zero unresolved MAP violations, 80%+ training certification across the rep team, MDF claim rate above 70%, and active deal-reg participation with a win rate above the program median. A partner who hits all of those is genuinely earning the tier-A economics. A partner who hits four out of six is a candidate for tier-A on the next annual review if they close the gaps.
Demotion from tier-A is the harder one to do, and the one most manufacturers avoid. The triggers are: bottom quartile on sell-through velocity for two consecutive halves, DSO consistently above the policy threshold, two or more substantiated MAP violations in 12 months, training certification below 50%, or an MDF claim rate below 30% (which usually means they are accruing funds and not actually marketing). Demotion does not mean termination. It means the partner moves to tier-B economics — lower discount, less MDF, no co-branded campaigns — until the underlying issue is resolved. Some partners self-correct in two quarters. Some do not, and the tier becomes the path to a clean exit.
Make the criteria public to all distributors at the start of the program year. The complaints disappear when everyone knows the rules. The complaints multiply when the rules feel arbitrary.
A sample scorecard layout
What the distributor actually sees in the portal looks roughly like this:
| Section | Field | This month | Trailing 12 mo | Tier benchmark | Rank |
|---|---|---|---|---|---|
| Volume | Sell-in (units) | 4,820 | 56,400 | 52,300 (median) | 4 of 12 |
| Volume | Sell-out (units) | 4,610 | 54,100 | 51,800 (median) | 5 of 12 |
| Volume | Sell-through ratio | 0.96 | 0.96 | 0.99 (median) | 8 of 12 |
| Service to distributor | OTIF (we shipped to you) | 94.2% | 95.8% | n/a (mfr KPI) | — |
| Service to distributor | Line fill rate | 96.1% | 97.4% | n/a | — |
| Financial | DSO | 32 days | 34 days | 31 days (median) | 7 of 12 |
| Financial | On-time payment rate | 100% | 98% | 99% (median) | 5 of 12 |
| Quality | Returns rate (defect) | 0.4% | 0.5% | 0.3% (median) | 9 of 12 |
| Quality | Returns rate (other) | 1.2% | 1.4% | 1.1% (median) | 8 of 12 |
| Marketing | MDF accrued | $14,200 | $168,000 | $152,000 (median) | 4 of 12 |
| Marketing | MDF claim rate | 76% | 71% | 68% (median) | 5 of 12 |
| Enablement | Training cert (% reps current) | 62% | — | 78% (median) | 9 of 12 |
| Pipeline | Deal-reg submitted | 11 | 124 | 98 (median) | 4 of 12 |
| Pipeline | Deal-reg win rate | 38% | 42% | 39% (median) | 6 of 12 |
The distributor sees this every month. They see their own trend. They see where they sit against peers. They see the five areas where they rank in the back half of the tier. And they know, because the criteria are published, that two of those areas, sell-through and training certification, are tier-A retention triggers.
The hard part — actually using the data
Here is the insight every channel executive learns and most channel programs ignore. Manufacturers measure. They do not act.
The scorecards get built. The portal gets stood up. The PDFs go out monthly for two quarters. And then nothing happens. The bottom-tier distributor stays in tier-A because the VP of Sales has known the owner for fifteen years. The MAP violator stays in the program because their CFO is golf buddies with your CRO. The distributor with 42% training certification keeps getting referred to as "one of our strongest partners" in board decks because they have one big customer that loves them.
The data exists. The decisions do not.
The fix is a decision rights matrix that is written down, signed by the executive committee, and stapled to the channel program governance document. It looks something like this:
| Decision | Trigger evidence required | Authority |
|---|---|---|
| Move from tier-B to tier-A | 12-month rolling data showing all promotion criteria met | VP Channel + VP Sales |
| Move from tier-A to tier-B (demotion) | Two consecutive halves below threshold + documented improvement plan that failed | VP Channel + CRO sign-off |
| Pause MDF accrual | Documented MDF misuse OR claim rate below 30% for 12 months | Director, Channel Programs |
| Pause deal-reg participation | Documented MAP violation OR registered deals being abandoned to direct competitor | Director, Channel Programs |
| Terminate distributor agreement | Material breach OR sustained tier-B-floor performance with failed cure period | CRO + General Counsel + CEO awareness |
| Recover unspent MDF | Annual program-end true-up showing accrued > claimed | Finance, automated |
The matrix matters because it removes the personal politics. The VP of Sales does not have to be the person who tells his fifteen-year relationship that they are being demoted — the matrix is. The CRO does not have to litigate every termination case — the threshold and the cure period either were met or they were not. Decisions become procedural. Relationships survive procedural decisions in a way they do not survive personal ones.
The other thing that makes decisions actually happen is putting the action review on the calendar. Once a quarter, the channel committee meets specifically to review the tier-change candidate list generated by the portal. The portal flags every distributor that crossed a promotion or demotion threshold based on rolling 12-month data. The committee reviews each one, makes a call, and the call is logged in the partner record. Not "we'll discuss this next quarter." A call. Yes, no, defer with documented reason.
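The "a call is logged, not deferred without reason" rule is easy to enforce in software. A hedged sketch of what the logged record might look like; the field names and the `log_decision` helper are hypothetical, but the rule it enforces is the one above:

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class TierDecision:
    distributor: str
    trigger: str          # which matrix threshold was crossed
    call: str             # "promote", "demote", or "defer"
    reason: str           # mandatory documentation for a deferral
    decided_on: date = field(default_factory=date.today)

def log_decision(ledger: list[TierDecision], decision: TierDecision) -> None:
    # "Defer with documented reason" is the only acceptable non-decision.
    if decision.call == "defer" and not decision.reason:
        raise ValueError("defer requires a documented reason")
    ledger.append(decision)

ledger: list[TierDecision] = []
log_decision(ledger, TierDecision(
    "Distributor X", "two halves bottom-quartile velocity",
    "demote", "improvement plan failed"))
print(len(ledger))  # 1
```

The ledger is the audit trail: when a partner asks why they were demoted, the answer is a dated record citing a published trigger, not a recollection of a meeting.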
Operations leaders running on managed channel programs tend to hold this discipline because the cadence is built into the engagement. Operations leaders running on internal teams alone tend to drift. Either model works. The one that does not work is the one where the matrix exists in a folder and nobody opens it.
What changes when the scorecard is real
The QBR scene at the top of this article does not happen anymore. The CFO walks in. The VP of Sales puts up the scorecard. The distributor's tier rank, sell-through trend, OTIF received, MDF utilization, and training status are on a single page. The CFO asks the same question. The VP of Sales has the answer in three sentences. The distributor relationship is still a relationship — but it is now a relationship with a number under it.
That is what a scorecard is for. Not to replace judgment. To give judgment something to stand on.
See how OpsMind surfaces channel signals automatically so the scorecard data turns into action rather than another monthly PDF that nobody opens.