◆   Field Dispatch Series — Revenue Operations   ◆   Downloaded, Not Hired   ◆

RevOps Metrics: The 12 Numbers That Actually Matter

Most revenue operations dashboards contain between 40 and 80 metrics. Most RevOps leaders can honestly say that fewer than a dozen of those metrics actually drive decisions. The rest are there because someone once asked for them, or because the CRM makes them easy to pull, or because they fill space in the slide deck and look like rigour without requiring any.

The problem with metric proliferation is not that it wastes screen space. It is that it dilutes attention. When everything is measured, nothing is prioritised. The signal gets lost in the noise. The 12 metrics below are the ones that — when tracked accurately, understood deeply, and acted upon systematically — give you a complete picture of revenue health across pipeline, forecast, acquisition efficiency, and retention. Everything else is a diagnostic sub-metric that feeds one of these twelve. Get these right first.

Pipeline Metrics

1. Sales Velocity

What it measures: the speed at which revenue moves through your pipeline. Calculated as: (Number of Opportunities × Average Deal Value × Win Rate) ÷ Average Sales Cycle Length.

What good looks like: the absolute number matters less than the trend and the component breakdown. Healthy velocity means the formula's inputs are all moving in the right direction: more qualified opportunities, a stable or growing average selling price (ASP), an improving win rate, and a shortening cycle length.

What to do when it is bad: decompose the formula. If velocity is declining, at least one of the four inputs is driving the decline. Find the culprit — deal volume (a pipeline generation problem), ASP (a discounting or mix problem), win rate (a qualification or competitive problem), or cycle length (a champion engagement or procurement problem) — and address that input specifically. See the full analysis in Sales Velocity: The Formula That Predicts Revenue.
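The decomposition above can be sketched in a few lines of Python. All figures are invented for illustration:

```python
# Hypothetical illustration of the sales velocity formula and a
# period-over-period check on which input moved. All figures are invented.

def sales_velocity(opportunities, avg_deal_value, win_rate, cycle_days):
    """Revenue per day moving through the pipeline."""
    return (opportunities * avg_deal_value * win_rate) / cycle_days

def decompose_change(prev, curr):
    """Percentage change of each input between two periods."""
    labels = ["opportunities", "avg_deal_value", "win_rate", "cycle_days"]
    return {k: round((c - p) / p * 100, 1) for k, p, c in zip(labels, prev, curr)}

q1 = (120, 40_000, 0.25, 60)   # opportunities, ASP, win rate, cycle length
q2 = (120, 40_000, 0.25, 75)   # only the cycle lengthened

print(round(sales_velocity(*q1)))   # 20000 (revenue per day)
print(round(sales_velocity(*q2)))   # 16000
print(decompose_change(q1, q2))     # cycle_days is the input that moved
```

Run against real CRM extracts, the same decomposition points you at the one input worth investigating instead of the aggregate number.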

2. Pipeline Coverage Ratio

What it measures: the ratio of open pipeline value to revenue target for a given period. A 3x coverage ratio means you have three times the quota target in active pipeline.

What good looks like: depends entirely on your win rate and pipeline quality. A team with a 30% win rate needs at least 3.3x coverage to hit 100% of quota. A team with a 20% win rate needs 5x. Coverage ratios cannot be assessed without knowing the win rate they correspond to — and even then, they tell you nothing about pipeline quality or stage distribution.

What to do when it is bad: insufficient coverage requires a pipeline generation response, not a pipeline management response. You cannot close your way out of a coverage shortfall. When coverage drops below the threshold implied by your win rate, the forecast is already at risk. Surface the gap early — not at the end of the quarter — and treat it as an alarm. The full coverage analysis is in Pipeline Coverage Ratio: What It Hides From You.
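The arithmetic linking win rate to required coverage is simple enough to sketch. Numbers are invented, and the buffer parameter is an assumption for slippage, not a standard:

```python
# Illustrative sketch (invented numbers): the coverage ratio a given win
# rate implies is 1 / win_rate, since pipeline x win rate must cover the
# target. A buffer factor is sometimes added for slippage.

def required_coverage(win_rate, buffer=1.0):
    return buffer / win_rate

def coverage_gap(open_pipeline, target, win_rate):
    """Positive result = pipeline value still to be generated."""
    needed = target * required_coverage(win_rate)
    return max(0.0, needed - open_pipeline)

print(round(required_coverage(0.30), 1))          # 3.3x for a 30% win rate
print(required_coverage(0.20))                    # 5.0x for a 20% win rate
print(coverage_gap(4_000_000, 1_000_000, 0.20))   # 1000000.0 short of 5x
```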

3. MQL-to-SQL Conversion Rate

What it measures: the percentage of marketing-qualified leads that are accepted by sales as sales-qualified leads. The single most important measure of sales-marketing alignment quality.

What good looks like: in most B2B organisations, an MQL-to-SQL conversion rate of 25–40% indicates reasonable alignment between marketing lead quality and sales qualification criteria. Below 15%, marketing and sales are working from different definitions of "qualified". Above 60%, the MQL bar is probably too low or the SQL criteria too lenient — you are moving volume through the funnel without filtering for genuine intent.

What to do when it is bad: run a joint audit of MQL and SQL definitions. The root cause is almost always a definitional misalignment, not a quality problem. See MQL to SQL: Why Your Conversion Rate Is Lying to You for the diagnostic framework.

4. Win Rate

What it measures: the percentage of qualified opportunities that close as wins. The most fundamental measure of sales effectiveness against opportunities that are genuinely in play.

What good looks like: industry benchmarks vary widely by segment and ACV, but in most B2B SaaS contexts, a win rate of 20–30% against all opportunities is typical; 35–50% against specifically qualified opportunities indicates a strong process. The more important question than the absolute rate: is it stable or trending, and does it vary significantly by rep, territory, or competitive scenario?

What to do when it is bad: segment win rate before drawing conclusions. Win rate by stage of entry (inbound vs. outbound, SDR-sourced vs. AE-sourced) reveals acquisition quality differences. Win rate by competitor reveals specific competitive weaknesses. Win rate by rep cohort reveals coaching opportunities. A single aggregate win rate is almost useless as a management instrument.

Forecast Metrics

5. Forecast Accuracy

What it measures: the variance between the revenue committed in the forecast and the revenue actually closed, measured at a consistent point in the quarter (typically at the four-week-to-close mark).

What good looks like: forecast accuracy within ±10% of the committed number at the four-week mark is achievable with disciplined pipeline management. Accuracy within ±5% at close is excellent. The specific threshold matters less than the consistency: a team that is consistently 15% low is more manageable than one that swings between 30% low and 10% high, because the consistent bias can be modelled and corrected.
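A consistent bias can be estimated directly from forecast history, as in this sketch (all figures invented):

```python
# Hedged sketch with invented history: if a team's four-week commit is
# consistently low by a stable factor, that bias can be estimated and
# applied as a correction to the current commit.

def bias_factor(committed, closed):
    """Mean actual/committed ratio across past quarters."""
    ratios = [c / f for f, c in zip(committed, closed)]
    return sum(ratios) / len(ratios)

history_committed = [2.0, 2.4, 2.2, 2.6]     # $M committed at the four-week mark
history_closed    = [2.3, 2.76, 2.53, 2.99]  # consistently ~15% above commit

factor = bias_factor(history_committed, history_closed)
print(round(factor, 2))         # 1.15
print(round(2.8 * factor, 2))   # bias-corrected view of a 2.8 commit
```

A team that swings wildly produces ratios with high variance, and no single factor corrects them — which is the point made above about consistency.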

What to do when it is bad: forecast inaccuracy is almost always a data quality or discipline problem, not a prediction problem. If reps are managing their stage data to avoid scrutiny rather than to reflect deal reality, the forecast will be structurally wrong regardless of the model you apply to it. Fix the data first. See CRM Data Quality: Why Your Forecast Is Always Wrong and Why the Forecast Was Never Real.

THE FRAMEWORK

The full interrogation framework is Dispatch #001 — Pipeline & Forecast Framework. 38 questions across four sections that expose whether your pipeline numbers reflect deal reality or stage management theatre. $97. Instant download.

See the full framework →

Team Performance Metrics

6. Quota Attainment Distribution

What it measures: not the average attainment rate, but the shape of the distribution across all reps. What percentage of fully ramped reps hit above 80%? Above 100%? Is the distribution bell-shaped (healthy) or bimodal (structural problem)?

What good looks like: in a well-calibrated quota system, 60–70% of fully ramped reps should achieve between 80% and 120% of quota. A small cluster (10–15%) should meaningfully overperform. A small cluster (15–20%) should underperform and be managed accordingly. If the overperforming cluster is large and the underperforming cluster is equally large with very few in the middle, you have a territory or quota design problem, not a performance problem.

What to do when it is bad: segment the distribution by territory, tenure, manager, and product line before drawing any conclusions. The segmented view tells you whether the problem is structural (fix the system) or individual (coach the rep). See the full breakdown in Quota Attainment Rate: What the Distribution Reveals.
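The banding described above can be sketched as follows. Attainment figures are invented, and the 50% core-share cutoff is an arbitrary illustration, not a benchmark:

```python
# Illustrative only (invented attainment data): bucket fully ramped reps
# into the bands discussed above and flag a hollowed-out middle.

def attainment_shape(attainments):
    bands = {"under_80": 0, "core_80_120": 0, "over_120": 0}
    for a in attainments:
        if a < 0.80:
            bands["under_80"] += 1
        elif a <= 1.20:
            bands["core_80_120"] += 1
        else:
            bands["over_120"] += 1
    core_share = bands["core_80_120"] / len(attainments)
    bands["looks_bimodal"] = core_share < 0.5   # crude structural flag
    return bands

# Two big clusters at the extremes, almost nobody in the middle
reps = [0.45, 0.55, 0.60, 1.45, 1.50, 1.60, 0.95, 1.05, 0.70, 1.30]
print(attainment_shape(reps))
```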

7. Sales Cycle Length

What it measures: the average number of days from opportunity creation to close, measured across won deals. The most direct measure of commercial efficiency.

What good looks like: benchmarks vary significantly by ACV and segment. SMB SaaS: 14–30 days. Mid-market: 45–90 days. Enterprise: 90–180+ days. The absolute number matters less than the trend and the variance. A lengthening cycle, or very high variance between similar-sized deals, indicates a process problem or a qualification problem.

What to do when it is bad: analyse cycle length by deal characteristics, not in aggregate. Deals that stall in early stages indicate qualification failure. Deals that stall in late stages indicate procurement friction, champion weakness, or competitive displacement. The fix for each is different. The analysis framework is in How to Shorten Sales Cycle Length Without Discounting.
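A minimal sketch of the variance check, using invented deal data:

```python
# Illustrative sketch (invented deal data): compare mean cycle length and
# spread across similar-sized deals; high variance flags a process issue.
from statistics import mean, stdev

def cycle_stats(cycle_days):
    return {"mean_days": round(mean(cycle_days), 1),
            "stdev_days": round(stdev(cycle_days), 1)}

# Mid-market deals of similar size; two stalled outliers inflate the spread
mid_market_deals = [52, 61, 58, 180, 49, 165, 55, 60]
print(cycle_stats(mid_market_deals))   # {'mean_days': 85.0, 'stdev_days': 54.3}
```

A standard deviation approaching the mean, as here, says the "average cycle" is hiding two distinct populations — exactly the situation where stage-by-stage analysis pays off.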

8. SDR/BDR Productivity

What it measures: the number of sales-accepted opportunities sourced per SDR per month, measured consistently over rolling three-month periods. The leading indicator for pipeline generation capacity.

What good looks like: depends on market segment and the outbound vs. inbound mix. A fully ramped outbound SDR in enterprise should generate three to five sales-accepted opportunities per month. A mixed inbound/outbound SDR in mid-market should generate six to ten. Productivity below these ranges indicates targeting problems, messaging problems, or capacity constraints in the SDR motion.

What to do when it is bad: separate activity metrics from outcome metrics before diagnosing. An SDR with high activity (calls, emails, LinkedIn touches) and low outcomes (meetings booked, SAOs) has a targeting or messaging problem. An SDR with low activity and low outcomes has a management or motivation problem. The diagnostic is in SDR Productivity: What to Measure and What to Stop.
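The activity-versus-outcome diagnostic reduces to a simple decision rule. The thresholds below are placeholders, not benchmarks:

```python
# Hedged sketch (invented thresholds): separate activity from outcomes
# before diagnosing an SDR, per the distinction drawn above.

def sdr_diagnosis(touches_per_day, saos_per_month,
                  activity_floor=40, outcome_floor=3):
    high_activity = touches_per_day >= activity_floor
    high_outcome = saos_per_month >= outcome_floor
    if high_activity and not high_outcome:
        return "targeting or messaging problem"
    if not high_activity and not high_outcome:
        return "management or motivation problem"
    return "healthy"

print(sdr_diagnosis(65, 1))   # high activity, low outcomes
print(sdr_diagnosis(20, 1))   # low activity, low outcomes
print(sdr_diagnosis(50, 5))   # both healthy
```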

Efficiency Metrics

9. CAC Payback Period

What it measures: the number of months required to recover the fully loaded customer acquisition cost from a new customer's gross margin contribution. Calculated as: (Total Sales and Marketing Spend ÷ New Customers Acquired) ÷ (Average MRR per New Customer × Gross Margin %).
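The formula translates directly into code. All inputs below are invented:

```python
# Sketch of the CAC payback formula above (all inputs invented).

def cac_payback_months(sm_spend, new_customers, avg_mrr, gross_margin):
    cac = sm_spend / new_customers             # fully loaded cost per customer
    monthly_recovery = avg_mrr * gross_margin  # margin contribution per month
    return cac / monthly_recovery

# $1.2M quarterly S&M spend, 50 new customers, $2,000 MRR, 80% gross margin
print(round(cac_payback_months(1_200_000, 50, 2_000, 0.80), 1))   # 15.0 months
```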

What good looks like: for SMB SaaS, under 12 months. For mid-market SaaS, 12–18 months. For enterprise SaaS, 18–24 months can be acceptable if NRR is strong enough to justify the front-loaded investment. CAC payback above 24 months in any segment is a unit economics warning that requires either cost reduction or ACV improvement.

What to do when it is bad: decompose into CAC (the acquisition cost) and the margin-adjusted revenue per customer (the recovery rate). High payback driven by high CAC requires sales and marketing efficiency improvement. High payback driven by low revenue per customer requires pricing, packaging, or ICP work. The two have different remedies and different timelines. Acting on the wrong one wastes the time you needed to act on the right one.

10. Ramp Time to Productivity

What it measures: the number of months from a new rep's start date to the point at which they achieve a defined productivity threshold — typically 75–80% of full quota attainment for one full month.

What good looks like: SMB AEs: 2–3 months. Mid-market AEs: 3–5 months. Enterprise AEs: 5–9 months. SDRs: 1–2 months regardless of segment. Ramp time above these benchmarks indicates onboarding quality problems, territory assignment problems, or unrealistic initial quota expectations for new hires.

What to do when it is bad: track ramp time by manager and cohort vintage. If ramp time is deteriorating over successive cohorts, your onboarding programme has not kept pace with product complexity or market change. If ramp time varies dramatically by manager, you have a coaching quality problem concentrated in specific parts of the org. Both are solvable, but only if you can see the pattern. The Sales Capacity Planning framework shows how ramp time assumptions directly affect revenue capacity projections.

Retention and Growth Metrics

11. Net Revenue Retention

What it measures: the percentage of ARR retained from existing customers, including expansion, contraction, and churn — excluding new customer revenue. NRR above 100% means your existing base is growing without new acquisition.

What good looks like: SMB SaaS: 85–95%. Mid-market SaaS: 100–110%. Enterprise SaaS: 110–130%. Sustained NRR below the floor of your segment's range indicates a retention problem severe enough that new business acquisition is subsidising losses rather than building a compounding revenue base.

What to do when it is bad: decompose into gross retention, expansion rate, and contraction rate. Each has a different root cause and a different remediation pathway. The full NRR framework is in Net Revenue Retention: The SaaS Metric That Tells the Truth. The early warning system for improving it is in Customer Health Scores: Build One That Predicts Churn.
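The decomposition can be sketched for a single cohort. The ARR movements below are invented:

```python
# Hedged sketch (invented ARR movements): NRR decomposed into gross
# retention, expansion, and contraction for one starting cohort.

def nrr_components(start_arr, expansion, contraction, churned):
    gross_retention = (start_arr - contraction - churned) / start_arr
    nrr = (start_arr + expansion - contraction - churned) / start_arr
    return {
        "gross_retention": round(gross_retention, 3),
        "expansion_rate": round(expansion / start_arr, 3),
        "contraction_rate": round(contraction / start_arr, 3),
        "nrr": round(nrr, 3),
    }

# $10M starting ARR: $1.5M expansion, $0.3M contraction, $0.7M churned
print(nrr_components(10_000_000, 1_500_000, 300_000, 700_000))
```

Here NRR lands at 105% while gross retention sits at 90% — a healthy-looking headline number resting on expansion that masks real churn, which is why the components matter more than the aggregate.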

12. Pipeline Generation by Source

What it measures: the volume and quality of pipeline created by each sourcing channel — inbound marketing, outbound SDR, AE self-sourced, partner/channel, and customer expansion — broken down by both volume and downstream conversion rate to close.

What good looks like: no universal benchmark, but the distribution matters. Over-reliance on any single source (typically inbound) creates fragility. A diversified pipeline with at least three meaningful sources, each with a tracked and improving conversion rate to close, is the structural goal.

What to do when it is bad: source-by-source conversion analysis will reveal which channels are generating volume without quality (high MQL volume, low win rate) versus quality without volume (few opportunities, high win rate, underinvested). Reallocate investment toward the channels with the best downstream conversion economics, not the best top-of-funnel volume. Volume that does not close is a cost centre with a pipeline label on it.
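Ranking channels by downstream economics rather than top-of-funnel volume can be sketched as follows (channel figures invented):

```python
# Illustrative sketch (invented channel data): rank sources by closed-won
# value per opportunity, not by opportunity volume.

sources = {
    "inbound":  {"opps": 300, "closed_won_value": 1_800_000},
    "outbound": {"opps": 120, "closed_won_value": 1_440_000},
    "partner":  {"opps": 40,  "closed_won_value": 800_000},
}

def value_per_opp(source):
    return source["closed_won_value"] / source["opps"]

ranked = sorted(sources, key=lambda k: value_per_opp(sources[k]), reverse=True)
print(ranked)   # ['partner', 'outbound', 'inbound'] - the volume leader ranks last
```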

The purpose of a RevOps metric is to force a decision. If a metric does not reliably change what you do next, it is decoration.

Twelve metrics is not a small number. It is, however, a manageable one. The discipline of RevOps is not in measuring more — it is in measuring what matters and then actually acting on the signal. Each of these twelve metrics has a defined "good" threshold, a diagnostic decomposition when it falls below that threshold, and a set of interventions that address the root cause rather than the symptom. Build your RevOps operating rhythm around these twelve. Review them on a consistent cadence. Decompose them when they move in the wrong direction. The revenue operations teams that separate themselves from the dashboard-maintenance function are the ones that treat every metric as a question to be answered, not a number to be reported. The frameworks for answering those questions are available in the RevOps vs Sales Ops breakdown and across the dispatch archive. Start with the metrics. Follow them to the truth.
