◆   FIELD DISPATCH SERIES — REVENUE OPERATIONS   ◆   DOWNLOADED, NOT HIRED   ◆

Why Your Revenue Plan Falls Apart in February

The plan was built in November. It was approved in December. By February, it's already wrong — not because the market changed, but because the assumptions it was built on were never tested. The headcount was aspirational. The pipeline coverage was historical. The ramp times were what the model needed, not what the evidence showed.

Every revenue organisation goes through some version of this. The plan is ratified by the board, communicated to the team, and then quietly renegotiated as reality arrives. The question nobody asks in December is whether the plan was ever going to work.

This isn't a forecasting problem. It isn't a headcount problem. It isn't even a market problem, most of the time. It's an assumptions problem — and assumptions are the one part of revenue planning that almost nobody audits before the plan gets signed off.

How Plans Actually Get Built

The process is familiar. The CEO or CFO sets a top-line target — usually derived from investor expectations, market comparables, or the number that makes the next funding round credible. The VP Sales is then asked whether it's achievable. That's a question nobody answers honestly, because the honest answer has consequences: you either agree to a number that's too high, or you signal that you lack ambition. Neither outcome is comfortable.

A bottom-up build happens, but it's typically reverse-engineered to match the top-down number. The model shows the right answer because it was built to show the right answer. Assumptions are adjusted until the output lands where it needs to. Pipeline coverage is inflated. Ramp times are shortened. Win rates are set to the best quarter in recent memory rather than the median.

Nobody audits the inputs. The question everyone asks is "does the model produce the right number?" The question nobody asks is "are the inputs to the model actually true?"

The result is a plan that is arithmetically coherent and operationally fictional. It adds up on the page. It doesn't hold up in the field.

Q1 arrives and none of the assumptions land as modelled. The pipeline that was supposed to carry the first quarter didn't close at the rates the model predicted. The new hires who were supposed to be productive by March are still finding their feet. The Q4 deals that slipped are not behaving like healthy Q1 pipeline. And by mid-February, the VP Sales is renegotiating the quarter while everyone pretends the annual plan is still intact.

The Three Assumptions That Kill Plans

1. Linear Pipeline Build

Plans assume pipeline builds steadily through the year — a smooth ramp from January to December, with each quarter contributing roughly its proportional share. It never works like that, and everyone who has spent more than two years in a revenue role knows it.

Q1 is always the hardest quarter. The Q4 pipeline that didn't close is already impaired. New business is slow to generate because buyers are still establishing their own budgets and priorities. Enterprise sales cycles that start in Q1 won't close until Q3 at the earliest. The plan that shows a smooth revenue ramp from January to December is describing a business that doesn't exist.

The practical consequence is that Q1 almost always underdelivers against plan — not because the team performed badly, but because the plan was built on assumptions about Q1 pipeline generation and conversion that were never tested against how Q1 actually behaves. If you ran this year's Q1 plan against your actual Q1 data from the past three years, it would look obviously wrong. Most organisations never run that comparison before the plan is filed.

2. New Hire Ramp Matching the Model

Most plans use a three-month ramp for new sales hires. Most reps take five to seven months to reach full productivity. The difference doesn't show up in the model because nobody wants to show it. It shows up in September when the plan is 40% behind and everyone acts surprised.

The ramp assumption is one of the most consequential inputs in any revenue plan, and it is almost never validated against real data. To validate it properly, you need to look at your actual cohorts: what did the reps hired in Q1 last year produce in their first six months? What about Q3 hires? How does ramp vary by segment, product, and territory? Most organisations don't track this at that level of granularity, which means the model gets a made-up number and the plan inherits the fiction.

A three-month ramp assumption applied to ten new hires represents months of productive selling capacity that simply won't materialise. When that capacity was supposed to carry a significant portion of H2 revenue, the plan is structurally broken from day one.
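To put rough numbers on that gap, here is a minimal sketch. The hire count, quota, and linear ramp curves are illustrative assumptions chosen for the sketch, not figures from any real plan:

```python
# Illustrative: selling capacity promised by a 3-month ramp model
# versus a 6-month reality. All numbers are assumptions, not benchmarks.

MONTHLY_QUOTA = 50_000  # fully ramped rep, in currency units
NEW_HIRES = 10
MONTHS = 12  # all hires assumed to start in month 1, for simplicity

def productivity(month, ramp_months):
    """Fraction of full quota a rep delivers in a given month,
    assuming a linear ramp from 0 to 100% over ramp_months."""
    return min(month / ramp_months, 1.0)

def annual_capacity(ramp_months):
    """Total first-year capacity of the new-hire cohort."""
    return sum(
        NEW_HIRES * MONTHLY_QUOTA * productivity(m, ramp_months)
        for m in range(1, MONTHS + 1)
    )

planned = annual_capacity(3)   # what the model assumes
actual = annual_capacity(6)    # what cohort data might actually show

print(f"Planned capacity: {planned:,.0f}")
print(f"Actual capacity:  {actual:,.0f}")
print(f"Shortfall:        {planned - actual:,.0f}")
```

Even with these toy numbers, the gap is a double-digit percentage of the cohort's first-year output. The point of the sketch is the comparison, not the figures: swap in your own cohort data and the shortfall stops being hypothetical.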

3. Q4 Deals Closing in Q1

The Q4 pipeline that didn't close is not Q1 pipeline. This is one of the most persistent and damaging assumptions in revenue planning, and it is almost always wrong.

Deals that slipped from Q4 are already impaired. The buyer deprioritised them once — they'll deprioritise them again. The urgency that was supposed to drive Q4 close has passed. Budget cycles have reset. Champions have moved on, or lost internal momentum. Treating Q4 slippage as reliable Q1 inventory is how plans get built on pipeline that doesn't exist.

The data on this is consistent across organisations: Q4-to-Q1 conversion on slipped deals is materially lower than the headline close rate the model uses. In most cases it's 30 to 50% lower. Applying the standard close rate to slipped pipeline overstates Q1 coverage by a significant margin, and the plan is wrong before January starts.
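As a back-of-envelope illustration of that overstatement, here is a sketch with assumed figures (the pipeline values, close rate, haircut, and target below are invented for the example, not data from the article):

```python
# Illustrative: Q1 coverage with and without a haircut on slipped Q4 deals.
# All figures are assumptions chosen for the sketch.

new_q1_pipeline = 4_000_000      # pipeline generated for Q1
slipped_q4_pipeline = 2_000_000  # deals that slipped out of Q4
headline_close_rate = 0.25       # blended historical close rate
slip_haircut = 0.40              # slipped deals assumed to convert ~40% worse
q1_target = 1_400_000

# The model's view: all pipeline closes at the headline rate.
naive = (new_q1_pipeline + slipped_q4_pipeline) * headline_close_rate

# The adjusted view: slipped pipeline gets the haircut.
adjusted = (
    new_q1_pipeline * headline_close_rate
    + slipped_q4_pipeline * headline_close_rate * (1 - slip_haircut)
)

print(f"Modelled Q1 close: {naive:,.0f} ({naive / q1_target:.0%} of target)")
print(f"Adjusted Q1 close: {adjusted:,.0f} ({adjusted / q1_target:.0%} of target)")
```

With these assumed numbers, the model shows Q1 comfortably covered while the adjusted view shows a miss. That flip, from covered to short, driven entirely by how slipped pipeline is treated, is the failure mode the section describes.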

The Difference Between a Plan and a Forecast

A plan is what you intend to happen. A forecast is what you think will happen. Most organisations conflate them — the plan becomes the forecast, which means the forecast is never honest because being honest about the forecast means admitting the plan is wrong.

This is a structural problem. When the plan and the forecast are the same document, the forecast carries a political weight it was never designed to carry. Forecasters know that showing a number below plan invites uncomfortable questions, so the forecast trends toward plan regardless of what the evidence actually suggests. The result is a forecast that lies, not because anyone is being dishonest, but because the incentives make honesty expensive.

Separating them is the first step to having a forecast that means something. The plan is the target. The forecast is the honest read of where you're actually going. They will usually diverge, and that divergence is the information you need to act on. A plan without an honest forecast is just a target with no accountability mechanism.

Most organisations know this in theory. Almost none of them do it in practice, because separating the forecast from the plan requires the board and the leadership team to be comfortable with the gap — and that comfort is hard to manufacture when the plan was approved with the expectation that it would be delivered.

What Stress Testing Actually Means

Stress testing a revenue plan does not mean running it against an average year. It means running it against your worst Q1 of the past three years. If the plan only works when everything goes well, it isn't a plan — it's an optimistic scenario dressed up in a spreadsheet.

Real stress testing asks the uncomfortable questions before the plan is filed rather than after it's broken. What happens if new hires take six months to ramp instead of three? What happens if close rates drop ten percentage points — not an unusual outcome when you're entering a new segment or launching a new product? What if Q4 pipeline converts at 60% of its historical rate? What if marketing delivers 20% less pipeline than the model requires?
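Those downside scenarios are cheap to run mechanically. Here is a minimal sketch of re-running a toy revenue model under each stress; the baseline figures, the model itself, and the scenario parameters are all illustrative assumptions:

```python
# Illustrative stress test: re-run a toy revenue model under downside
# scenarios. Baseline numbers and stress parameters are assumptions.

def revenue(pipeline, close_rate, ramped_capacity):
    """Toy model: closed pipeline plus revenue from ramped new-hire capacity."""
    return pipeline * close_rate + ramped_capacity

baseline = dict(pipeline=10_000_000, close_rate=0.25, ramped_capacity=2_000_000)

scenarios = {
    "plan":               baseline,
    "slow ramp (6 mo)":   {**baseline, "ramped_capacity": 1_200_000},
    "close rate -10 pts": {**baseline, "close_rate": 0.15},
    "marketing -20%":     {**baseline, "pipeline": 8_000_000},
}

plan = revenue(**scenarios["plan"])
for name, params in scenarios.items():
    r = revenue(**params)
    print(f"{name:<20} {r:>12,.0f}  ({r / plan:.0%} of plan)")
```

A real model has far more moving parts, but the discipline is the same: each stress is one changed input, and the output is the size of the hole it leaves. If any single scenario takes the year below what the board can tolerate, that is a conversation to have in December, not February.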

A plan that survives those questions is a plan. It has been tested against realistic downside scenarios and the leadership team has made a conscious decision to accept the risk. One that hasn't been stress tested is a wish — and the discovery that it was always a wish just happens to arrive in February, when it's too late to do much about it.

The objection to stress testing is usually that it produces a more conservative plan, which is harder to defend to the board. That's true. But a plan that gets approved and then fails at the first review is harder to defend than a plan that was honest about its constraints from the outset. The board can handle a conservative plan. What it struggles with is repeated forecast misses that were entirely predictable.

The Resource and Dependency Map Nobody Builds

Every revenue plan has dependencies — things that have to happen for the plan to work. Headcount hired by a specific date. Product shipped by a specific date. Marketing pipeline delivered in sufficient volume by a specific date. Enablement programmes completed before reps are put in front of customers.

Most plans don't map these explicitly. They exist as implicit assumptions buried in the model, without owners, without milestones, and without any mechanism for tracking whether they're on schedule. Which means that when they slip — and they always slip — nobody notices until the gap has compounded into a structural problem.

The dependency map is not a complicated document. It's a list of the things the plan requires, the date by which they need to be in place, the person responsible, and the consequence if they're late. A plan without this map has no accountability infrastructure. It was designed to be approved, not managed.
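One way to make that list concrete is to keep it as structured data so it can be reviewed and sorted mechanically at each checkpoint. The fields and entries below are hypothetical examples, not a prescribed format:

```python
# Illustrative dependency map: one record per thing the plan requires.
# Entries, dates, and field names are hypothetical examples.
from dataclasses import dataclass
from datetime import date

@dataclass
class Dependency:
    item: str          # what the plan requires
    due: date          # date it must be in place
    owner: str         # person responsible
    consequence: str   # impact on the plan if it slips

deps = [
    Dependency("6 AE hires onboarded", date(2025, 2, 15), "VP Sales",
               "H2 new-hire capacity slips a quarter"),
    Dependency("Enterprise tier shipped", date(2025, 3, 31), "VP Product",
               "Q2 enterprise pipeline cannot close"),
    Dependency("Q1 pipeline at 4x coverage", date(2025, 1, 31), "VP Marketing",
               "Q2 bookings target at risk"),
]

# Review step: surface whatever is overdue as of a checkpoint date.
checkpoint = date(2025, 2, 1)
overdue = [d for d in deps if d.due < checkpoint]
for d in overdue:
    print(f"OVERDUE: {d.item} (owner: {d.owner}) -> {d.consequence}")
```

The value is not the tooling, which could as easily be a spreadsheet; it is that every dependency has a named owner and a stated consequence, so a slip is visible the week it happens rather than the quarter after.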

The absence of a dependency map is also how organisations end up in the situation where the plan fails and nobody is responsible. The headcount wasn't hired on time, but that was an HR problem. The product didn't ship, but that was an engineering problem. Marketing didn't deliver the pipeline, but that was a demand generation problem. The plan failed because it had no owners for the things it required — and that is a planning problem.

THE FRAMEWORK

The full interrogation framework is Dispatch #005 — The Plan. 38 questions across four sections: Plan Autopsy, Assumption Stress Test, Resource and Dependency Mapping, and February Discipline. $97. Instant download.

See the full framework →

The February Review

By the time the February review happens, most plans are already behind. The question is what to do about it. There are two options: replan or manage to the existing plan. Most organisations choose neither — they acknowledge the gap, reforecast the quarter, and carry the plan forward unchanged. This is the worst option. It means the plan is wrong and everyone knows it, but decisions are still being made against it.

Headcount decisions, territory assignments, marketing spend, and sales enablement investment are all being made against a plan that nobody believes. The decisions are distorted because the reference point is distorted. And the forecast becomes a work of fiction, because the forecast is still being built against a plan target that stopped being achievable in week three of January.

The February review is the moment when the organisation decides whether it's serious about managing the year or just tracking it. Tracking is noting that you're behind. Managing is doing something about it — and that requires a plan that's honest about what happened and a decision about how to respond to it.

When to Replan Versus When to Hold

Replanning has a cost. It signals to the board that the original plan was wrong, which is politically uncomfortable. Boards don't like surprises in Q1, and a replan in February looks like a failure of planning discipline, even when it's actually a sign of management maturity.

But carrying a plan that nobody believes is also a cost — and it's a cost that compounds. A broken plan distorts resource allocation: you're over-investing in areas that can't deliver and under-investing in areas that could. It creates forecast theatre: every QBR becomes an exercise in explaining why the gap exists rather than a conversation about what to do about it. And it demoralises the team, who know the number is wrong and who are being held accountable to it anyway.

The question isn't whether to replan. It's whether the original assumptions were wrong in a recoverable way or a structural way.

Timing problems can be managed. If the Q1 miss is primarily driven by deals that slipped but remain qualified, a strong Q2 can recover the position. If the ramp issue is a two-month delay rather than a fundamental problem with rep productivity, you can model the impact and adjust expectations for H1 without abandoning the annual plan.

Structural problems require a replan. If close rates have dropped because the product has a competitive gap, more pipeline will not solve it. If the new market you entered in Q1 is taking longer to develop than modelled, Q2 and Q3 numbers built on that market don't work. If key hires fell through and the headcount the plan required won't be in place until Q3, the revenue those hires were supposed to generate in H1 doesn't exist. These are structural problems, and the honest response is a replan — not a more optimistic Q2 forecast.

February Discipline

The practical answer to all of this is a 30-day checkpoint built into the planning process from the start. Not a quarterly review — a monthly one, with a specific agenda: are the plan's assumptions holding? Which ones have already broken? What is the consequence for the full-year number?

By day 30 of the year, you know whether Q1's pipeline assumptions are holding. You can see the conversion rate on Q4 slippage. You can see whether new hires are tracking to the ramp assumption. You can see whether marketing is generating pipeline at the modelled rate. None of this requires waiting until the quarterly review — the data is there in week four if you know what you're looking for.
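The checkpoint itself can be a handful of comparisons of actuals against the modelled assumptions. A minimal sketch, in which the metrics, figures, and the 10% tolerance are all illustrative assumptions:

```python
# Illustrative 30-day checkpoint: compare actuals against plan assumptions
# and flag whatever has broken. All figures and the tolerance are assumptions.

TOLERANCE = 0.10  # flag any assumption running more than 10% behind model

assumptions = {
    # metric: (modelled value, actual value at day 30)
    "Q1 pipeline generated":      (3_000_000, 2_400_000),
    "Q4 slippage close rate":     (0.25, 0.14),
    "New-hire ramp attainment":   (0.33, 0.30),
    "Marketing-sourced pipeline": (1_500_000, 1_450_000),
}

def broken(modelled, actual, tolerance=TOLERANCE):
    """An assumption is broken if actuals run more than `tolerance` behind."""
    return actual < modelled * (1 - tolerance)

for metric, (modelled, actual) in assumptions.items():
    status = "BROKEN" if broken(modelled, actual) else "holding"
    print(f"{metric:<27} {status}")
```

The output is a short list of broken assumptions, which is exactly the agenda the monthly review needs: not "are we behind?" but "which specific inputs to the plan have already failed, and what does that do to the full-year number?"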

By day 45, you know whether Q1 is salvageable or whether you're managing a structural gap. That's early enough to do something about it. It's early enough to reallocate resources, to have an honest conversation with the board about the Q1 miss, to make a decision about whether to replan or manage. Waiting for the formal quarterly review puts you six weeks behind the problem.

A plan that has no 30-day checkpoint built in isn't designed to be managed. It was designed to be approved. And a plan designed to be approved rather than managed will, with near certainty, fail to survive February.

The assumptions are the plan. Not the revenue number — the assumptions underneath it. The revenue number is just an output. If the assumptions were never stress tested, if the dependencies were never mapped, if the Q4 slippage was treated as real Q1 inventory, then the plan was always going to fail. The only question was how long it would take for reality to make that visible.

It usually takes about six weeks.

DISPATCH #005

The Plan

38 questions that expose the assumptions your revenue plan was never asked before the board approved it. Plan Autopsy, Assumption Stress Test, Resource and Dependency Mapping, February Discipline Checklist. $97. Instant download.

Download the Framework — $97
Read Section 01 free →