The forecast is the most important conversation in a revenue organisation and the least honest one. Everyone in the room knows the number is wrong. The VP knows it. The AEs know it. The CEO suspects it. Nobody says it out loud because saying it out loud creates a problem nobody wants to own.
So the forecast goes to the board at £4.2M. The quarter closes at £3.1M. The post-mortem attributes the gap to "macro headwinds" and "deals slipping." Nobody examines the process that produced the wrong number. Next quarter, the same process produces another wrong number.
This is the forecast loop. It repeats because the post-mortem asks the wrong question. It focuses on which deals fell out rather than why the process produced a number that was never real. Fix the post-mortem, and you start to fix the forecast.
But first, you need to understand what's actually breaking it.
Why Forecasts Are Wrong
Most people blame CRM data. They're not wrong — but data quality is a downstream problem. It's a symptom of three distinct upstream failures, each of which requires a different fix.
1. The Social Dynamics of Forecast Reviews
Nobody wants to be the person who says the number is bad. AEs who submit conservative forecasts get interrogated — "why so low?", "what are you doing about it?", "where's the pipeline?" AEs who submit high forecasts get left alone. The conversation ends. The meeting moves on.
The incentive structure is plain: optimism is rewarded with silence; realism is rewarded with scrutiny. Learning this once is enough. Every AE in the organisation adjusts their behaviour accordingly.
There's a second behavioural pattern that makes this worse: sandbagging. Some reps forecast so conservatively that the actual close always looks like a pleasant surprise. This produces a different kind of useless forecast — one where the number is technically accurate but systematically understates capacity, making resource planning impossible.
Both patterns — optimistic forecasting and systematic sandbagging — produce forecasts that don't reflect reality. And both are rational responses to the environment the organisation has created. The problem isn't dishonesty. It's incentive design.
2. The CRM Hygiene Problem
Deals in the wrong stage. Close dates that have slipped three times without being questioned. Probability fields nobody updates because they're meaningless percentages that the CRM auto-populates based on stage rather than anything a human has assessed.
The forecast is built on this data. Which means the forecast reflects the CRM, not the pipeline. These are not the same thing.
CRM hygiene deteriorates because nobody is held responsible for it. Updating stage and close date feels like administrative overhead to a rep who is focused on selling. Managers don't enforce it because the enforcement conversation is uncomfortable and doesn't feel directly connected to revenue. So the data sits, stale, building false confidence.
The fix is not a CRM training session. It's a process that makes accurate data entry the path of least resistance, not the path of most friction.
3. Incentive Misalignment Across Levels
Even when a rep tries to forecast honestly, the number often doesn't survive the management layer intact. Managers who have learned that conservative rep forecasts are usually understated apply an uplift. They've seen enough quarters where the team beat its commit to develop a habit of adding optimism to every level of the funnel.
Reps observe this and respond: if the manager is going to uplift my number anyway, I'll submit something lower. This dynamic compounds through the organisation — each layer adds a buffer, and by the time the number reaches the board, it bears no relationship to what any individual contributor believes will close.
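The compounding described above is easy to sketch. The figures below are invented for illustration; the point is that modest per-layer adjustments stack multiplicatively:

```python
# Illustrative sketch of how per-layer forecast adjustments compound.
# All percentages are invented; no claim these are typical values.

rep_belief = 2_600_000           # what the rep actually thinks will close
rep_submits = rep_belief * 0.90  # rep sandbags, anticipating an uplift
manager = rep_submits * 1.20     # manager applies a habitual uplift
vp = manager * 1.15              # VP adds optimism before the board

print(f"Rep belief:  £{rep_belief:,.0f}")
print(f"Board sees:  £{vp:,.0f}")
```

Three adjustments of 10–20% each, and the board's number is roughly 24% above what the rep believes, without anyone having lied outright.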
The result is a forecast that is the product of negotiation, not estimation. That's a fundamental problem, and it cannot be solved by improving data quality alone.
The Difference Between a Forecast and a Target
A forecast is what you think will happen. A target is what you want to happen. They are different numbers. In most organisations, they are treated as the same number — or the forecast is implicitly expected to match the target.
When the forecast is also the target, it can never be honest. The person submitting the forecast is not going to submit a number below target unless they want a very uncomfortable conversation. So they don't. They submit something that makes the meeting manageable, not something they actually believe.
This is not a failure of individual integrity. It's a structural failure. The organisation has created a situation where honest forecasting is professionally costly and optimistic forecasting is professionally safe.
Separating forecast from target — making it genuinely safe to submit a number below target — is the structural change that makes honest forecasting possible. This means the manager's response to a below-target forecast cannot be interrogation and pressure. It has to be problem-solving: what's in the way, what needs to change, what can we do. The moment that conversation becomes punitive, the next forecast goes back up.
Safe-to-be-honest is a leadership condition. You cannot enforce it with a process. But you can create the conditions for it by being consistent about what happens when someone brings a bad number.
The Pipeline Coverage Myth
Most revenue organisations use pipeline coverage as a proxy for forecast confidence. If you have 3× coverage, the theory goes, even with normal attrition you'll hit the number. It's a simple metric that makes leadership feel informed.
The problem is that 3× coverage means nothing if the 3× is full of deals that won't close. Coverage is a quantity metric applied to a quality problem.
An organisation with 3× coverage built from highly qualified, late-stage deals with active champions and defined next steps is in a fundamentally different position to one with 3× coverage built from first calls, cold email replies, and deals that haven't had a customer interaction in six weeks. The number looks the same in the dashboard. The underlying reality is completely different.
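The gap between the two positions can be sketched with a crude quality weighting. The deal fields, the weights, and the discounts below are assumptions for illustration, not a standard model:

```python
# Sketch: raw coverage vs quality-adjusted coverage.
# Deal shape and weighting factors are invented for illustration.

TARGET = 1_000_000  # quarterly target in £

deals = [
    # (value, has_active_champion, days_since_last_interaction, late_stage)
    (400_000, True,  5,  True),
    (300_000, True,  12, True),
    (900_000, False, 45, False),  # no champion, six weeks of silence
    (700_000, False, 50, False),
    (700_000, False, 60, False),
]

def quality_weight(champion, days_quiet, late_stage):
    """Crude, assumed discounting of unengaged or early-stage pipeline."""
    w = 1.0
    if not champion:
        w *= 0.3   # no active decision-maker engaged
    if days_quiet > 30:
        w *= 0.3   # stale: no customer interaction in 30+ days
    if not late_stage:
        w *= 0.5   # early-stage deals rarely close in-quarter
    return w

raw = sum(v for v, *_ in deals)
adjusted = sum(v * quality_weight(c, d, s) for v, c, d, s in deals)

print(f"Raw coverage:      {raw / TARGET:.1f}x")
print(f"Adjusted coverage: {adjusted / TARGET:.1f}x")
```

The raw number is the 3× the dashboard reports; the adjusted number is under 1×. The exact weights are arguable — the order-of-magnitude gap is the point.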
Until you can assess the quality of pipeline — not just the volume — coverage is a comfort, not a forecast input. The questions that matter are not "how much pipeline do we have?" but "how much of this pipeline is real?", "how many of these deals have an active decision-maker engaged?", and "how many have a plausible close path in the quarter?"
Those questions require a conversation. They cannot be answered by a dashboard metric. Which is why most organisations don't ask them.
Forecast Methodologies — Why None of Them Works Alone
There are three main approaches to sales forecasting, and each has a specific failure mode.
Bottom-Up: Rep Commits
The rep commits a number based on what they believe will close. This is accurate in proportion to the rep's honesty, which is in proportion to the safety of being honest. In most organisations, that safety doesn't exist — so the bottom-up number is inflated at source, before the management layer adds its own uplift.
Bottom-up forecasting is the right starting point because it is closest to the ground truth. But it requires a culture where honest submission is rewarded, not penalised. Without that culture, the number is fiction before it leaves the rep's screen.
Top-Down: VP Adjusts
The VP applies experience-based adjustments — adding optimism when the team is typically sandbagging, pulling back when a particular rep always overstates. This adds a useful layer of pattern recognition, but it is rarely calibrated against data. It is intuition dressed up as oversight.
Top-down adjustment is most useful as a sanity check against the bottom-up number, not as a primary forecasting method. When it becomes the primary method, it bypasses the ground-level data entirely and replaces it with management gut feel. That gut feel is sometimes right. It is not a system.
Statistical: Based on Historical Patterns
Statistical forecasting looks at historical conversion rates, stage velocity, and deal size to generate a probability-weighted forecast. When the underlying data is clean and the business model is stable, this is the most accurate approach.
But statistical forecasting is least useful exactly when you most need accuracy: when the business is changing. New product, new market, new team, new pricing model — historical patterns do not transfer cleanly to new conditions. Most businesses are changing most of the time. That limits the reliability of purely statistical approaches.
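A minimal version of the stage-weighted calculation looks like this. The conversion rates below are illustrative assumptions, not benchmarks — in practice they come from your own historical close rates per stage, and the caveat above applies: they only mean something if the business that generated them still resembles the business you're forecasting:

```python
# Sketch of a probability-weighted statistical forecast by stage.
# Stage names, close rates, and pipeline values are all invented.

historical_close_rate = {
    "discovery":   0.10,
    "proposal":    0.35,
    "negotiation": 0.60,
    "verbal":      0.85,
}

open_pipeline = [
    ("discovery",   500_000),
    ("proposal",    800_000),
    ("negotiation", 600_000),
    ("verbal",      300_000),
]

# Weight each deal's value by the historical close rate for its stage.
forecast = sum(historical_close_rate[stage] * value
               for stage, value in open_pipeline)

print(f"Probability-weighted forecast: £{forecast:,.0f}")
```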
The Reconciliation Conversation
A forecast process that uses all three methods — and reconciles the differences explicitly — is more reliable than any single method. Where the bottom-up and statistical numbers diverge, that's a question. Where the VP's top-down adjustment diverges from both, that's another question. The reconciliation conversation is where the honesty lives.
If the bottom-up says £3.2M, the statistical model says £2.8M, and the VP submits £3.9M, those three numbers are telling you something. The right response is not to split the difference. It's to ask why they're different and which assumptions are wrong.
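The reconciliation step can be made mechanical up to the point where the conversation has to take over: flag the divergences, don't average them away. The 10% tolerance below is an arbitrary assumption:

```python
# Sketch: surface forecast-method divergences as questions for the
# reconciliation conversation. Threshold is an assumed choice.

def reconciliation_questions(bottom_up, statistical, top_down, tolerance=0.10):
    """Return the divergences worth interrogating, as human-readable flags."""
    flags = []
    if abs(bottom_up - statistical) / statistical > tolerance:
        flags.append("Bottom-up and statistical diverge: "
                     "whose assumptions are wrong?")
    if top_down > max(bottom_up, statistical):
        flags.append("Top-down exceeds both inputs: "
                     "what does the VP know that the data doesn't?")
    return flags

# The example from the text: £3.2M bottom-up, £2.8M statistical, £3.9M top-down.
for question in reconciliation_questions(3_200_000, 2_800_000, 3_900_000):
    print(question)
```

Both flags fire on the example numbers. The code can only raise the questions; answering them is the conversation.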
THE FRAMEWORK
The full interrogation framework is Dispatch #001 — The Forecast. 38 questions across four sections that expose what the forecast is hiding. $97. Instant download.
See the full framework →

The Conversation That Produces Honest Forecasts
What it looks like when the VP is willing to hear a bad number: they ask questions rather than express disappointment. "What's blocking this deal?" not "why is this so low?" They help solve the problem rather than assign blame. They treat the commit as information, not as a personal statement about the rep's competence or effort.
Critically, they don't uplift conservative forecasts automatically. They've built a culture where the number submitted is the number believed, because they've been consistent about what happens when someone brings a bad number — which is a calm, problem-solving conversation, not a performance review.
This is a leadership condition, not a process condition. You can implement the best forecast methodology in the world — bottom-up plus statistical reconciliation, clean CRM data, weekly pipeline reviews — and it will produce dishonest numbers if the culture punishes honesty. The methodology is the easy part. The culture is the hard part.
The test is simple: what happens when a rep commits £400k and closes £350k? If the answer involves a difficult conversation about missing target, you have a problem. The commit was accurate. The rep did what a good forecasting culture requires. Penalising accurate commits that come in below target destroys the forecast. Every rep learns from that outcome.
Deal Stage Hygiene
Deals get stuck in "Proposal Sent" for 90 days because nobody wants to mark them lost. The AE still believes — or prefers to believe — the deal is alive. The manager doesn't want to have the conversation. So the deal sits in the pipeline generating false coverage, inflating the forecast, and preventing an honest assessment of what's actually closable.
Stage hygiene degrades slowly and almost invisibly. A deal slips from "closing this quarter" to "probably next quarter" to "we're still nurturing the relationship" to a CRM record that hasn't been touched in four months but is still technically in stage four.
The fix is a stage exit criterion: a defined condition that must be true for a deal to remain in each stage. Not a field the rep ticks. A condition the manager verifies. "This deal is in stage four" should mean: the customer has seen a proposal in the last 30 days and has confirmed the timeline. If that isn't true, the deal moves back.
If a deal hasn't had a customer interaction in 30 days, it's not in the stage it claims to be in. That's not a data quality problem. It's a conversation that hasn't happened yet — and the forecast is hiding the fact that it needs to happen.
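That 30-day check is trivially automatable — not to replace the conversation, but to surface which deals need it. The data shape below is an assumption; the 30-day window and the exit-criterion logic follow the text:

```python
# Sketch of a stage-exit check: flag deals that fail the criterion
# and should move back a stage. Deal records are invented examples.

from datetime import date, timedelta

MAX_QUIET_DAYS = 30

deals = [
    {"name": "Acme renewal", "stage": "Proposal Sent",
     "last_interaction": date.today() - timedelta(days=12)},
    {"name": "Globex new business", "stage": "Proposal Sent",
     "last_interaction": date.today() - timedelta(days=90)},
]

def fails_exit_criterion(deal):
    """True if the deal has gone quiet past the stage's allowed window."""
    quiet = (date.today() - deal["last_interaction"]).days
    return quiet > MAX_QUIET_DAYS

flagged = [d["name"] for d in deals if fails_exit_criterion(d)]
print(flagged)  # deals that are not in the stage they claim to be in
```

The output is a list of conversations that haven't happened yet, which is exactly what the text says the forecast is hiding.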
The Commit Conversation
A rep commit should mean something. "I'm committing this deal" should mean: I have high confidence this will close this quarter, I know the next step, I know who's making the decision, I know what the commercial terms look like, and I know what could stop it. It should require the rep to articulate the risk, not just assert the outcome.
In most organisations, a commit means "this is in my best case." That's not a commit. It's an aspiration with a label on it.
Building a genuine commit culture requires two things. First, reward accurate commits — not just high ones. A rep who commits £500k and closes £490k has done something valuable. Treat it as such. Second, never penalise accurate commits that turn out to be below target. The moment that happens, the next commit goes up. And the one after that. And the forecast becomes useless again.
The forecast is not a performance review. It's information. Treat it as information and you'll get better information. Treat it as a judgement and you'll get numbers that are designed to survive a judgement — which is to say, numbers that tell you nothing.
Every forecast that goes unchallenged teaches the organisation that the process doesn't matter. Every forecast that gets examined honestly teaches the opposite.
The gap between £4.2M and £3.1M is not a macro problem. It's a conversation problem. The forecast that went to the board was produced by a process that makes dishonesty rational and honesty costly. Until you change the process — and the culture that the process reflects — the gap will keep recurring, and the post-mortem will keep finding new ways to describe the same failure.