◆   Field Dispatch #005 — Free Sample   ◆   Section 01 of 04   ◆
Dispatch #005  —  Professor Pipeline

Plan Autopsy — Section 01 of 04

The full dispatch contains 38 questions across four sections. What follows is Section 1 in its entirety — 10 questions, each with its mechanism, what it reveals, and the red flags that mean the plan is already failing.

Read it. If you recognise February in these questions, the remaining 28 are in the full dispatch.

Free sample — no email required

Plan Autopsy

Run it in the first week of February. After the SKO energy fades. Before the quarter is too far gone to act. Run it when nobody wants to say what January already proved. Ten questions. Each one is a scalpel.

Q01 / Plan Autopsy
What three assumptions does this plan depend on most — and which one has already been tested?
Why it works

Every revenue plan rests on a small number of load-bearing assumptions. If those assumptions aren't named explicitly — and tested against real data — the plan is theoretical at best and delusional at worst. The first week of February is the earliest point at which real data exists to run the test.

What the answer reveals

If a revenue leader cannot immediately name the three load-bearing assumptions in the plan — the ones the entire model sits on — the plan was never stress-tested. It was assembled. And if none of those assumptions has been tested against real data from the first 38 days, the plan is still theoretical.

Red flags
  • "The plan is based on a lot of factors"
  • Assumptions buried in footnotes rather than stated on slide one
  • "We'll know more at the end of Q1"
  • Silence, followed by a request to reschedule
Q02 / Plan Autopsy
What was forecast for January — and what actually happened?
Why it works

January is the plan's first contact with reality. It is not a warm-up. It is not a baseline-setting exercise. It is the first test of whether the assumptions embedded in the plan hold when they meet customers, reps, and markets that don't know they're supposed to behave as modelled.

What the answer reveals

January is not a grace period. It is the first data point. If actual performance diverged from forecast and nobody has documented why, the divergence will repeat. The plan will manage around the gap rather than close it.

Red flags
  • "January is always slow"
  • No documented comparison of forecast vs actual
  • "We're tracking the right activities"
  • Variance attributed to seasonality with no historical evidence
Q03 / Plan Autopsy
Which strategic initiatives from the deck are still active — and which ones have been quietly abandoned?
Why it works

SKO decks announce initiatives. February tests whether those initiatives had the resources, the ownership, and the urgency to survive first contact with the operational calendar. Most don't. The ones that die quietly are the most dangerous because the plan still assumes their output.

What the answer reveals

Strategic initiatives die quietly. Nobody announces the abandonment. Resources get redirected without a decision. If the answer to this question takes more than 60 seconds to produce, the plan is already running on a different set of priorities than the one presented at the SKO.

Red flags
  • "That one is being reviewed"
  • Initiative owner has changed since January
  • No update on initiative in any meeting since the SKO
  • "We're still working on the approach"
Q04 / Plan Autopsy
Where is the plan already behind — and what is the documented response?
Why it works

No plan survives first contact whole. The question is not whether the plan is behind somewhere — it is whether the team has acknowledged where it is behind and documented a specific response. The absence of a documented response means the plan is being managed by optimism, not by action.

What the answer reveals

Every plan is behind somewhere by February. The question is whether that lag has a documented response or whether it is being managed by optimism about Q2. A plan without a documented response to its first failures is not a plan. It is a wish.

Red flags
  • "We're monitoring it closely"
  • No written corrective action for any area of underperformance
  • "Q2 will make up for it"
  • The same gap existed at this point last year
Q05 / Plan Autopsy
What has the market done since the plan was written that the plan didn't anticipate?
Why it works

Plans are written in a moment. Revenue plans written in October or November are already three to four months old by February. In that time, competitors have moved, buyers have shifted, economic conditions have changed, and some of the markets the plan was built on have behaved differently than the model assumed.

What the answer reveals

Markets move continuously; plans are frozen at the moment they are written. If the competitive landscape, buyer behaviour, or economic conditions have shifted since November and the plan hasn't been updated to reflect that, the plan is operating on stale intelligence.

Red flags
  • "The market is fine"
  • No competitive intelligence update since planning season
  • "Buyers are taking longer to decide but our numbers are the same"
  • A major competitor has changed pricing, product, or go-to-market since November
Q06 / Plan Autopsy
What did the SKO assume about rep capability that February has already challenged?
Why it works

Revenue plans contain embedded assumptions about rep capability — ramp time, conversion rates, average deal size, activity volume — that are typically derived from historical averages or aspirational targets. February is the first month in which new hires are being tested, new products are being sold, and new markets are being worked. The gap between assumption and reality shows up here first.
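The gap between assumed and actual rep capability can be made concrete with a back-of-envelope capacity check. A minimal sketch, with entirely hypothetical numbers and a deliberately simplified model (headcount × ramp × quota):

```python
def modelled_capacity(reps: int, ramp_pct: float, full_quota: float) -> float:
    """Bookings capacity if each rep delivers ramp_pct of a full quota."""
    return reps * ramp_pct * full_quota

# Hypothetical plan assumption: 10 reps at 80% ramp on a $300k quarterly quota.
planned = modelled_capacity(10, 0.80, 300_000)   # $2,400,000
# Hypothetical February reality: same 10 reps, but ramping at only 50%.
actual = modelled_capacity(10, 0.50, 300_000)    # $1,500,000
print(f"capacity gap: ${planned - actual:,.0f}")  # capacity gap: $900,000
```

Even this crude arithmetic shows why a slower-than-modelled ramp is not a rounding error: it silently removes a third of the quarter's capacity.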

What the answer reveals

SKOs build plans on the assumption that reps will ramp, convert, and execute at modelled rates. February is the first test of whether those assumptions hold. If ramp is slower, conversion is lower, or execution is inconsistent, the capacity model that underpins the plan is already wrong.

Red flags
  • Ramp targets unchanged despite Q4 underperformance
  • "The new reps are still getting up to speed"
  • Activity metrics on track, outcome metrics not
  • Conversion assumptions based on last year's top performer cohort
Q07 / Plan Autopsy
What dependencies on other functions (marketing, product, CS) were built into the plan — and which ones have already slipped?
Why it works

Revenue plans are written as if revenue is the only function in the company. In practice, revenue targets depend on marketing pipeline that hasn't been created yet, product releases that haven't shipped, and customer success capacity that is already stretched. The moment any of those dependencies slips, the revenue plan slips with it — whether or not anyone has acknowledged the connection.

What the answer reveals

Revenue plans almost always depend on marketing pipeline, product launches, and customer success capacity that other teams own. If those dependencies have slipped and the revenue plan hasn't been updated to reflect the delay, the plan is projecting outcomes from inputs that no longer exist on the same timeline.

Red flags
  • Marketing pipeline below plan with no revenue plan adjustment
  • Product launch delayed with revenue impact unquantified
  • "We're in conversations with CS about capacity"
  • Dependencies listed in the plan with no named owner or SLA
Q08 / Plan Autopsy
Who owns accountability for the plan — and have they been in the same room this month to review it?
Why it works

Plans without a regular, named, cross-functional review cadence become orphans. Accountability diffuses. Each function manages to its own metrics rather than to the shared plan. By February, the plan exists as a document that everyone references but nobody owns.

What the answer reveals

Plans without a named owner and a regular review cadence drift. If the people accountable for the plan haven't reviewed it together since the SKO, the plan is being executed but not managed. Those are different things.

Red flags
  • No joint plan review scheduled since the SKO
  • "The CRO owns it" with no cross-functional accountability
  • Each function reporting to the plan independently with no reconciliation
  • Last plan review was a slide deck, not a structured interrogation
Q09 / Plan Autopsy
What does hitting plan require in Q2 that isn't currently in place?
Why it works

Revenue leaders who are managing a Q1 gap almost always answer it by pointing to Q2. Q2 will recover. Q2 is bigger. Q2 has a new product, new pipeline, new hires. The question is whether the inputs that Q2 requires are currently being built. If they aren't, Q2 is a projection with no foundation.

What the answer reveals

Q2 outcomes are determined by Q1 inputs. If the pipeline, the hiring, the product readiness, or the market development required to hit Q2 numbers isn't already underway, Q2 will not deliver what the plan projects. The time to address that is in February, not April.

Red flags
  • Q2 pipeline coverage below 3x with no creation plan
  • Headcount required for Q2 not yet hired or onboarded
  • "We'll build the pipeline in Q1" — in Q1's fifth week
  • Q2 plan assumes a product launch with no confirmed ship date
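The coverage flag above is a simple ratio test. A minimal sketch, with illustrative numbers and the 3x threshold from the red-flag list (thresholds vary by sales motion; 3x is an assumption, not a law):

```python
def pipeline_coverage(open_pipeline: float, quarter_target: float) -> float:
    """Coverage ratio: open qualified pipeline divided by the revenue target."""
    return open_pipeline / quarter_target

# Hypothetical Q2 position: $9.0M of open pipeline against a $4.0M target.
ratio = pipeline_coverage(9_000_000, 4_000_000)
flagged = ratio < 3.0  # below the assumed 3x threshold
print(f"coverage {ratio:.2f}x, flagged: {flagged}")  # coverage 2.25x, flagged: True
```

A flagged ratio in February still leaves time to build pipeline; the same ratio discovered in April does not.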
Q10 / Plan Autopsy
If you redesigned the plan today with what you know now, what would change?
Why it works

This question is the most dangerous one in the framework because it requires intellectual honesty from people who have staked professional credibility on the plan being right. It also produces the most valuable output: an honest picture of what the business knows but hasn't yet said in a room where it matters.

What the answer reveals

This is the most important question. If the honest answer is "nothing", the plan is either genuinely robust or nobody is willing to say what they know. If the honest answer surfaces material changes, those changes need to happen now. The longer a plan operates on assumptions the team knows are wrong, the harder the eventual correction becomes.

Red flags
  • "Nothing, the plan is solid" — with no evidence
  • The answer changes when posed privately vs in the room
  • Everyone looks at the CRO before answering
  • The answer requires a follow-up meeting to produce
End of free sample — Section 01 of 04

The remaining 28 questions are in the full dispatch.

Three more sections: an assumption stress test, a resource and dependency audit, and a February discipline protocol. Plus a scoring rubric and four printable worksheets.

02 Assumption Stress Test
03 Resource & Dependency Audit
04 February Discipline Protocol
Get the Full Dispatch — $97

Instant download. No subscription. No upsell.