◆   Field Dispatch #002 — Free Sample   ◆   Section 01 of 04   ◆
Dispatch #002  —  Professor Pipeline

Lead Quality Autopsy —
Section 01 of 04

The full dispatch contains 38 questions across four sections. What follows is Section 1 in its entirety — 10 questions, each with its mechanism, what it reveals, and the red flags that mean the handover was broken before it started.

Read it. If you recognise the problems, the remaining 28 questions are in the full dispatch.

Free sample — no email required

Lead Quality Autopsy

Run it on every rejected lead. Run it before the argument starts. Run it when your instinct says someone is lying to themselves about what "qualified" means. Ten questions. Each one is a scalpel.

Q01 / Lead Quality Autopsy
What specific behaviour — not demographic fit — triggered this lead's qualification score?
Why it works

Scoring models conflate profile match with buying intent. A VP of Sales at a 200-person SaaS company who downloaded a whiteboard template is not a qualified lead. They are a person with a job title. The score tells you who they are. The behaviour tells you what they want.

What the answer reveals

If the answer is a job title, a company size, or a firmographic attribute, the lead was never qualified on intent. It was qualified on hope.

Red flags
  • "They fit our ICP perfectly"
  • "They downloaded our pricing guide" (once, six weeks ago)
  • "They're in the right industry"
  • Silence followed by a recitation of firmographic data
Q02 / Lead Quality Autopsy
What is the exact evidence that this person has authority to influence a purchase decision?
Why it works

A qualified lead is not a person who is interested. It is a person who is interested and has influence over whether a purchase happens. These are not the same thing. One downloads content. The other signs or influences the signature.

What the answer reveals

If there is no evidence of buying authority — not job title, but actual signal of procurement involvement — the lead may be a researcher, a curious employee, or someone benchmarking competitors. All useful to marketing. None of them are sales-ready.

Red flags
  • "Their title suggests they'd be involved"
  • "They're a Director, so probably"
  • No evidence of prior purchase involvement in the account
Q03 / Lead Quality Autopsy
When did they last engage, and what did they engage with — or is the score built on a single form fill from six weeks ago?
Why it works

Scores decay. Intent signals decay faster. A lead who engaged twice in one week three months ago is not the same lead today. The CRM does not know this. The score does not reflect this. The rep who calls them will find out immediately.

What the answer reveals

If the most recent engagement is more than 30 days old and was a single passive interaction, the lead is cold and has been handed over warm. That is a misrepresentation of buying intent.

Red flags
  • Last engagement: 45+ days ago
  • Single touch, single channel
  • "They've been in our nurture sequence" (which they haven't opened)
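The decay argument above can be made concrete. A minimal sketch of a recency-weighted score, assuming a hypothetical half-life model (the function name and the 14-day half-life are illustrative choices, not any real scoring product's behaviour):

```python
from datetime import date

def decayed_score(raw_score: float, last_engaged: date,
                  today: date, half_life_days: float = 14.0) -> float:
    """Discount a lead score by engagement recency.

    Hypothetical model: intent halves every `half_life_days`
    without a new touch, so a 45-day-old signal is worth a
    fraction of its face value.
    """
    age_days = (today - last_engaged).days
    return raw_score * 0.5 ** (age_days / half_life_days)

# A lead scored 80 off a single form fill 45 days ago:
stale = decayed_score(80, date(2024, 1, 1), date(2024, 2, 15))
# The same raw score, but engaged three days ago:
fresh = decayed_score(80, date(2024, 2, 12), date(2024, 2, 15))
# `stale` lands well under any sensible MQL threshold; `fresh` does not.
```

The point of the sketch is the shape, not the constants: a CRM that hands over the raw 80 in both cases is asserting the two leads are equally warm, and the rep who calls the stale one finds out otherwise.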
Q04 / Lead Quality Autopsy
What problem were they trying to solve when they converted, and how do we know?
Why it works

A lead without a problem is a contact. Sales cannot sell to a contact. They can only educate them. Education closes at a far lower rate than problem-solving. The handover process that fails to capture the problem is handing over a name, not a lead.

What the answer reveals

If the answer is "we don't know" or "it's implied by the content they downloaded," the context transfer has already failed before the first call. The rep will spend the first ten minutes of discovery doing work that should have been done at qualification.

Red flags
  • "They downloaded our forecast accuracy guide so probably forecasting"
  • No form field capturing intent or use case
  • "We can ask them on the call"
Q05 / Lead Quality Autopsy
Has anyone spoken to this person, or has every interaction been automated?
Why it works

Automated nurture sequences tell you what content resonates. They do not tell you whether a human being is ready to have a commercial conversation. A lead that has only ever interacted with emails and forms has never been tested against a real qualifying question.

What the answer reveals

If no human has ever spoken to this lead, the qualification is theoretical. Every data point in the score is a proxy. The first call will be the first real qualification moment — which means the lead was handed over unqualified.

Red flags
  • Zero human touchpoints in the lead history
  • "They're fully self-serve in the funnel"
  • SDR contacted them once with no response and marked them qualified anyway
Q06 / Lead Quality Autopsy
What is the gap between their stated interest and the product they would actually need to buy?
Why it works

Leads qualify themselves into the top of the funnel based on content interest, not product fit. A person interested in "sales forecasting best practices" may need a coaching tool, a data warehouse, or a CRM replacement. If the gap between content interest and product fit is large, the lead is a research project, not a buying signal.

What the answer reveals

If the answer requires significant assumptions about how their interest maps to a specific product, the qualification criteria are doing too much interpretive work. The rep inherits those assumptions and is the first person who will test them against reality.

Red flags
  • "They're interested in the problem space"
  • Content consumed is top-of-funnel educational, not solution-oriented
  • No product-specific engagement in the lead history
Q07 / Lead Quality Autopsy
Who inside marketing signed off that this lead was sales-ready, and what criteria did they use?
Why it works

MQL status is often an automated threshold, not a human judgment. If no person made a deliberate decision that this lead was ready for a sales conversation, then no person is accountable when sales rejects it. The automation is accountable. The automation cannot have a conversation about it.

What the answer reveals

If the answer is "the system scored them over 80" with no human review, the handover process has no quality gate. The pipeline receives whatever the model produces, regardless of whether a practitioner would agree it belongs there.

Red flags
  • "The scoring model qualified them automatically"
  • No human review step in the MQL process
  • "Marketing ops set the threshold, not marketing"
Q08 / Lead Quality Autopsy
What would disqualify this lead — and has anyone checked?
Why it works

Qualification frameworks are almost always additive. Points are added for positive signals. Disqualifying signals are rarely subtracted with the same rigour. A lead that passes qualification on positive signals may fail on a single disqualifier that nobody looked for.

What the answer reveals

If disqualification criteria exist but haven't been checked, the qualification process is one-directional. It finds reasons to pass leads through. It does not find reasons to stop them.

Red flags
  • "We check for obvious ones like competitors"
  • No documented disqualification checklist
  • "Sales will figure it out on the call"
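One way to make disqualification one-directional in the other direction is a second pass that a high score cannot buy back: any single hit blocks handover. A sketch under assumed criteria (every name here, from `DISQUALIFIERS` to the field keys, is hypothetical):

```python
# Hypothetical hard gates. These are checks, not negative points:
# no amount of positive scoring offsets a hit.
DISQUALIFIERS = {
    "competitor": lambda lead: lead.get("company") in {"RivalCo"},
    "student_email": lambda lead: lead.get("email", "").endswith(".edu"),
    "no_budget_cycle": lambda lead: lead.get("fiscal_start") is None,
}

def passes_gate(lead: dict) -> tuple[bool, list[str]]:
    """Return (sales_ready, reasons_blocked) for one lead."""
    hits = [name for name, check in DISQUALIFIERS.items() if check(lead)]
    return (len(hits) == 0, hits)
```

The design choice is the tuple: the gate returns the reasons, so a blocked lead produces a documented answer to "what would disqualify this lead" instead of a silent score.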
Q09 / Lead Quality Autopsy
How many leads from this same source were accepted in the last quarter, and what was their close rate?
Why it works

A single lead rejection is an anecdote. A pattern of rejections from the same source is evidence. If a content channel, campaign, or partner programme has a consistent rejection rate above 40%, the qualification criteria for that source are miscalibrated. The individual lead is not the problem. The source is the problem.

What the answer reveals

If nobody has looked at source-level close rates, the handover quality conversation is operating without data. Both teams are arguing about anecdotes.

Red flags
  • "We don't track close rate by source"
  • Source close rate is below 10% but volume is high
  • "That campaign always generates good leads" (based on volume, not conversion)
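The source-level evidence this question asks for is one small aggregation. A sketch, assuming leads can be reduced to (source, closed_won) pairs rather than any particular CRM export format:

```python
from collections import defaultdict

def close_rate_by_source(leads):
    """leads: iterable of (source, closed_won) pairs.

    Returns {source: (accepted_count, close_rate)} so the
    conversation is about a pattern, not one rejected lead.
    """
    tally = defaultdict(lambda: [0, 0])  # source -> [accepted, won]
    for source, won in leads:
        tally[source][0] += 1
        tally[source][1] += won
    return {s: (n, won / n) for s, (n, won) in tally.items()}
```

Returning the count alongside the rate matters: a 50% close rate on two leads and a 9% close rate on two hundred are different arguments, and the volume-heavy, low-conversion source is exactly the red flag above.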
Q10 / Lead Quality Autopsy
If this lead came in through sales outbound instead of marketing inbound, would we call it qualified?
Why it works

This is the mirror test. Sales applies a different standard to leads they generate than to leads they receive. A prospect who responds to a cold outbound sequence gets a first call. A prospect who downloads a whiteboard template gets a first call too — but one of them was chosen deliberately, and one was routed automatically.

What the answer reveals

If the answer is "no" — if sales would not have chosen this person as an outbound target — then the inbound qualification standard is lower than the outbound standard. That gap is where the argument lives.

Red flags
  • Silence
  • "That's a different process"
  • "Inbound shows intent, outbound doesn't" (when the intent signal is one form fill)
End of free sample — Section 01 of 04

The remaining 28 questions are in the full dispatch.

Three more sections. A handover health scan. An alignment reality check. A Monday morning operating rhythm. A scoring rubric. Four printable worksheets.

02 Handover Health Scan
03 Alignment Reality Check
04 Monday Morning Execution
Get the Full Dispatch — $97

Instant download. No subscription. No upsell.