◆   FIELD DISPATCH SERIES — REVENUE OPERATIONS   ◆   DOWNLOADED, NOT HIRED   ◆

Why the Handover Between Sales and Marketing Keeps Failing

The MQL is the most argued-about metric in a revenue organisation and possibly the least useful one. Marketing optimises for MQL volume because that's what they're measured on. Sales ignores MQLs because most of them aren't ready to buy. Both sides are right. Both sides are wrong. The handover is where the argument lives, and nobody has fixed it because fixing it requires both sides to give up a metric they like.

The result is a system where marketing produces leads that sales doesn't follow up on, and sales complains about lead quality while marketing complains about follow-up rates. Both are accurate observations. Neither is a solution.

This dysfunction is widespread, well-documented, and almost universally ignored in favour of adding another SLA policy or buying another piece of marketing automation software. Neither intervention addresses the root cause. The root cause is structural: the two teams are optimising for different outcomes and measuring different things, and nobody has forced them to share accountability for a single number.

The MQL Definition Problem

What counts as an MQL is usually a negotiation — resolved in favour of whoever has more political capital, not in favour of what actually predicts purchase intent.

In most organisations, the MQL threshold is set low enough for marketing to generate the volume the plan requires. This is rational behaviour. Marketing leaders have pipeline contribution targets. Pipeline contribution is calculated from MQL volume. Setting the MQL threshold high reduces volume. Volume reduction makes the contribution target harder to hit. So the threshold stays low.

The result is a large number of MQLs, most of which have no buying intent. They downloaded a whitepaper. They attended a webinar. They visited the pricing page once after clicking an ad. These are engagement signals, not intent signals. The difference matters enormously.

Engagement signals tell you a prospect found your content interesting enough to interact with. They say nothing about whether that prospect has a problem your product solves, a budget allocated to solving it, or a timeline to act. A CFO who reads three thought-leadership pieces and attends your annual webinar may be a genuine intellectual admirer. That admiration does not make them a buyer.

Intent signals are different. Return visits to pricing pages. Searches on comparison sites. Direct inquiries to the sales team. Conversations initiated at conferences. These behaviours indicate a prospect is actively evaluating a purchase, not merely consuming content. The gap between the two is the gap between a lead that converts and one that wastes an AE's afternoon.

The fix is not a higher MQL score threshold, though that helps. The fix is deriving the threshold from actual closed-won data — looking back at the leads that became deals and identifying what signals they exhibited at the point of handover. That analysis is almost never done. When it is done, the MQL definition it produces usually looks very different from the one that's been in operation for the last two years.
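That lookback can be sketched in a few lines of Python. Everything here is illustrative, not a real data set: the lead records, the signal names, and the baseline are invented. The structure is the point — take leads with a closed-won flag, and rank the handover signals by how much they lift the close rate above baseline.

```python
# Hypothetical historical leads: the signals each exhibited at handover,
# and whether the lead eventually became a closed-won deal.
leads = [
    {"signals": {"whitepaper_download"}, "closed_won": False},
    {"signals": {"webinar_attended"}, "closed_won": False},
    {"signals": {"pricing_page_visit", "demo_request"}, "closed_won": True},
    {"signals": {"pricing_page_visit"}, "closed_won": True},
    {"signals": {"whitepaper_download", "webinar_attended"}, "closed_won": False},
    {"signals": {"pricing_page_visit", "whitepaper_download"}, "closed_won": False},
]

# Overall close rate across all handed-over leads.
baseline = sum(l["closed_won"] for l in leads) / len(leads)

def signal_lift(leads, baseline):
    """For each signal, how much more often do leads showing it close?"""
    lifts = {}
    for signal in {s for l in leads for s in l["signals"]}:
        cohort = [l for l in leads if signal in l["signals"]]
        rate = sum(l["closed_won"] for l in cohort) / len(cohort)
        lifts[signal] = rate / baseline if baseline else 0.0
    return lifts

lifts = signal_lift(leads, baseline)
for signal, lift in sorted(lifts.items(), key=lambda kv: -kv[1]):
    print(f"{signal}: {lift:.1f}x baseline")
```

On this toy data, pricing-page visits and demo requests outperform the baseline while whitepaper downloads show no lift at all — which is exactly the shape of finding that forces an MQL definition to change.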

Why Sales Ignores MQLs

The first few months of following up on bad MQLs teaches AEs that the effort isn't worth it.

An AE who follows up on fifty MQLs and connects with thirty, books meetings with eight, and converts two into opportunities learns a conversion rate of 4%. They also learn that the twenty-two people who connected but didn't book a meeting were not interested in buying anything, and that many of the fifty had no idea why they were being called. After that experience, the pattern is set — not because the AE is lazy, but because they've correctly updated their model of lead quality.

Marketing's response to low follow-up rates is almost always to add process. Stricter SLAs. Automated reminders. Escalation paths when MQLs sit untouched. These interventions address the symptom — a slow response — rather than the cause, which is that the response feels futile. You cannot shame an AE into following up on leads that don't go anywhere. You can only make following up worth their time by improving the quality of what you're sending them.

Breaking the pattern requires fixing the root cause: lead quality, not follow-up compliance. The compliance problem is a symptom of a trust problem. Sales has lost faith in the MQL. Restoring that trust requires demonstrating, with actual data, that the leads being sent are worth the effort.

The Blame Game and Why It Persists

Marketing says sales doesn't follow up. Sales says the leads are bad. Both are often true simultaneously. The reason the argument persists rather than being resolved is that resolving it requires an audit — and the audit creates accountability for whoever loses the argument.

If you run the analysis and find that marketing's MQLs have a 2% pipeline conversion rate, marketing has a problem to explain. If you run the analysis and find that 40% of the MQLs that sales abandoned later became deals with competitors, sales has a problem to explain. Most of the time, both things are partly true. The analysis would show marketing's threshold is too low and that sales is ignoring leads it should be working.

That dual finding is actually useful. It gives both teams something to fix. But it requires both teams to be willing to look at the data together, which requires a level of inter-departmental trust that organisations where this argument is active rarely have.

The data would resolve the argument — which is exactly why nobody looks at it. Both sides prefer the argument to the answer. The argument allows each team to point at the other. The answer requires both teams to accept partial responsibility and change their behaviour.

ICP Alignment — The Deeper Problem

Beneath the MQL argument is a problem that rarely gets named explicitly: the ICP that marketing is targeting is often not the same ICP that sales is closing.

This divergence develops gradually and without anyone intending it. Marketing builds content and campaigns for the buyer persona they believe is most likely to engage — typically based on existing research, job titles that respond to ads, or whoever the last CMO thought the buyer was. Sales closes deals with whoever will buy, which may be a different function, a different company size, a different vertical, or a different set of pains. Over time, the two populations drift apart.

When this divergence exists, no handover process will fix it. Marketing is filling the top of the funnel with people who don't match what sales can close. The SDR is calling people who are interested in the content but not in the product. The AE is getting meetings with prospects who aren't the decision-maker, don't have budget, or are solving a slightly different problem than the one your product addresses.

The only solution is a shared ICP derived from closed-won data. Not from personas. Not from market research. From the actual deals that closed — what did those customers have in common? What industry, function, company stage, problem statement? What did the champion look like? What was the trigger that started the evaluation?

This analysis should be reviewed quarterly and owned jointly by sales and marketing. When the ICP is owned by marketing alone, it drifts toward whoever engages with content. When it's owned by sales alone, it drifts toward whoever happened to buy recently. When it's owned jointly and derived from data, it stays anchored to reality.

Lead Scoring — What It Measures and What It Doesn't

Most lead scoring models assign points for email opens, page views, content downloads, and webinar attendance. These are engagement signals. They measure whether a prospect is interested in your content. They do not measure whether a prospect has a problem your product solves, a budget to solve it, and a timeline to act.

A prospect who read five blog posts and opened three emails has demonstrated interest in your thinking. That's a different thing from intent to buy. The lead score looks identical from the outside — both hit the MQL threshold — but the conversion probability is radically different.

Lead scoring that conflates engagement with intent produces scores that look good in a marketing dashboard and convert poorly in sales. The marketing team can show a healthy MQL volume and a well-performing nurture programme. The sales team can show a conversion rate that doesn't justify the follow-up effort. Both sets of data are accurate. They're measuring different things.

Better scoring models incorporate negative signals — job titles outside the buying committee, company sizes outside the ICP, geographies outside the serviceable market. They weight intent signals more heavily than engagement signals: direct product page visits, pricing inquiries, trial sign-ups. They also incorporate firmographic fit as a qualifying layer rather than a bonus — a prospect who opened six emails but works at a ten-person company below the minimum viable deal size should not hit the MQL threshold regardless of their engagement score.
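A scoring model with those properties can be sketched as follows. The weights, signal names, and thresholds are all invented for illustration; what matters is the structure: intent outweighs engagement, negative signals subtract, and firmographic fit is a hard gate rather than a bonus.

```python
# Illustrative weights only -- a real model should derive these
# from closed-won data, not from a guess like this one.
INTENT_WEIGHTS = {"pricing_page_visit": 30, "trial_signup": 40, "direct_inquiry": 50}
ENGAGEMENT_WEIGHTS = {"email_open": 2, "blog_view": 3, "webinar_attended": 5}
NEGATIVE_WEIGHTS = {"title_outside_buying_committee": -20, "outside_serviceable_geo": -30}

MQL_THRESHOLD = 50
MIN_COMPANY_SIZE = 50  # below minimum viable deal size

def score(prospect):
    pts = 0
    for signal in prospect["signals"]:
        pts += INTENT_WEIGHTS.get(signal, 0)
        pts += ENGAGEMENT_WEIGHTS.get(signal, 0)
        pts += NEGATIVE_WEIGHTS.get(signal, 0)
    return pts

def is_mql(prospect):
    # Firmographic gate first: a company below the minimum viable
    # deal size never qualifies, regardless of engagement score.
    if prospect["company_size"] < MIN_COMPANY_SIZE:
        return False
    return score(prospect) >= MQL_THRESHOLD

engaged_but_tiny = {"company_size": 10,
                    "signals": ["email_open"] * 6 + ["webinar_attended"]}
fits_and_intends = {"company_size": 400,
                    "signals": ["pricing_page_visit", "pricing_page_visit", "direct_inquiry"]}

print(is_mql(engaged_but_tiny))   # gated out despite the engagement
print(is_mql(fits_and_intends))   # intent plus fit clears the threshold
```

The gate is the design choice worth copying: in an additive-only model, six email opens can eventually outscore one pricing inquiry, which is precisely the failure mode described above.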

THE FRAMEWORK

The full interrogation framework is Dispatch #002 — The Handover. 38 questions across four sections that expose what's breaking between marketing and sales. $97. Instant download.

See the full framework →

SLA Mechanics and Why They Don't Get Hit

The 5-minute call-back SLA for inbound MQLs exists because the data is unambiguous: conversion rates drop sharply after the first five minutes of delay, and by the time a lead is an hour old the probability of connecting is a fraction of what it was at the point of submission.

Most organisations have an SLA. Most don't hit it consistently. The gap between policy and practice is treated as a discipline problem — solve it with reports, reminders, and manager escalations. This misunderstands what's happening. The SDRs and AEs who aren't hitting the SLA are not failing to understand its importance. They have correctly assessed that the leads coming in are not worth dropping everything to call in the next five minutes.

An SLA on a bad lead is urgency applied to a waste of time. The five-minute response is worth enforcing when the lead quality justifies it. When an inbound prospect has the right company profile, the right job title, has visited the pricing page twice, and submitted a direct inquiry form — that lead is worth calling immediately. When an inbound prospect downloaded a whitepaper three weeks ago, attended a webinar last Tuesday, and has a job title that's never bought your product — the five-minute SLA is a policy that trains the sales team to be cynical about all inbound, including the good ones.

Fixing SLA compliance requires fixing lead quality first. Then the SLA becomes a reasonable expectation, and non-compliance is actually a behaviour problem rather than a rational response to a broken process.

What a Working Handover Requires

A working handover is not a new tool. It is not a tighter SLA policy. It is not another workshop where both teams agree to collaborate better and then return to their respective dashboards and optimise against different metrics.

A working handover requires four things to be true simultaneously:

First, an agreed MQL definition based on actual conversion data. Not a definition that marketing and sales negotiated in a room last year. A definition derived from looking at the leads that became deals over the last twelve months and identifying what they had in common at the point of handover. What firmographic profile? What behavioural signals? What level of engagement? That analysis should produce a threshold that filters out the noise and passes through the signal.

Second, an SDR triage process that distinguishes genuine intent signals from engagement signals before leads are passed to AEs. The SDR's job in this model is not to call every MQL immediately. It is to review each MQL against the agreed criteria, make a qualification judgement, and either call with urgency when the profile warrants it or route back to nurture when it doesn't. This requires SDRs who understand the ICP, understand what intent looks like, and have the authority to make that call without being penalised for not attempting every contact.

Third, a feedback loop from sales back to marketing. Every closed-lost deal that came from a marketing-sourced lead should generate a reason code. Every MQL that an SDR declined to call should generate a note on why. These signals feed back into the ICP definition, the lead scoring model, and the content strategy. Without this loop, marketing is flying blind — generating leads based on assumptions about what the market looks like, with no corrective signal from what actually converted.

Fourth, a shared accountability metric that both teams own. Not MQL volume, which is a marketing-only metric. Not pipeline created, which is a sales-only metric. Something in between — qualified pipeline generated from marketing-sourced leads, measured at a stage that requires sales engagement to reach. A metric that marketing can only hit if the leads they generate are genuinely qualified, and that sales can only hit if they actually work the leads they receive. Neither team can game it without the other's cooperation.

The Attribution Argument

Marketing and sales will disagree about attribution — which deals did marketing source, which did sales source, which are legitimately shared. This argument is almost always a proxy for a budget argument: whoever gets credit for revenue gets to defend their headcount in the next planning cycle.

The attribution model is a political construct. First-touch attribution rewards marketing for generating awareness, which is real and valuable but not sufficient. Last-touch attribution rewards whoever was in the room when the deal closed, which can entirely ignore the four months of marketing engagement that put the prospect in that room. Multi-touch attribution distributes credit across every interaction, which is methodologically defensible and operationally complex to the point of being difficult to trust.
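The three models can be contrasted in a few lines. The journey and touchpoint names are hypothetical; the sketch only shows how differently each model distributes credit over the same closed deal.

```python
def attribute(touches, model):
    """Return {touchpoint: credit_share} for one deal's ordered touches."""
    if model == "first_touch":
        return {touches[0]: 1.0}           # all credit to awareness
    if model == "last_touch":
        return {touches[-1]: 1.0}          # all credit to whoever closed
    if model == "linear_multi_touch":
        share = 1.0 / len(touches)         # equal credit per interaction
        credit = {}
        for t in touches:
            credit[t] = credit.get(t, 0.0) + share
        return credit
    raise ValueError(f"unknown model: {model}")

# Hypothetical journey for a single deal, in chronological order.
journey = ["paid_ad", "webinar", "nurture_email", "ae_demo"]

for model in ("first_touch", "last_touch", "linear_multi_touch"):
    print(model, attribute(journey, model))
```

Same deal, three irreconcilable answers — which is why arguing about the model is less productive than instrumenting the funnel itself.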

None of these models perfectly describes how deals actually come together. The question that matters is not who gets credit for which deal — it is whether the funnel from first awareness to closed deal is working efficiently. That requires end-to-end visibility into conversion rates at each stage: from lead to MQL, from MQL to SQL, from SQL to opportunity, from opportunity to close. When those conversion rates are visible and both teams own them, the attribution argument becomes secondary. Both teams can see where the funnel is leaking. Both teams can own the fix.
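Mechanically, that shared view is nothing more than stage-to-stage division over one agreed data set. A minimal sketch, with invented quarterly counts:

```python
# Hypothetical stage counts for one quarter; the numbers are invented.
# The point is a single shared view of where the funnel leaks.
funnel = [
    ("lead", 4000),
    ("mql", 1200),
    ("sql", 300),
    ("opportunity", 120),
    ("closed_won", 30),
]

def stage_conversions(funnel):
    """Conversion rate between each adjacent pair of funnel stages."""
    rates = []
    for (stage_a, n_a), (stage_b, n_b) in zip(funnel, funnel[1:]):
        rates.append((f"{stage_a} -> {stage_b}", n_b / n_a))
    return rates

for step, rate in stage_conversions(funnel):
    print(f"{step}: {rate:.0%}")
```

On these invented numbers the weakest step is MQL to SQL, which would point both teams at the handover rather than at each other.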

That visibility is only possible when marketing and sales share a single data set and report against it jointly. In most organisations, they don't. Marketing reports on its own metrics. Sales reports on its own metrics. The handover sits in the gap between the two dashboards, invisible to both.

The handover is where the revenue organisation breaks. It breaks because the two teams who own it have been measured against incompatible targets for so long that neither can see the whole picture.

Fixing the handover doesn't require trust as a starting point. It requires a shared metric as a structural forcing function — something both teams have to move together, so that the incentives finally align. Everything else follows from that.

DISPATCH #002

The Handover

38 questions that expose what's breaking between sales and marketing — the MQL definition nobody agreed on, the leads that were never qualified, the feedback loop that was never built. $97. Instant download.

Download the Framework — $97

Read Section 01 free →

Other Field Notes

MQL to SQL: Why Your Conversion Rate Is Lying to You
How to Build an ICP That Sales Will Actually Use
SDR Productivity: What to Measure and What to Stop
Win/Loss Analysis: Run One That Actually Changes Behaviour