◆   Field Dispatch Series — Revenue Operations   ◆   Downloaded, Not Hired   ◆

What Sales Operations Actually Does When Done Right

Most sales ops teams are buried. They spend their days pulling reports nobody reads, fixing Salesforce fields that broke because a rep clicked the wrong thing, and fielding requests from the VP of Sales who needs a slide by 3pm. They are not running the revenue engine. They are cleaning up after it. That is not sales operations done right. That is sales operations done cheap.

The difference between a reactive sales ops team and a strategic one is not budget, headcount, or tooling. It is mandate. Strategic sales ops teams own the infrastructure of revenue: the systems, the data, the process, and the insight layer that sits on top of it all. Reactive teams own the inbox. One version makes the revenue number more predictable. The other makes the VP of Sales feel slightly less overwhelmed. Know which one you have.

The Four Domains Sales Ops Should Own

Sales ops is not a catch-all support function. It has a defined scope, and when that scope is understood and respected, it produces compounding value. When it is not, the team becomes an expensive version of admin support. The four core domains are systems, process, data, and commercial insight.

Systems

Sales ops owns the technology stack that salespeople operate in every day. That means CRM architecture, workflow automation, forecasting tools, territory management platforms, and the integrations between them. Ownership here means more than keeping the lights on. It means making deliberate decisions about what fields exist, how data flows between systems, what gets measured automatically versus manually, and what the tech stack will look like in 18 months. A sales ops team that inherits a CRM it did not design is already playing catch-up. System design is strategy. It determines what information is available, how trustworthy it is, and how fast leadership can act on it.

Process

Every stage of the sales process — from lead routing through to close and handoff — should be documented, enforced, and periodically reviewed by sales ops. Not suggested. Not loosely followed. Enforced, with automation and consequence where possible. This includes lead assignment logic, stage entry and exit criteria, approval workflows for discounting, and the handover process between sales and customer success. When sales ops does not own process, process becomes whatever each rep decides it is. And then nobody can diagnose why the win rate is declining.
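Stage exit criteria only count as enforced when something checks them mechanically. A minimal sketch of that idea, with hypothetical stage and field names (nothing here comes from a specific CRM):

```python
# Hypothetical sketch: stage exit criteria expressed as required fields.
# Stage names and field names are illustrative, not from any particular CRM.

STAGE_EXIT_CRITERIA = {
    "discovery": ["budget_confirmed", "decision_maker_identified"],
    "proposal": ["proposal_sent_date", "pricing_approved"],
    "negotiation": ["legal_review_started", "close_date"],
}

def missing_exit_criteria(deal: dict, stage: str) -> list:
    """Return the required fields still empty before a deal may leave `stage`."""
    required = STAGE_EXIT_CRITERIA.get(stage, [])
    return [field for field in required if not deal.get(field)]

deal = {"budget_confirmed": True, "decision_maker_identified": None}
print(missing_exit_criteria(deal, "discovery"))  # ['decision_maker_identified']
```

In practice this logic lives in CRM validation rules or workflow automation rather than a script, but the principle is the same: a deal cannot advance while the list is non-empty, so nobody has to argue about whether the criteria were met.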

Data

Sales ops is responsible for the quality, completeness, and accessibility of commercial data. This is the domain where most teams fail quietly. CRM hygiene degrades gradually. Nobody declares a data emergency when 30% of contacts are missing a company size field. But that decay accumulates, and six months later the segmentation analysis is garbage, the territory plan is built on incomplete account data, and the board deck forecast has a confidence interval wide enough to drive a truck through. Sales ops must own data governance: what gets captured, who is accountable for capturing it, and how quality is monitored over time.

Commercial Insight

This is the domain most reactive sales ops teams never reach. Commercial insight means taking the systems, process, and data you own and producing analysis that changes decisions. Win rate by segment. Sales cycle length by deal size and buyer persona. Pipeline coverage by rep versus historical conversion rates. Quota attainment distribution. These are not reports. They are levers. A strategic sales ops team does not wait to be asked for analysis. It identifies the questions leadership should be asking and answers them before the QBR. If your sales ops team only produces what has been requested, you do not have a sales ops team. You have a reporting function.

What Sales Ops Should Not Own

Scope creep is real and it destroys effectiveness. Sales ops should not own revenue strategy, product positioning, hiring decisions, or the management of individual reps. These responsibilities belong to sales leadership. The moment sales ops becomes a substitute for management, the team loses the neutrality that makes its analysis credible. You cannot trust a forecast produced by a team that also has a personal stake in the number looking good.

Sales ops should also not own the customer success stack or the marketing automation platform unless the organisation has explicitly built a RevOps function that consolidates these. In a pure sales ops model, the boundary stops at the moment a deal closes. Post-close is customer success territory. Marketing-generated pipeline is a handoff point, not a management responsibility. Clarity on these boundaries is not bureaucratic. It is what makes accountability possible. If you are thinking through whether to build sales ops or a full RevOps function, the structural differences matter — read the breakdown at RevOps vs Sales Ops: What Is the Difference?

The Reactive vs Strategic Divide

You can diagnose the maturity of a sales ops function in one conversation. Ask the team lead: "What is the most important insight you have surfaced in the last 90 days that leadership did not ask for?" If the answer is blank, or if it is a description of a dashboard they built, you have a reactive team. If the answer is a concrete finding — "we discovered that deals with more than three stakeholders in the first meeting close 40% faster, so we changed the discovery call checklist" — you have a strategic one.

Reactive sales ops teams are characterised by their task queue. They are always busy. They are never ahead. They respond to fires rather than preventing them. They produce reports on schedule but cannot tell you what the reports mean. They know the tool stack intimately but have never modelled what happens to capacity if the average deal size drops by 15%.

Strategic sales ops teams are characterised by their calendar, not their inbox. They have recurring analytical cycles, not just recurring report runs. They run annual territory reviews, quarterly process audits, and monthly pipeline health analyses. They have a point of view on the business and they share it, even when it is inconvenient. The metrics they track go beyond pipeline coverage — a full picture of what a strategic team should be measuring is laid out in RevOps Metrics: The 12 Numbers That Actually Matter.

THE FRAMEWORK

The full interrogation framework is Dispatch #001 — Pipeline & Forecast Framework. 38 questions across four sections that expose exactly where your pipeline data is lying to you and why your forecast never lands. $97. Instant download.

See the full framework →

How to Measure Whether Sales Ops Is Working

Sales ops is an enabling function, which makes its performance harder to measure than that of a quota-carrying role. But harder is not impossible. The metrics that matter fall into two categories: process health metrics and business impact metrics.

Process Health Metrics

These measure whether sales ops is maintaining the infrastructure of revenue. CRM data completeness — the percentage of required fields populated across active pipeline — is a foundational indicator. Forecast accuracy, measured as actual versus called revenue over rolling quarters, tells you whether the data and process infrastructure is producing trustworthy outputs. System adoption rates reveal whether reps are using the tools and processes as designed, or whether they have quietly reverted to spreadsheets and email. Time-to-territory assignment for new hires shows whether the ops function can execute when it matters. These are internal quality metrics. They do not make it into the board deck, but they are the foundation for everything that does. For a deeper view of how data quality specifically affects forecast reliability, see CRM Data Quality: Why Your Forecast Is Always Wrong.
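Both of the foundational indicators above reduce to simple arithmetic. A minimal sketch, assuming CRM records flattened to dictionaries and defining forecast accuracy as one minus the absolute percentage error of the called number (one common convention, not the only one):

```python
# Illustrative calculations for two process-health metrics.
# Field names and the accuracy definition are assumptions, not a standard.

def field_completeness(records: list, required: list) -> float:
    """Share of required fields populated across all active pipeline records."""
    total = len(records) * len(required)
    filled = sum(1 for r in records for f in required if r.get(f) not in (None, ""))
    return filled / total if total else 0.0

def forecast_accuracy(called: float, actual: float) -> float:
    """1 minus the absolute percentage error of the call vs actual revenue."""
    return 1 - abs(actual - called) / actual

pipeline = [
    {"amount": 50000, "close_date": "2024-09-30", "company_size": None},
    {"amount": 20000, "close_date": None, "company_size": "50-200"},
]
print(field_completeness(pipeline, ["amount", "close_date", "company_size"]))
print(forecast_accuracy(called=1_150_000, actual=1_000_000))  # 0.85
```

The point is not the arithmetic. It is that both numbers can be computed automatically, on a schedule, without anyone asking for them.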

Business Impact Metrics

These measure whether the infrastructure sales ops maintains is producing better commercial outcomes. Win rate trend over rolling quarters. Sales cycle length trend by segment. Pipeline coverage ratio versus historical conversion. Quota attainment distribution. Ramp time for new hires versus the prior year cohort. If these metrics are not improving over time, sales ops is maintaining a status quo, not creating value. That is a meaningful distinction, and it is worth having the conversation about it openly rather than hiding behind dashboard complexity.
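Pipeline coverage is only meaningful relative to historical conversion, which is why a blanket 3x rule misleads. A hedged sketch of that comparison, with illustrative numbers:

```python
# Sketch: judge pipeline coverage against historical conversion, not a fixed 3x.
# All figures below are illustrative.

def required_coverage(historical_win_rate: float) -> float:
    """Pipeline dollars needed per dollar of quota if the win rate holds."""
    return 1 / historical_win_rate

def coverage_gap(pipeline: float, quota: float, historical_win_rate: float) -> float:
    """Positive = expected surplus, negative = shortfall, in quota dollars."""
    expected_close = pipeline * historical_win_rate
    return expected_close - quota

print(required_coverage(0.25))  # 4.0 -> a 25% win rate implies 4x coverage
print(coverage_gap(pipeline=3_000_000, quota=1_000_000, historical_win_rate=0.25))
```

A team with 3x coverage and a 25% historical win rate is a quarter of a million short, even though the dashboard shows a healthy-looking ratio.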

One more metric deserves mention: decision latency. How long does it take from a commercial problem being visible in the data to a leadership decision being made? Strategic sales ops teams shorten this interval. They surface insights faster, frame them more clearly, and make the decision options obvious. When decision latency drops, the revenue machine responds faster. That is measurable, even if it takes a few quarters to see clearly in the outcomes data. Capacity modelling is a good example — a sales ops team that proactively runs sales capacity planning before headcount decisions are made is compressing decision latency on one of the highest-leverage choices a sales leader makes.

Common Failure Modes

Sales ops fails in predictable ways. The most common is being under-resourced for the scope it is asked to cover. A single sales ops analyst supporting 50 reps cannot do systems, process, data, and insight simultaneously. They will survive on triage and reactive requests, and over time they will stop trying to do more. Leadership reads this as low ambition. The analyst reads it as impossible expectations. Both are right.

The second failure mode is misaligned reporting lines. When sales ops reports directly to the VP of Sales, it becomes an extension of the VP's agenda. Analyses get filtered. Inconvenient findings get buried. The neutrality that makes sales ops valuable evaporates. The ideal reporting line is into the CRO or COO — someone with visibility across the full revenue motion and no incentive to protect one team's number.

The third failure mode is tool obsession without data discipline. Sales ops teams can spend enormous time evaluating and implementing new software while the existing CRM is full of junk. Technology does not fix data quality problems. It amplifies them. Before any new system goes live, the data question has to be answered: what will go into this system, who is responsible for it, and how will quality be enforced? If those answers are not clear, the implementation will fail and the sales ops team will spend the next year in cleanup mode. The relationship between clean data and reliable forecasting is direct — the mechanism is explained in detail at Why the Forecast Was Never Real.

The fourth failure mode is conflating sales velocity reporting with sales velocity management. Producing a velocity number is not the same as understanding what is driving it or what levers exist to change it. Reporting is table stakes. Diagnosis and prescription are what justify the function's existence.
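The distinction shows up clearly in code. The standard sales velocity formula (opportunities times win rate times average deal size, divided by cycle length) is one line; diagnosis means testing which lever actually moves the number. A sketch with illustrative figures:

```python
# The standard sales velocity formula; all input values are illustrative.

def sales_velocity(opportunities: int, win_rate: float,
                   avg_deal_size: float, cycle_days: float) -> float:
    """Expected revenue per day flowing through the pipeline."""
    return opportunities * win_rate * avg_deal_size / cycle_days

base = sales_velocity(200, 0.25, 40_000, 90)
# Diagnosis, not reporting: test a 10% improvement in each lever separately.
shorter_cycle = sales_velocity(200, 0.25, 40_000, 81)
higher_win_rate = sales_velocity(200, 0.275, 40_000, 90)
print(base, shorter_cycle, higher_win_rate)
```

Reporting stops at `base`. Diagnosis compares the variants and tells leadership which lever to pull first, which is the part that justifies the function's existence.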

A sales ops team that only responds to requests is not an operations function. It is an expensive help desk with a Salesforce login.

The organisations that get sales ops right treat it as an analytical and architectural function, not an administrative one. They give it a clear mandate, adequate resource, a neutral reporting line, and the expectation that it will surface insights leadership has not thought to ask for. That is a high bar. Most organisations have not cleared it. But the ones that have tend to forecast within five percent, ramp reps faster, and lose fewer winnable deals to process failure. The operational rigour compounds. It just takes a decision to start building it properly.
