Why RevOps Teams Can't Build Foundations While Fighting Fires

Scale-up RevOps teams face entropy across data, people, process, and systems simultaneously. What it looks like inside — and what breaks the cycle.

Peter Sterkenburg · February 23, 2026 · 12 min read

HubSpot Solutions Architect & Revenue Operations expert. 20+ years B2B SaaS experience. Founder of HubHorizon.

Monday morning. I open my laptop and the backlog has 47 tickets. Three Slack threads are already marked urgent. The VP of Sales needs pipeline accuracy numbers for a board deck due Friday. Marketing imported 8,000 records from a conference over the weekend. No deduplication, no format validation, just a raw CSV dump into the CRM. Sales leadership created five custom properties while I was offline because "they needed them for a new outbound motion." Customer Success wants a churn risk workflow scoped and built by next sprint.

I'm one of two people on the RevOps team. The company has 150 employees and just closed a Series B.

This isn't dysfunction. This is a scale-up operating exactly as designed. Growth creates demands. Demands converge on the team that connects everything: sales, marketing, customer success, finance. And that team is always too small.

I've lived this as my daily reality for years, not a thought exercise or a case study. The scale-up RevOps experience is a specific kind of chaos that people outside the function don't fully grasp. You're watching the foundation crack beneath your feet while everyone asks you to build another floor on top.

The four pillars under siege

RevOps frameworks love to talk about four pillars: data, people, process, and systems. Clean categories. Neat diagrams. In practice, all four pillars get hit at the same time, and by the same forces.

Data: Decaying at the speed of growth

The data pillar doesn't erode gradually in a scale-up. It erodes at whatever velocity the company is growing.

Conference imports bring records that don't match your naming conventions. New integrations sync data with different formatting assumptions. A sales rep creates a contact manually because "the form was too slow," leaving no job title, no lifecycle stage, no company association. A well-meaning manager builds a list import to "help the team" and introduces duplicates that take weeks to untangle.

I've seen a single CSV import double the duplicate rate on the Contacts database overnight. Nobody noticed for three weeks, until the marketing team asked why their email sends had jumped 40% with no increase in engagement.

According to a 2025 survey from the Revenue Operations Alliance, 75% of RevOps professionals cite data inconsistencies as the most frustrating part of their tech stack. That number doesn't surprise me. What surprises me is that it's only 75%.

A separate 2025 study from Openprise puts it more starkly: 70% of RevOps teams can't make strategic decisions because of poor data quality. Only 11% reported having data they'd call excellent. The other 89% are making revenue decisions on data they know is unreliable.

These inconsistencies map to the formal data quality dimensions (accuracy, completeness, consistency, validity, uniqueness, timeliness), and in a scale-up, all six take hits simultaneously. The import didn't just create duplicates (uniqueness). It also brought phone numbers in three different formats (validity), company names that don't match existing records (consistency), and Contacts missing job titles and lifecycle stages (completeness).
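To make the dimensions concrete, here is a minimal sketch of how three of them — completeness, validity, and uniqueness — can be measured per record or per database. The field names (email, phone, job_title, lifecycle_stage) and the phone format are illustrative assumptions, not HubSpot's actual property names:

```python
import re
from collections import Counter

E164_PHONE = re.compile(r"^\+\d{7,15}$")  # one canonical phone format (assumed)

def completeness(record: dict, required: list[str]) -> float:
    """Share of required fields that are non-empty on this record."""
    filled = sum(1 for field in required if record.get(field))
    return filled / len(required)

def validity_phone(record: dict) -> bool:
    """Phone is valid only if it matches the canonical format."""
    return bool(E164_PHONE.match(record.get("phone", "")))

def uniqueness_rate(records: list[dict], key: str = "email") -> float:
    """Share of records whose key value appears exactly once."""
    counts = Counter(r.get(key) for r in records if r.get(key))
    unique = sum(1 for r in records if counts.get(r.get(key)) == 1)
    return unique / len(records) if records else 1.0
```

Run against a database, the same handful of checks turns "the import felt messy" into numbers that can be tracked over time.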

People: The bottleneck nobody budgets for

The staffing math that scale-ups consistently get wrong: RevOps headcount grows linearly while operational demands grow exponentially.

A 100-person company with one RevOps hire adds 50 employees in a year. Each new head generates system onboarding, reporting requests, process questions, and tooling needs. The original RevOps person now supports 150 people. Maybe leadership approved a second hire six months in. Two people for 150. Next year it's 225.

Those two people become the only ones who understand how the CRM is wired. They're the only ones who can build workflows, debug integrations, pull accurate reports, and explain why the data says what it says. They become the bottleneck for every system change. Not because they want to be, but because no one else can do it.

Burnout isn't a risk in this model. It's the operating assumption. And when one of those two people leaves, taking half the institutional knowledge with them, the remaining person inherits a system they only partially understand, maintained by someone who documented maybe 30% of what they built.

Process: Designed for last quarter, already outdated

Processes in a scale-up have a half-life measured in months. The lead routing logic that worked when you had 10 reps and two territories breaks when you have 40 reps across four regions. The deal stage definitions that made sense before the product expansion now force reps to shoehorn new deal types into old categories.

But what actually kills processes is exceptions. Every week, someone needs a one-time workaround. A deal that doesn't fit the standard flow. A campaign that needs a custom segment. An onboarding path for a customer type you didn't have six months ago.

Each exception is individually reasonable. Collectively, they're corrosive. "Just this once" happens 50 times and becomes the de facto process. The documented process exists in a Notion page nobody has updated since Q2. The real process exists in the heads of three people who've been handling the exceptions.

In a 2024 survey from BoostUp, 98% of RevOps professionals said process gaps are costing their teams revenue. Ninety-eight percent. Everyone knows the processes are broken. Almost nobody has the bandwidth to fix them.

Systems: The Jenga tower

The systems pillar is where the damage becomes visible. Eventually. Every "quick fix" workflow, every duct-tape integration, every automation built to patch a data problem caused by another automation — they all add a layer of complexity that nobody fully maps.

I've audited HubSpot portals with 200+ workflows where the team could confidently explain maybe 60 of them. The rest were built by former employees, or created during a crisis that's long since passed, or designed to fix a problem that was later solved a different way. But nobody deletes them because nobody knows what depends on them.

The CRM becomes a Jenga tower. Touching any piece might cause something else to collapse. So the team builds around the fragile parts instead of fixing them. Another layer of complexity, another sprint.

The instinct is to automate your way out. Buy a new tool. Build more workflows. But as one analysis of revenue workflow failures put it: automating an unclear process simply encodes confusion into software. Automation amplifies whatever system you already have. Clean systems get faster. Messy systems get chaotic faster. I've watched teams invest months building elaborate automation only to see reps revert to spreadsheets — because they trusted the spreadsheet more than they trusted the CRM.

They fail together

This is the part that frameworks miss. The four pillars don't degrade in sequence (data first, then processes, then people, then systems). They degrade simultaneously, triggered by the same events.

Take that 8,000-record conference import. It corrupts the data pillar (duplicates, missing fields, formatting inconsistencies). It bypasses the process pillar (nobody followed the import intake procedure because "we need these in the CRM before Monday's outreach"). It strains the people pillar (only the RevOps team can clean it up, and they're already underwater). And it forces the systems pillar to absorb a new cleanup workflow that adds to the Jenga tower.

One event. Four pillars hit. And it happens every week.

The compounding problem

Scale-ups are structurally hostile to operational foundations. This isn't a criticism. It's an observation about what growth does to systems.

A company growing 50% year-over-year adds roughly 50 new stakeholders per 100 existing employees. Each one arrives with CRM needs, reporting expectations, tool preferences, and "quick requests" that individually take 30 minutes and collectively consume entire sprints.

Meanwhile, revenue targets accelerate. Board expectations compound. New markets mean new data flows, new compliance requirements, new integration demands. Nobody pauses the growth plan to say "let's spend Q3 on data governance." Data governance doesn't have a revenue line in the board deck.

The research confirms the cost

For 31% of CRM admins, poor-quality data costs at least 20% of annual revenue. Only 16% say their revenue technology provides strong, data-driven insights that lead to revenue impact. The gap between what RevOps teams are asked to deliver and what their foundations can support widens at growth rate.

AI raises the stakes

Tools like HubSpot's Breeze promise to automate prospecting, scoring, and customer communication. But AI doesn't compensate for bad data — it amplifies it. A scoring model trained on incomplete records produces confidently wrong predictions. An email agent pulling from inconsistent company data sends messages that make your brand look careless.

This plays out most visibly in forecasting. Pipeline coverage looks like 3.5x on paper, but stale deals with outdated close dates inflate it; strip those out and real coverage is closer to 1.2x. The CRM becomes a collection of opinions rather than a system of record. Leadership stops trusting the numbers. RevOps loses strategic credibility. The spiral tightens.
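The gap between paper and real coverage can be made concrete with a quick sketch: count every open deal for paper coverage, then exclude deals that look stale before computing real coverage. The field names and the 90-day staleness window are illustrative assumptions, not a standard definition:

```python
from datetime import date, timedelta

STALE_AFTER = timedelta(days=90)  # assumed activity window

def coverage(deals: list[dict], quota: float, today: date) -> tuple[float, float]:
    """Return (paper_coverage, real_coverage) against the period quota.

    Paper coverage counts every open deal. Real coverage excludes deals
    with a close date in the past or no activity inside STALE_AFTER.
    """
    open_deals = [d for d in deals if d["stage"] not in ("closed_won", "closed_lost")]
    paper = sum(d["amount"] for d in open_deals)
    fresh = [
        d for d in open_deals
        if d["close_date"] >= today and today - d["last_modified"] <= STALE_AFTER
    ]
    real = sum(d["amount"] for d in fresh)
    return paper / quota, real / quota
```

With a $100k quota, $120k of fresh pipeline, and $230k of stale pipeline, this yields exactly the 3.5x-on-paper, 1.2x-real gap described above.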

The demand-to-capacity ratio doesn't stabilise. It worsens every quarter that RevOps headcount doesn't keep pace with company growth, which is most quarters.

Signs you're stuck in the firefighting trap

If the previous sections felt uncomfortably familiar, this diagnostic might confirm it. These are the symptoms I've seen, and lived through, in every scale-up where RevOps is structurally under-resourced:

  • More than 60% of your RevOps work is reactive, responding to inbound requests rather than executing a roadmap you designed
  • You find out about data problems when a report breaks or a workflow misfires, not before
  • Stakeholders create properties, run imports, or add integrations without going through your intake process, because the intake process is too slow, or doesn't exist
  • Your "data cleanup" initiative has been restarted three or more times, each time stalled by higher-priority urgent work
  • New hires wait weeks to get the CRM configured for their role because the RevOps queue is backed up
  • You have workflows that exist only to fix data problems created by other workflows
  • The team's Jira or Asana backlog grows faster than it shrinks every quarter
  • At least one critical process exists only in someone's head, and everyone knows it
  • You've been "about to document that" for six months or more
  • Leadership asks for pipeline accuracy but won't invest in the data quality required to deliver it

If five or more of these apply, you're not in a temporary rough patch. You're in a structural trap where the work required to escape consumes the same bandwidth that the trap demands.

This is roughly Level 1-2 on a RevOps maturity model, where teams recognise the problems but lack the capacity or mandate to address them structurally.

Why heroics don't scale

I've been the person who held it together. Late nights reconciling data before a board meeting. Weekends rebuilding a workflow that broke after someone changed a dependent property. Carrying the architecture of the entire CRM in my head because writing it down would take time I didn't have.

It works. For a while. The heroic RevOps person, or the heroic duo, keeps the lights on through sheer will and deep institutional knowledge. They patch the cracks fast enough that most people never see them. They become indispensable.

And that's the problem.

You cannot manually monitor a CRM that 50 or more people modify daily. You cannot catch every bad import, every rogue property creation, every process workaround, every integration drift. You find out about data decay when a quarterly report produces numbers that don't match finance's spreadsheet. You discover process drift when a major deal falls through the cracks because the handoff didn't happen. You learn about system fragility when a workflow breaks in production and nobody knows what downstream processes depend on it.

Reactive RevOps keeps the lights on. But it never builds the foundation. You're always one step behind, cleaning up yesterday's damage instead of preventing tomorrow's. And when the hero leaves (through burnout, better offers, or just plain exhaustion) the organisation discovers how much was held together by a person rather than a system.

What I built instead

At some point, the frustration crystallised into a design question: if the foundation can't be maintained manually at scale, what would automated monitoring look like?

I wasn't trying to build something that auto-fixes CRM problems. That's a different beast. I wanted visibility: a way to know, at any given moment, whether the CRM foundation is intact or decaying. Without someone manually checking. Without waiting for something to break.

That question became HubHorizon.

It connects to a HubSpot portal and scores data quality across the six formal dimensions — accuracy, completeness, consistency, validity, uniqueness, timeliness. It evaluates property hygiene: how many properties are unused, how consistent the naming is, whether the schema makes sense or has grown organically into a mess. It measures association health — are Contacts linked to Companies, Deals linked to the right pipeline stages? It assesses AI readiness — is your data structured well enough for Breeze and other AI tools to actually work?

These are the things that decay silently when RevOps teams spend their days fighting fires. Declining completeness goes unnoticed until a segmentation fails. Duplicate accumulation hides until pipeline reports double-count. Property sprawl is invisible until new hires can't find the fields they need in a sea of 300 custom properties.

HubHorizon calculates a composite health score across these dimensions, a single number that tells you whether your foundation is holding or eroding. Not a replacement for the RevOps team. A dashboard for the infrastructure they don't have time to manually audit.
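As a sketch of what a composite score like this could look like — a weighted average over the six dimension scores; the weights here are illustrative and this is not HubHorizon's actual formula:

```python
# Illustrative weights; any real scoring system would tune these.
DEFAULT_WEIGHTS = {
    "accuracy": 0.20, "completeness": 0.20, "consistency": 0.15,
    "validity": 0.15, "uniqueness": 0.15, "timeliness": 0.15,
}

def composite_score(dimension_scores: dict[str, float],
                    weights: dict[str, float] = DEFAULT_WEIGHTS) -> float:
    """Weighted average of 0-100 dimension scores, on the same 0-100 scale."""
    total_weight = sum(weights[d] for d in dimension_scores)
    weighted = sum(dimension_scores[d] * weights[d] for d in dimension_scores)
    return weighted / total_weight
```

The value of a single number is not precision — it is trend: the same formula run weekly makes a slowly eroding foundation visible as a declining line.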

I built it because monitoring shouldn't require heroics.

Build the dashboard before you need it

If you're in the firefighting trap, three things will help more than anything:

Entropy is the default. In a growing company, data quality degrades, processes drift, and systems accumulate complexity unless something actively counteracts it. This isn't a failure of your team. It's physics. You need monitoring systems the same way a building needs smoke detectors. Not because you expect a fire today, but because you won't always smell the smoke in time.

Monitoring and fixing are different problems. Monitoring can be automated: track fill rates, duplicate rates, property sprawl, data freshness, and association completeness continuously. Fixing requires human judgement, stakeholder conversations, and prioritisation. Automating the first gives you the clarity to prioritise the second. A data quality audit is how you start, but a one-time audit becomes a recurring need when the CRM is under constant change.
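The automatable half can be sketched as a set of threshold checks run on a schedule. The metric names, thresholds, and alert wiring below are illustrative assumptions — connect the output to whatever scheduler and channel (cron plus a Slack webhook, for instance) your team already uses:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class HealthCheck:
    name: str
    measure: Callable[[], float]   # returns the current metric value
    threshold: float
    higher_is_better: bool = True  # e.g. fill rate up, duplicate rate down

def run_checks(checks: list[HealthCheck]) -> list[str]:
    """Return an alert line for every check outside its threshold."""
    alerts = []
    for check in checks:
        value = check.measure()
        breached = (value < check.threshold if check.higher_is_better
                    else value > check.threshold)
        if breached:
            alerts.append(f"{check.name}: {value:.1f} (threshold {check.threshold})")
    return alerts
```

The point of the design is the split the paragraph above describes: `run_checks` needs no judgement and can run nightly, while the alert list it emits is exactly the prioritised input a human needs for the fixing conversation.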

Stakeholders need to see what their requests cost. The "quick requests" that drive RevOps entropy come from people who don't see the impact. When a VP can see that their team's unvalidated import dropped the data completeness score by 8 points and introduced 400 duplicates, the conversation changes. Visibility creates accountability.

RevOps teams don't fail because they're not good enough. They fail because the structural demands of a scale-up exceed what any small team can manually maintain. The answer isn't working harder. It's building the monitoring infrastructure that makes the invisible visible, before something breaks.

I built HubHorizon because I was tired of finding out about broken foundations after the damage was done.

Next: If this article described your reality, the follow-up covers how to break out of the firefighting trap — intake systems, prioritisation frameworks, and the sequencing that actually works.

Frequently Asked Questions

What are the biggest challenges for RevOps teams in scale-ups?

Scale-up RevOps teams face entropy across all four operational pillars simultaneously: data degrades at the speed of growth through conference imports, rogue property creation, and integration drift; processes designed for last quarter's headcount break as exceptions accumulate; systems become fragile Jenga towers of interdependent workflows nobody fully understands; and the people pillar is perpetually understaffed because RevOps headcount grows linearly while operational demands grow exponentially. The structural trap is that firefighting the immediate crises consumes the same bandwidth needed to fix the foundations causing those crises.

How many RevOps people does a scale-up need?

There is no universal ratio, but a common pattern is that scale-ups consistently under-hire for RevOps relative to their operational complexity. A team of two supporting 150 people — where both are the only people who understand how the CRM is wired — is a fragile single point of failure, not a sustainable structure. The calculation that matters is not headcount-to-employee ratio but demand-to-capacity ratio: the volume of incoming requests, the scale of data quality work, and the complexity of the system being maintained. When RevOps headcount does not grow in proportion to company headcount, the demand-to-capacity ratio worsens every quarter and the firefighting trap deepens.

Why do RevOps teams get stuck in reactive mode?

The reactive trap is structural, not motivational. When anyone in the company can surface a request through any channel — Slack, hallway conversations, a VP's direct message — everything becomes urgent and priority is determined by whoever asks loudest or most recently. There is no intake gate, no prioritisation mechanism, and no visibility into the real cost of ad-hoc requests. RevOps teams operating this way teach their organisations that capacity is infinite, which guarantees the queue grows faster than it shrinks. The firefighting trap escape guide covers the structural changes that break the cycle — starting with centralised intake as the prerequisite for everything else.

How does poor data quality affect RevOps teams?

Poor data quality forces RevOps into permanent damage control mode. A 2025 Openprise study found that 70% of RevOps teams cannot make strategic decisions because of poor data quality — they are operating on data they know is unreliable. The operational costs are concrete: cleanup from a single unvalidated import can consume weeks of capacity, duplicate records inflate database costs and skew segmentation, and inconsistent field values break every automation that depends on them. The strategic cost is worse: RevOps loses credibility when the numbers it produces cannot be trusted, making it harder to earn the investment needed to fix the underlying foundations.

Start your free portal health check at hubhorizon.io — connect your HubSpot portal in 30 seconds and see your data quality, property hygiene, and foundation health scored across all dimensions. View pricing plans for continuous monitoring, trend tracking, and exportable audit reports.


Peter Sterkenburg is the founder of HubHorizon, a HubSpot portal health and optimization platform. He's spent years in scale-up RevOps — building the systems, fighting the fires, and eventually building the tool he wished he'd had.