
5 Signs Your Digital Optimisation Is Slowing Down

Pam McGee · Apr 24, 2026 · 3 min read

You might not notice immediately when a digital optimisation programme starts to lose momentum. 

The team is busy. Content goes out on schedule. Tests are running. Reports land every Monday. From the outside, everything looks fine. But results aren't moving the way they should. Good ideas sit in backlogs. Decisions take longer than they ought to. And there's a nagging sense that the programme isn't delivering what it was supposed to.

Here are the five signals that tell you something has gone wrong.

Signal 1: Test frequency has dropped and nobody decided to slow down 

Mature optimisation programmes run experiments continuously. Smaller behavioural tests run week to week. Larger feature experiments run monthly. If your team ran ten tests last quarter and three this quarter, and there was no deliberate decision to change pace, something has gone wrong.

The most common causes are a development team pulled in too many directions, a QA process weighed down by layers of sign-off that didn't exist eighteen months ago, or a backlog so large and politically contested that prioritisation has ground to a halt.

Ask yourself: what does it actually take to get a test from idea to live? If the honest answer involves more than two or three steps and more than one team, you've found your friction. Map the process end to end. Remove every step that isn't adding genuine value or managing real risk. You'll almost certainly find things in there that exist because of habit, not necessity. 

Signal 2: Insights are piling up but nothing changes as a result  

Your analyst writes the report. The findings go into the deck. In Tuesday's meeting, someone points out that a key journey is losing 60% of users at the same step it was losing them six months ago. Everyone agrees it needs fixing. The meeting ends. Nothing gets assigned. Nothing gets scheduled. Four weeks later, it's in next Tuesday's deck again.

This is the insight gap. It's where most optimisation value disappears, and it has nothing to do with the quality of your data. It's a decision-making problem.

For every significant finding, three things need a named owner:

1. Who decides what happens next

2. When that decision gets made

3. How it gets prioritised and scheduled into the team's next sprint

Without clear answers to all three, insights pile up. The team looks productive. The programme stands still.
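
One lightweight way to make this stick is to treat every finding as a record that can't be logged until those three answers exist. Here's a minimal sketch, with hypothetical names and made-up data:

```python
from dataclasses import dataclass
from datetime import date

# Hypothetical sketch: a finding isn't logged until all three fields are filled.
@dataclass
class Insight:
    finding: str          # what the data showed
    decision_owner: str   # who decides what happens next
    decision_due: date    # when that decision gets made
    sprint_target: str    # where the work lands once prioritised

insight = Insight(
    finding="Key journey loses 60% of users at step 3",
    decision_owner="Head of Digital",
    decision_due=date(2026, 5, 1),
    sprint_target="Sprint 42",
)
```

However you record it, as a ticket field, a spreadsheet column, or a line in the meeting notes, the point is the same: a finding without an owner and a date isn't an action. It's a reminder.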

Signal 3: Decisions that should take a day are taking three weeks 

A test result comes in. The numbers are conclusive. The data points clearly in one direction. Everyone in the room agrees on what it means. And then comes the response you've heard before: "We should probably run this past the wider team before we act on it."

Three weeks later, you're still waiting.

This isn't a people problem. It's a governance problem. The approval structures that felt sensible when the programme launched have calcified into something that moves at the pace of the slowest calendar. Optimisation doesn't work on a monthly committee cycle. Evidence goes stale. Competitors move. Opportunities close. 

The fix is to define decision rights before you need them. Most optimisation decisions don't require sign-off from anyone senior. They need a pre-agreed rule: if a result clears an agreed confidence level and a minimum level of commercial impact, the team acts. Set those thresholds once and remove the conversation entirely.
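
To make that concrete, here's a minimal sketch of what a written-down rule can look like. The function name and threshold values are illustrative, not a recommendation; the point is that they're agreed once, in advance:

```python
# Hypothetical decision rule: thresholds are agreed once, then applied automatically.
def should_act(confidence: float, relative_lift: float) -> bool:
    """True if a test result clears the pre-agreed bar to act without sign-off."""
    MIN_CONFIDENCE = 0.95  # e.g. 95% statistical confidence
    MIN_LIFT = 0.02        # e.g. at least a 2% relative improvement
    return confidence >= MIN_CONFIDENCE and relative_lift >= MIN_LIFT

# A test at 97% confidence with a 3.5% lift clears the bar: act, don't convene.
print(should_act(confidence=0.97, relative_lift=0.035))  # True
```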

Signal 4: The backlog grows faster than the team can clear it 

A backlog is healthy. A backlog that doubles every quarter is a warning sign.

When ideas, issues, and opportunities accumulate faster than the team resolves them, one of two things is usually true. Either the team can spot problems clearly but lacks the capacity or authority to fix them. Or everything has been labelled high priority, which means nothing actually is.

A well-run optimisation programme works from a backlog that is short, scored, and reviewed regularly. Items are ranked by commercial impact and effort. The team works through them in order. If yours is a long list of things nobody has been willing to formally deprioritise, that conversation is overdue. 
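
Scoring doesn't need a tool. As an illustration (the items and numbers here are made up), ranking by estimated impact relative to effort is enough to force an order:

```python
# Hypothetical backlog with illustrative impact/effort scores (1-10 scales).
backlog = [
    {"item": "Fix checkout error messaging", "impact": 8, "effort": 2},
    {"item": "Redesign homepage hero",       "impact": 5, "effort": 8},
    {"item": "Simplify signup form",         "impact": 7, "effort": 3},
]

# Rank by impact per unit of effort, highest first, and work from the top.
ranked = sorted(backlog, key=lambda x: x["impact"] / x["effort"], reverse=True)
for entry in ranked:
    print(f'{entry["impact"] / entry["effort"]:.1f}  {entry["item"]}')
```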

Signal 5: The programme measures how busy it is rather than what it achieves 

Tests run. Tickets closed. Pages published. These numbers fill a status update and they're easy to present to a board that wants reassurance something is happening.

They tell you almost nothing about whether the programme is working.

When teams start measuring activity instead of impact, it usually follows a difficult period. Results have disappointed. Budget scrutiny has increased. Output numbers buy some breathing room. But they don't improve conversion rates, grow revenue, or tell you whether the programme is earning its budget.

The numbers that matter are commercial ones. Conversion rate on key journeys. Revenue per visitor. Cost per acquisition. If those aren't at the centre of every programme review, that's where to start.
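
If it helps to see them side by side, here's a minimal sketch with made-up figures. Each one is a simple ratio, which is exactly why there's no excuse for leaving them out of a review:

```python
# Illustrative figures only.
visitors = 120_000     # sessions on the key journey
conversions = 3_600    # completed goal actions
revenue = 540_000.00   # revenue attributed to the journey
spend = 90_000.00      # acquisition spend for the period

conversion_rate = conversions / visitors      # share of visitors who convert
revenue_per_visitor = revenue / visitors      # value of each visit
cost_per_acquisition = spend / conversions    # spend to win each conversion

print(f"Conversion rate:      {conversion_rate:.1%}")        # 3.0%
print(f"Revenue per visitor:  £{revenue_per_visitor:.2f}")   # £4.50
print(f"Cost per acquisition: £{cost_per_acquisition:.2f}")  # £25.00
```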


What all five signals have in common

Running more tests won't fix them. Neither will a new dashboard or a bigger team.

Every one of these signals points to the same underlying problem: the foundations aren't right. The strategy isn't tied clearly to commercial outcomes. Data isn't driving decisions. The operating model has the wrong governance and cadence. The programme is set up to deliver in bursts rather than improve continuously.

Fix the foundations and the signals stop. Leave them and they get worse.

Want to know where your programme stands?

We built a free five-minute diagnostic for digital and marketing leaders who want an honest picture of their optimisation programme. It covers the four foundations of commercial performance and gives you specific recommendations based on your answers.

Results are instant. Get your personalised score and next steps below.

Take the digital performance diagnostic here.
