Digital Strategy Optimisation

The Only 5 Metrics That Matter Before You Run Another Experiment

Gorav Bassi | May 15, 2026 | 4 min read

More data won't necessarily improve your experimentation programme. In many cases, it slows it down.

When every dashboard is full and every meeting starts with a metrics review, it is easy to feel like the programme is well-instrumented. But there is a difference between having data and having the right data. And most teams, if they are honest, are spending more time producing reports than acting on what those reports are telling them.

Before you run your next experiment, there are five metrics worth knowing with clarity and confidence. Not because the others don't matter, but because these five tell you the most important things about where your programme stands and where it needs to go.

1. Conversion rate on your primary commercial journey 

This is the number your programme exists to move. Not average session duration. Not pages per visit. The conversion rate on the specific journey that matters most to the business, whether that is a quote completion, an application submission, a donation, or a product purchase.

You need to know this number precisely, by segment where possible, and you need to know how it has trended over the last six months. If you cannot answer both of those questions quickly and confidently, your measurement foundation has a gap that no amount of experimentation will fix.

Tools like GA4 will give you this if your goals and conversion events are set up correctly. The qualifier is important. Many teams discover at this point that their conversion tracking is either incomplete or inconsistent. If that is the case, fixing it should come before running another test.
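If the tracking is in place, the calculation itself is simple. Here is a minimal sketch in Python, using an invented per-session export rather than any specific GA4 schema:

```python
import pandas as pd

# Hypothetical per-session export for the primary journey; the column
# names are invented, not a real GA4 export format.
sessions = pd.DataFrame({
    "month":     ["2026-01", "2026-01", "2026-02", "2026-02"],
    "segment":   ["new", "returning", "new", "returning"],
    "converted": [0, 1, 0, 1],
})

# Conversion rate by segment: the mean of a 0/1 flag is conversions / sessions.
by_segment = sessions.groupby("segment")["converted"].mean()

# Trend: conversion rate per month, which you would chart over six months.
trend = sessions.groupby("month")["converted"].mean()

print(by_segment)
print(trend)
```

If producing something like this takes more than a few minutes, that is the measurement gap to close first.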

2. Revenue per visitor 

Conversion rate tells you how many people complete a journey. Revenue per visitor tells you what each of those visitors is actually worth to the business.

This metric matters because optimisation decisions should be weighted by commercial value, not just by volume. A 0.3% conversion lift on a high-value journey is worth significantly more than a 1% lift on a low-value one. Without revenue per visitor as a reference point, it is easy to spend months optimising for metrics that feel meaningful but change nothing that matters.
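A quick worked example, using invented figures and reading the lifts as percentage-point changes, makes the weighting concrete:

```python
# Invented figures for illustration only.
visitors = 100_000

# High-value journey: £200 revenue per conversion.
# A 0.3-point conversion lift adds 300 conversions.
high_value_gain = visitors * 0.003 * 200   # £60,000

# Low-value journey: £15 revenue per conversion.
# A 1-point lift adds 1,000 conversions.
low_value_gain = visitors * 0.01 * 15      # £15,000

print(high_value_gain, low_value_gain)
```

The smaller lift is worth four times as much, which is exactly the comparison revenue per visitor lets you make.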

In e-commerce and transactional contexts this metric is relatively straightforward to calculate. In lead generation, membership, or higher education contexts it requires more work: connecting digital conversions to downstream revenue outcomes. That work is worth doing. It is the difference between a programme that reports on activity and one that reports on impact.

3. Time to launch a test

This is the metric most teams don't track, and it is one of the most revealing indicators of digital performance.

Time to launch measures the average number of days between a test being added to the backlog and it going live. A well-functioning programme, with good tooling and clear governance, should be able to move from hypothesis to live experiment in one to two weeks for most test types. Larger feature experiments will take longer, but the baseline for straightforward A/B tests and content changes should be measurable in days, not months.
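Tracking it needs nothing more than two dates per test. A minimal sketch, with hypothetical backlog records:

```python
from datetime import date

# Hypothetical backlog records: (added to backlog, went live).
tests = [
    (date(2026, 3, 2), date(2026, 3, 11)),
    (date(2026, 3, 9), date(2026, 3, 30)),
    (date(2026, 4, 1), date(2026, 4, 10)),
]

days_to_launch = [(live - added).days for added, live in tests]
average = sum(days_to_launch) / len(days_to_launch)

print(f"Average time to launch: {average:.1f} days")
```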

If your time to launch is consistently above three weeks for standard tests, something structural is creating friction. It might be a development dependency, a lengthy QA process, or a sign-off chain that has too many steps. Optimizely's experimentation tools, for example, are designed to allow marketing and product teams to build and launch tests with minimal developer involvement. If you are still routing every test through a development sprint, your tooling is not being used to its potential.

Tracking time to launch turns a vague sense that things are moving slowly into a specific, actionable problem.

4. Experiment win rate 

Not every test should win. In fact, a win rate of 100% is usually a sign that you're only testing things you're already confident about, which means you're not learning much.

A healthy win rate for a mature experimentation programme sits somewhere between 20% and 40%. Below that range, the programme may be testing too randomly, without enough insight driving its hypotheses. Above it, the team may be playing it safe rather than pushing into the genuinely uncertain territory where the bigger gains tend to live.

Tracking win rate over time also tells you whether the quality of your hypotheses is improving. If you are running more tests but your win rate is declining, something about how you are generating and prioritising ideas needs to change.
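A simple way to watch that trend, sketched with a hypothetical test log where 1 is a win and 0 is a flat or losing result:

```python
# Hypothetical test log in launch order: 1 = win, 0 = flat or loss.
results = [0, 1, 0, 0, 1, 0, 0, 0, 1, 0, 1, 0, 0, 1, 0, 0]

win_rate = sum(results) / len(results)
print(f"Overall win rate: {win_rate:.0%}")  # 31%, inside the 20-40% band

# A rolling window over the most recent tests shows whether
# hypothesis quality is trending up or down.
window = 8
rolling = [sum(results[i:i + window]) / window
           for i in range(len(results) - window + 1)]
print(rolling)
```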

5. Backlog clearance rate 

Backlog clearance rate measures how many items enter your backlog each month against how many are launched.

If more is going in than is coming out, your backlog is growing. That is worth knowing because a growing backlog is not just an operational inconvenience. It is a signal that your programme's capacity, governance, or prioritisation model is misaligned with its ambitions.
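A rough sketch of the calculation, using invented monthly counts:

```python
# Invented monthly counts: ideas added to the backlog vs tests launched.
inflow = {"Jan": 14, "Feb": 11, "Mar": 16}
outflow = {"Jan": 6, "Feb": 7, "Mar": 5}

for month in inflow:
    clearance = outflow[month] / inflow[month]
    growth = inflow[month] - outflow[month]
    print(f"{month}: clearance {clearance:.0%}, backlog grew by {growth}")
```

A clearance rate consistently below 100% means the backlog only ever grows, and the gap tells you how far capacity and ambition have drifted apart.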

Tools like Microsoft Clarity or Hotjar will surface behavioural insights quickly and reliably. The risk is that every session recording and heatmap generates more potential backlog items than the team can realistically action. Without a clear prioritisation framework, the backlog becomes a graveyard of good ideas rather than a pipeline of commercial improvements.

Backlog clearance rate is the metric that keeps your programme honest about whether it is set up to act on what it finds. 

Five metrics, one purpose 

None of these metrics exist for the sake of reporting. They exist to answer one question: is your experimentation set up to deliver consistent commercial improvement?

Conversion rate and revenue per visitor tell you what you are working towards. Time to launch and backlog clearance rate tell you whether your operating model can get you there. Experiment win rate tells you whether your thinking is sharp enough to make progress once you do.

If you have clear, trusted answers to all five, your programme is well-positioned to keep improving. If two or three of them are uncertain or missing, that's where to focus before running more tests.

Not sure where your programme stands?

The Digital Optimisation Health Check is a free five-minute diagnostic that covers the four foundations of commercial digital performance: strategy, data, operating model, and performance.

If you are not sure whether your current operating model is working as hard as it should, this is a good place to find out.

Take the diagnostic here.

 
