Operational Benchmarking: Are You Comparing Apples with Apples?

Quick show of hands: who loves benchmarking? Nobody? Not one?


In the world of banking, benchmarks usually cause a collective groan from the operations team. That’s probably because most benchmarking exercises follow a path that looks something like this:

Why is benchmarking such a pain?

In my experience, the challenge with benchmarking is that meaningful benchmark data is notoriously hard to obtain. This is for a couple of reasons:

  • Data access is limited, as organizations are reluctant to share data with each other for benchmarking purposes.
  • Business models differ. For instance, if one bank focuses on broker-initiated buy-to-let mortgages while another drives most of its revenue from online retail customers, comparing unit times for ‘mortgage processing’ between the two is meaningless.

It doesn’t end there, though. Even if you get down to a level of granularity with your data where you can make meaningful comparisons, you then have to establish what that data is telling you.

For instance, let’s say you find that it takes you 32 seconds to process an online international SWIFT payment when the best-in-class time is 12 seconds. To be blunt: so what? That fact on its own tells you nothing about where your delays are coming from. It might be a technological issue, such as an inefficient payments system; it might be that your sanctions screening tool is set to a very high threshold; or you may simply have more large corporate payments to process than the best-in-class organization does.

Ultimately, the problem with most benchmarking exercises is that they are comparing outcomes, when they should be comparing processes.

Introducing OpsIndex from ActiveOps.

Around a year ago, ActiveOps set out to solve this problem. And after carefully building, testing and improving, we released OpsIndex into the world.

Simply put, OpsIndex allows organizations to benchmark themselves in meaningful ways against nearly 15 years’ worth of anonymized data from organizations using ActiveOps. We built our benchmarks to focus on the dimensions of operational performance that we believe are truly relevant: the processes and capabilities of your teams that drive the outcomes other benchmarks measure.

The dimensions we measure are:

  • Agility: your ability to flex resources to meet peaks and troughs in demand.
  • Effectiveness: how consistently you deliver to your customers within your target turnaround times.
  • Focus: how much of your paid time at work you can spend delivering on your core responsibilities.
  • Efficiency: we calculate this as the gap between your team’s average productivity and the optimum level of productivity that most people can sustain over a long period. The smaller the gap, the higher the efficiency score (sketched in the example below).
  • Control: how consistently you execute successfully against a plan, measured using our Active Operations Management (AOM) rigour score.

These five dimensions are weighted by a ‘difficulty factor’ and then combined to give your organization a balanced scorecard showing an overall performance score. The scorecard lets you benchmark against our entire client base, organizations in your region, and organizations in your industry, showing your position relative to the top and bottom quartiles and the median. Better still, we can show you how your overall score and your score in each dimension change over time.
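To make the arithmetic concrete, here is a minimal sketch of how a weighted scorecard of this kind can be put together. It is purely illustrative: the dimension names follow the list above, but the weights, the efficiency formula, and the peer-group figures are invented for the example and are not the actual OpsIndex calculation.

```python
# Illustrative sketch only: dimension names follow the article, but the weights,
# the efficiency formula, and the peer-group statistics are invented for this
# example -- they are not the real OpsIndex methodology.

# Efficiency: the gap between average productivity and a sustainable optimum.
# A smaller gap means a higher score (mapped here onto a 0-100 scale).
average_productivity = 78.0   # hypothetical team average, %
optimum_productivity = 85.0   # hypothetical sustainable optimum, %
efficiency = max(
    0.0,
    100.0 - abs(optimum_productivity - average_productivity) / optimum_productivity * 100.0,
)

# Scores for the five dimensions (0-100); efficiency comes from the gap above.
scores = {
    "agility": 62.0,
    "effectiveness": 71.0,
    "focus": 58.0,
    "efficiency": efficiency,
    "control": 66.0,
}

# Hypothetical 'difficulty factor' weights: harder-to-move dimensions count for more.
difficulty_factor = {
    "agility": 1.2,
    "effectiveness": 1.0,
    "focus": 0.9,
    "efficiency": 1.1,
    "control": 1.3,
}

# Weighted average of the five dimensions gives the overall performance score.
overall = sum(scores[d] * difficulty_factor[d] for d in scores) / sum(difficulty_factor.values())

# Position the overall score against a (made-up) peer group.
peer_group = {"bottom quartile": 48.0, "median": 61.0, "top quartile": 74.0}

print(f"Overall score: {overall:.1f}")
for label, value in peer_group.items():
    relation = "above" if overall > value else "below"
    print(f"  {relation} the {label} ({value:.1f})")
```

In the real product, the weights, scales and peer statistics come from the anonymized ActiveOps dataset described above; the sketch only shows the general shape of the calculation.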

Insight you can immediately act on.

Since we launched OpsIndex 6 months ago, it’s made a big difference to banks, BPOs and insurance firms for a few reasons.

Because we focus on analyzing processes rather than outcomes, our benchmarks give you immediately actionable intelligence. When customers use OpsIndex, their first priority is usually to dive straight into the detail and see how they could improve on their lowest scores or on areas of declining performance. The key question isn’t “what’s my score?” but “how do I get better?” With OpsIndex, we’ve made finding that out easy.

As a result, we’re hoping that benchmarking in the future follows a process flow that looks more like this:

If you’d like to learn more about OpsIndex and what it can do for your organization, click here.



Stuart has over 28 years of experience leading change in service operations. His career has spanned project and programme management, strategy, consulting, and leading operations divisions and functions. After 17 years with HSBC, working in the UK and India, he moved to Abu Dhabi to head operations for ADCB.

Stuart joined ActiveOps in 2016 and leads its Customer Success function.

Stuart Pugh, Chief Customer Officer, ActiveOps