Sales: “We have increased our Daily Active Users from 142k to 186k. Well done, everyone!”
Marketing: “Actually, my dashboard shows that DAU increased to 197k.”
Product: “Wait, what? I just ran a quick query, and it seems the correct number is 154k.”
Data Analyst: “Ah shit, here we go again.”
Working with inconsistent metrics is like playing chess with a pigeon: it’s meaningless. But inconsistency isn’t just meaningless; it also has severe consequences for your business. It leads to worthless comparisons, erroneous results, and a dangerous data culture in which no one can trust the data.
We can’t compare things that don’t mean the same thing, so we can’t compare metrics that produce inconsistent results. When we compare apples to oranges, we draw conclusions from incorrect data, and no one can act on them because their effect is unknown.
Fun fact: James E. Barone actually did compare apples and oranges successfully in his study “Comparing apples and oranges: A randomised prospective study”.
If an organization lets its metrics yield different results, everyone will slowly but surely lose confidence in the data. Inconsistent metrics thus push the organization into a danger zone where all of its data becomes irrelevant.
Inconsistent metrics — What, Why, How?
The reason for inconsistent metrics is simple: we tend to define metrics differently across the organization, depending on who creates them and where they are used. For example, even a seemingly simple metric like Daily Active Users (DAU) can be defined in several ways:
Maybe the user is active when they log in to the application?
Or perhaps, they are active if they perform a specific action?
What about if active means that the user spends a certain amount of time in the application?
Any of these definitions can be used to calculate DAU, because each team may mean something different by “active”. The problem is that each definition gives a different result when used in reports, dashboards, and algorithms. And if we fail to agree on one definition, we enter the danger zone and have to deal with the consequences of irrelevant information.
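To make this concrete, here is a minimal sketch in Python, using a small, made-up event log (all user IDs, event types, and thresholds are hypothetical), that computes DAU for a single day under each of the three definitions above and gets three different numbers:

```python
# Hypothetical event log for one day: (user_id, event_type, seconds_in_app)
events = [
    ("u1", "login", 30),
    ("u2", "login", 600),
    ("u2", "purchase", 0),
    ("u3", "purchase", 1200),  # deep-linked in, never "logged in"
    ("u4", "login", 400),
    ("u5", "login", 20),
]

# Definition 1: active = logged in at least once
dau_login = {user for user, event, _ in events if event == "login"}

# Definition 2: active = performed a specific action (here: a purchase)
dau_action = {user for user, event, _ in events if event == "purchase"}

# Definition 3: active = spent at least 5 minutes (300 seconds) in the app
time_per_user = {}
for user, _, seconds in events:
    time_per_user[user] = time_per_user.get(user, 0) + seconds
dau_time = {user for user, total in time_per_user.items() if total >= 300}

# Same day, same data, three different "DAU" figures
print(len(dau_login), len(dau_action), len(dau_time))  # 4 2 3
```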
When every team defines metrics in its own way.

Even worse, it’s not just teams that define metrics differently. We usually don’t have a single tool for analyzing and processing data, but a variety of applications, platforms, and AI/ML notebooks, each with its own models, languages, and definitions for calculating metrics. So even if we share a common understanding of what DAU means, we still have to define it a little differently for each tool, because tool X works differently than tools Y and Z.
When every tool uses its own models, languages, and definitions.

Great. Let’s say we’ve come to a shared understanding of what DAU means. We’ve also defined it the same way, just a little differently, in each tool, and they all yield the same results. So did we avoid the danger zone, and can we now live happily ever after? Well, no.
What happens if (and when) the definition or some part of the metric logic changes? How do we make sure every team and tool is aware of the change and its implications for the company? With traditional metric management, maintaining metric consistency is difficult and time-consuming.
For example, suppose we have a net income metric with the following formula:

Net Income = Gross Profit - Operating Expenses - Other Business Expenses - Taxes - Interest on Debt + Other Income
If the tax rate changes, say from 19% to 21%, it must be updated separately in every tool and query that uses net income. Did we forget to update some of our tools? Did we miss a few queries? Well, we just stepped into the danger zone again.
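As a rough sketch of that maintenance burden (the tool names, figures, and tax handling below are purely illustrative), imagine the same formula copy-pasted into two tools, each with its own hard-coded tax rate:

```python
# Hypothetical: the same net income formula duplicated in two "tools".
# After the tax rate changes from 19% to 21%, one copy gets updated
# and the other is forgotten, so the two tools now disagree.

TAX_RATE_DASHBOARD = 0.21  # updated after the rate change
TAX_RATE_NOTEBOOK = 0.19   # forgotten copy: still the old rate

def net_income_dashboard(gross_profit, operating_expenses, other_expenses,
                         taxable_income, interest_on_debt, other_income):
    taxes = taxable_income * TAX_RATE_DASHBOARD
    return (gross_profit - operating_expenses - other_expenses
            - taxes - interest_on_debt + other_income)

def net_income_notebook(gross_profit, operating_expenses, other_expenses,
                        taxable_income, interest_on_debt, other_income):
    taxes = taxable_income * TAX_RATE_NOTEBOOK
    return (gross_profit - operating_expenses - other_expenses
            - taxes - interest_on_debt + other_income)

# Identical inputs, two different "net income" figures,
# depending on which tool you happen to ask.
args = dict(gross_profit=1_000_000, operating_expenses=400_000,
            other_expenses=50_000, taxable_income=550_000,
            interest_on_debt=20_000, other_income=10_000)
print(net_income_dashboard(**args))
print(net_income_notebook(**args))
```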
Consistency arises from standardization
The most efficient way to ensure our metrics are consistent, and keep delivering consistent results, is to standardize them across the organization. With standardization, we make sure that apples are compared to apples: everyone calculates the same metrics in exactly the same way, updating a metric is seamless, and we don’t have to spend time arguing about whose report, model, or algorithm shows the correct numbers.
Standardization of metrics means that all metrics — whether DAU, net revenue, or something else — are defined in one place and can be used by all of our data tools and platforms. In this way, we can avoid rebuilding data models and duplicating calculation logic for each data tool separately. Instead, standardization creates a single source of metrics that provides all data consumers with a seamless way to consume the same, consistent results — anywhere — with the tools of their choice.
Do we need to create a new metric? No worries, all our data consumers can use it right away. Did the tax rate change again? No problem, let’s update it in one place, and all the tools will still yield the right results.
When every team and tool can rely on a single source of metrics.

Currently, there are several different concepts for such a standardization process: metrics layer, metrics store, or headless BI. But despite the different terminology, they all strive to achieve the same goal: metric consistency across the whole organization.
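A minimal sketch of the idea, assuming nothing more than a hypothetical in-house metrics module (not any particular product): the metric is defined exactly once, and every tool imports the same definition, so the next tax-rate change becomes a one-line edit:

```python
# metrics.py -- hypothetical single source of metric definitions
TAX_RATE = 0.21  # the tax rate lives in exactly one place

def net_income(gross_profit, operating_expenses, other_expenses,
               taxable_income, interest_on_debt, other_income):
    """Net income, defined once for the whole organization."""
    taxes = taxable_income * TAX_RATE
    return (gross_profit - operating_expenses - other_expenses
            - taxes - interest_on_debt + other_income)

# dashboard.py, notebook.py, and report.py would all do the same thing:
#   from metrics import net_income
# so every team and tool yields identical results, and the next
# tax-rate change is a single edit in metrics.py.
```

Real metrics layers typically express such definitions in a declarative model and translate them into each tool’s query language, but the principle is the same: one definition, many consumers.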
Some technology companies have already begun to standardize their metrics, and we’ll undoubtedly see more companies adopt the concept in the near future.
LinkedIn: Unified Metrics Platform (UMP)
Airbnb: Minerva
Uber: uMetric
Spotify: ABBA
Leaving the danger zone
Standardizing metrics is a strategic decision we must make to avoid the consequences of inconsistency and to ensure that our company, our operations, and our employees never face the horrors of incorrect metrics. Because no one wants to be there when the big boss walks in and asks, as calmly as Dumbledore, “Did you put the correct number in the quarterly report?!”
Consistent metrics help us trust our data, make more use of it, and tame our big boss’s inner Dumbledore. It’s time to stop playing chess with the pigeon and make sure all of our metrics, like Daily Active Users, yield the same result no matter how, where, or by whom they are consumed.