DATA-DRIVEN DECISIONS CAN GO WRONG
When the Harvard Business Review headline "Where Data-Driven Decision-Making Can Go Wrong" caught my eye, I had to dig deeper. As a regular HBR reader, I wanted to understand how data, despite its central role in modern business, can sometimes lead decision-makers astray.
The article walks through some common mistakes leaders make when interpreting data: conclusions that look sound on the surface but don’t hold up under closer scrutiny.
First off, there’s the classic error of confusing correlation with causation. Just because two things happen together doesn’t mean one caused the other. The eBay example in the article really stuck with me: the company assumed its Google ads were driving sales because sales were higher in the markets where more ads ran. But here’s the kicker: those markets were already full of people likely to shop on eBay anyway. The ads weren’t causing the sales bump; they were simply showing up where sales would have been high regardless, and when eBay actually paused the ads in an experiment, sales barely moved. This reminded me that uncovering true relationships in data requires more than a surface-level look. You need to dig deeper, question your assumptions, and ask whether other factors could be at play.
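To make the confounding concrete, here’s a minimal Python sketch of my own (not from the article). A hidden “affinity” variable drives both ad targeting and sales, and the true ad effect is zero by construction; every name and number below is made up purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Hidden confounder: how much a market already likes the platform.
affinity = rng.normal(0, 1, n)

# Ads are targeted at high-affinity markets...
ad_spend = 2.0 * affinity + rng.normal(0, 1, n)

# ...but sales depend ONLY on affinity; the true ad effect is zero.
sales = 3.0 * affinity + rng.normal(0, 1, n)

# Naive look: ads and sales appear strongly related.
print("corr(ad_spend, sales):", round(np.corrcoef(ad_spend, sales)[0, 1], 2))

# Controlling for the confounder: regress sales on ad_spend AND affinity.
X = np.column_stack([np.ones(n), ad_spend, affinity])
coef, *_ = np.linalg.lstsq(X, sales, rcond=None)
print("ad effect after controlling for affinity:", round(coef[1], 3))
```

The raw correlation comes out strongly positive even though ads do nothing here; once the confounder enters the regression, the ad coefficient collapses to roughly zero. That’s exactly the trap the article describes.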
Another mistake the article highlighted is underestimating the importance of sample size. A small sample will still produce numbers, but those numbers can be badly misleading. Larger samples are more likely to capture outliers and edge cases, which are often critical for making sound decisions. Imagine testing a new product with only 20 users versus 2,000: with the larger group, you can spot rare but important issues that would otherwise fly under the radar.
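Just to put numbers on the 20-versus-2,000 intuition, here’s a quick simulation of my own (again, not from the article); the 2% issue rate is an assumption chosen purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
p_rare_issue = 0.02   # assumed: 2% of users hit a rare but serious problem
trials = 10_000       # simulated studies per sample size

for n_users in (20, 2_000):
    # Count, per simulated study, how many of n_users hit the issue.
    hits = rng.binomial(n_users, p_rare_issue, size=trials)
    missed = np.mean(hits == 0)       # studies that never see the issue
    spread = np.std(hits / n_users)   # variability of the estimated rate
    print(f"n={n_users:>5}: issue missed in {missed:.0%} of studies, "
          f"estimated rate varies by +/-{spread:.3f}")
```

With 20 users, roughly two-thirds of the simulated studies never observe the issue at all; with 2,000 users it essentially always shows up, and the estimated rate is far more stable.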
Then there’s the trap of focusing too much on what’s easy to measure while overlooking harder-to-quantify impacts. Many companies zero in on metrics that are simple to track, like costs or short-term gains, but they miss the bigger picture, like long-term benefits or subtle shifts in customer behavior. The article makes a strong case for using more nuanced approaches to measurement, even if they’re more challenging.
The key takeaway for me? Avoiding these data pitfalls isn’t just about better analysis—it’s also about creating a culture where people feel comfortable questioning assumptions. When teams feel safe to challenge the status quo, they’re more likely to spot issues in the data or point out where the analysis might have gone wrong. Leaders should embrace diverse perspectives and encourage discussions that dig into the “why” behind the numbers.
In the end, this article reminded me that while data is incredibly powerful, it’s only as good as the way we interpret and use it.
Credit: Harvard Business Review, September–October 2024.