Why More Evidence Isn’t Always Better: Part 2
What Makes Evidence Useful Depends on the Decision Context
In the last post, I explored the tension between classical and ecological rationality—two ways of thinking about how to approach decision making. Classical models prioritize structure and comprehensiveness, while ecological rationality emphasizes fit: matching the decision strategy to the demands of the environment. One key insight was that decision quality isn’t just about how much evidence we use—it’s about whether the evidence we use is actually useful for the decision at hand.
But that usefulness depends on more than just the volume or type of evidence—it depends on how we define evidence in the first place. And that brings us to this post: What counts as evidence? And what makes it useful?
What Is Evidence, Really?
At the most basic level, evidence is just information that helps us decide what to do. That includes scientific studies and quantitative data, of course—but also stakeholder feedback, situational cues, organizational knowledge, prior experience, our emotions, and even the intuitive patterns we’ve internalized over time.
In practice, though, we tend to treat evidence as if it only applies to certain types of information—especially information that comes from formal, external, or scientific sources. That’s understandable. In many academic and professional contexts, it’s common to elevate peer-reviewed research or statistical data to the top of the evidence hierarchy.
But real-world decision making rarely works that way.
In most contexts, evidence comes in many forms—and its usefulness depends less on how it was produced and more on whether it helps us make a better decision in the situation we’re actually facing.
Let’s use a couple of examples to illustrate.
In the first example, a company is deciding whether to introduce a remote work policy. Sure, leaders can review the scholarly empirical evidence on the topic, and it may add some value. But unless they also consider current productivity trends, input on the needs and preferences of employees and managers, and data from IT regarding support for remote infrastructure, the decision is likely to be shortsighted.
In the second example, we’re deciding whether to switch to a new software platform. In this instance, there likely isn’t much in the way of scholarly empirical evidence, so we turn instead to performance benchmarks. Unsurprisingly, the data provided by the vendor looks great—but so does the data provided by most vendors. What we really need is a mix of other evidence: IT input on implementation demands, employee feedback on usability, historical data on past rollouts, and projected cost savings from the folks in Finance. Each type of evidence answers a different question—and together, they help ensure the decision accounts for both technical and human realities.
While there’s some overlap in the evidence that’s useful across the two scenarios, the types of information that matter most—and the weight we give them—depend heavily on the context. What makes a piece of evidence useful in one decision might make it irrelevant, or even misleading, in another. And that’s the key point: quality can’t be judged in the abstract.
Evidence Quality Isn’t Absolute
In academic and policy settings, certain types of evidence—like randomized controlled trials or meta-analyses—are often treated as the gold standard. And in some contexts, that’s warranted. If you're choosing between two medications, a well-designed RCT is likely to be more useful than someone’s personal experience.
But outside those narrowly controlled environments, quality isn’t so easily ranked. We often assume some evidence is better than others—without first asking what we're actually trying to decide or whether that evidence helps us do it.
A meta-analysis might offer statistical power but provide little insight into a specific organizational decision. Stakeholder perspectives may not generalize, but they can offer relevant situation-specific information. Put differently: the quality of evidence is relative. It depends on what you’re trying to decide, what constraints you’re facing, and what kinds of information will actually move the decision forward.
Decision Criteria Shape What Counts as Good Evidence
Evidence isn’t just filtered through the situation (i.e., the decision context)—it’s filtered through the values, goals, and constraints of the person (or group) making the decision. That psychological context (the other aspect of the frame of reference) helps define the decision criteria: the standards we use to judge whether one option is better than another.
And here’s the key: those criteria are rarely universal. They stem from the underlying values that matter to someone in a particular context. For example, when buying a car, one person might prioritize how it looks or feels to drive, while another might focus more on fuel efficiency or long-term maintenance costs.
And in many situations, our decision making is even more complicated because we’re not just focused on a single criterion. We’re often trying to balance multiple goals—evaluating which criteria we prioritize, which trade-offs we’re willing to make, and what good enough looks like given the situation. Buying a car might involve sacrificing a bit of style for better gas mileage or paying slightly more upfront for a feature that reduces future hassle. And evidence can’t tell us how to weigh those trade-offs. Those judgments depend on the values and priorities of the person making the decision.
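To make that concrete, here’s a minimal sketch in Python (the scores, weights, and weighted-sum rule are all invented for illustration; real buyers weigh things far less tidily) showing how the same evidence about two cars points to different choices depending on how the criteria are weighted:

```python
# Hypothetical sketch: the same "evidence" about two cars, scored 0-10 on each
# criterion, ranked under two different sets of values (criteria weights).

cars = {
    "Car A": {"style": 9, "fuel_efficiency": 5, "upfront_cost": 6, "maintenance": 5},
    "Car B": {"style": 6, "fuel_efficiency": 9, "upfront_cost": 7, "maintenance": 8},
}

# Decision criteria expressed as weights; these reflect values, not evidence.
style_first   = {"style": 0.5, "fuel_efficiency": 0.2, "upfront_cost": 0.2, "maintenance": 0.1}
economy_first = {"style": 0.1, "fuel_efficiency": 0.4, "upfront_cost": 0.2, "maintenance": 0.3}

def weighted_score(scores, weights):
    """Simple weighted sum: how well an option satisfies a given set of priorities."""
    return sum(scores[criterion] * weight for criterion, weight in weights.items())

for label, weights in [("Style-first buyer", style_first), ("Economy-first buyer", economy_first)]:
    ranked = sorted(cars, key=lambda car: weighted_score(cars[car], weights), reverse=True)
    print(label, "prefers", ranked[0])
```

Nothing in the attribute scores changes between the two buyers; only the weights do. That’s the sense in which the evidence alone can’t settle the trade-off.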
And those criteria shape which kinds of evidence are actually useful.¹ They help define which information is relevant, how trade-offs should be evaluated, and what it means to reach a workable outcome. Without clear criteria, it’s hard to know whether a piece of evidence moves us closer to a good decision or just adds noise.
In practice this means that evidence is seldom inherently high quality or universally relevant. Its utility depends on how well it supports evaluation based on decision criteria and situational constraints. Information that’s useful in one context may be irrelevant—or even misleading—in another.
A predictive attrition model might be helpful when you’re planning next year’s staffing budget—where the goal is accuracy over the long term. But if a department manager calls in sick and you need to decide who’s covering their workload today, that same model is useless. What matters more in that moment is a quick check-in with the team lead who knows everyone’s capacity—or even your own mental snapshot of who’s up to speed.
This is exactly where traditional approaches to evidence-based decision making often fall short. They tend to rely on classical assumptions: that scientific evidence is inherently superior, that there’s a prescriptive process for how evidence should be gathered and used, and that intuition is a low-quality input to be minimized or corrected.
But those assumptions overlook the fact that intuition often shapes the decision process itself. It helps us identify what matters, narrow the field of options, and judge whether a decision feels workable in the current context. Ecological rationality doesn’t dismiss scientific evidence—it reframes its value based on how well it fits the demands of the decision.
That distinction also highlights a core limitation of classical models: their inefficiency. A more classically rational approach assumes that we begin by identifying all possible options—say, every car on the market—and then evaluate them against a comprehensive set of criteria to determine the best one. Only after compiling that list do we decide which evidence we need to compare them.
An ecologically rational approach works differently. It often begins with an intuitive response—a few car models that come to mind based on past experience or early research—and then expands outward as needed, gathering evidence selectively to evaluate those options in context. The same goes for problem solving. If we have a problem and an intuitively plausible solution, we don’t stop to brainstorm every possible alternative before diving into the evidence. We start with what seems promising and only expand the search if something feels off. Ecological rationality treats evidence search as a goal-directed activity—guided by fit and utility, not exhaustive coverage.
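Here’s a rough illustration of that contrast as a toy sketch (the option list, scoring function, and “good enough” threshold are all invented; evaluate() stands in for whatever evidence gathering a real decision would require):

```python
import random

OPTIONS = [f"car_{i}" for i in range(200)]   # "every car on the market" (toy list)
evaluations = 0                              # rough proxy for search effort

def evaluate(option):
    """Pretend to gather and weigh the evidence for one option (costly in real life)."""
    global evaluations
    evaluations += 1
    return random.random()                   # stand-in for a quality judgment, 0 to 1

# Classical approach: score every option, then pick the maximum.
def exhaustive_choice(options):
    return max(options, key=evaluate)

# Ecologically rational approach: start with a few intuitive candidates and
# stop at the first option that clears a "good enough" bar (satisficing).
def satisficing_choice(intuitive_candidates, all_options, good_enough=0.8):
    for option in intuitive_candidates + all_options:   # duplicates don't matter in this toy
        if evaluate(option) >= good_enough:
            return option
    return intuitive_candidates[0]           # fall back if nothing clears the bar

random.seed(1)
exhaustive_choice(OPTIONS)
print("exhaustive:", evaluations, "evaluations")

evaluations = 0
satisficing_choice(["car_12", "car_47"], OPTIONS)
print("satisficing:", evaluations, "evaluations")
```

Run it and the exhaustive version evaluates every option on the list, while the satisficing version usually stops after a handful. That gap in effort is the inefficiency the classical model accepts in exchange for a guarantee of finding the single best option, a guarantee most real decisions don’t need.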
Bringing It Into Focus
Across these first two posts, the case should now be clear: good evidence isn’t defined by format, volume, or prestige—it’s defined by usefulness. Classical models often assume that better decisions come from collecting more evidence and evaluating all the options. But ecological rationality reminds us that decision making is almost always constrained by time, energy, and uncertainty—and that what counts as “good evidence” depends on how well it helps us reach a workable, context-sensitive decision.
In the final post, I’ll pull these ideas together and focus on application. How do we decide when we have enough evidence? How do we know when to keep searching, and when to stop? And how do we adapt our decision-making strategy so we don’t fall into the trap of overthinking—or underthinking—the decisions that matter most?
That’s where we’ll go next.
¹ And, correspondingly, which evidence can be filtered out.