Why More Evidence Isn’t Always Better: Part 3
Practical Lessons for Evidence-Based Decision Making in Professional Settings

In the first two posts (Part 1 and Part 2), I outlined why the classical model of decision making—which underlies many definitions of evidence-based practice—often fails to (1) reflect how real decisions get made and (2) align with ecological rationality. That doesn’t mean the classical model is inherently flawed, or that it should never be used. But it does assume that we can fully identify all options, gather comprehensive evidence, and systematically narrow down to the best choice. In most real-world situations, that’s not actually feasible (or even possible).
More importantly, the classical model tends to ignore the costs and trade-offs involved in pursuing comprehensiveness. It treats the potential for improvement as justification for additional effort—without asking whether that effort is worth it. By contrast, ecological rationality starts with a more basic premise: that good decision making is about fit. We begin with an intuitive sense of what might work. We expand outward when necessary in order to reach a threshold of confidence in our decision. And we judge the usefulness of evidence not by its format or formality, but by whether it helps us move toward a workable decision for the context in which we’re making it.
That argument has clear implications for how we structure evidence-based decision making (EBDM) in professional (and even personal) settings. In this post, I want to explore those implications—especially the disconnect between how decision making is often prescribed in professional settings and how it actually unfolds in practice. The goal isn’t to dismiss formal processes, but to suggest a more ecologically rational approach to evidence use—one that prioritizes fit, relevance, and purpose over comprehensiveness for its own sake.
Implication #1: Most Decisions Are Biased by the Intuitive Response
Despite the formal structure often prescribed by traditional EBDM models, most decisions in practice don’t begin with a blank slate or a comprehensive search. A doctor doesn’t need to research every possible symptom combination to know what might be ailing you, and you don’t need to brainstorm every possible meal option before jotting down a grocery list.
Instead, these decisions begin with what we referred to in the original article as IR1—an intuitive response to the situation at hand. This initial judgment might be based on recognizable patterns that stem from the match between prior experience and context-specific cues. And whether we recognize it or not, that initial response can bias how we interpret additional information, which alternatives we consider, and how much evidence we seek.
That doesn’t mean IR1 is always wrong—or always right, for that matter. In some cases, it reflects meaningful experience and provides a solid foundation for action. In others, it may be based on outdated assumptions, limited exposure, or incomplete recognition of the current context. A lot depends on the match between prior experience and present conditions—and whether there’s important new information that needs to be taken into account.
Either way, EBDM processes built on the assumption of full deliberation from the start may be mismatched with how decisions actually unfold. People rarely begin by generating all possible options and weighing each one equally. Instead, they start with a small set of plausible, intuitively derived options—perhaps as few as one—and then decide whether it’s worth expanding the search.
For organizations, this has important implications. If we acknowledge that decisions are often shaped early by intuitive responses, we can build processes that don’t rely on intuition blindly, but instead help decision makers evaluate how much it can be trusted. That means developing the skills to reflect on where the response came from, whether the context matches past experience, and whether the situation calls for further search. The goal isn’t to default to intuition—it’s to make a more informed meta-decision about whether it’s enough to guide action or needs to be supplemented or reconsidered.
Implication #2: Expansion Should Be Strategic, Not Assumed
If most decisions begin with an intuitive response, then the role of evidence isn’t to replace intuition—it’s to help evaluate whether that initial response holds up. Sometimes, the intuitive answer is enough. Other times, something about the situation triggers doubt, conflict, or concern. That’s when we expand the search.
In the article, we described this as moving outward from IR1 through a series of search expansions—each one guided by the decision maker’s confidence, the complexity of the problem, and the presence of new or conflicting information. Maybe you’re rolling out a marketing campaign that worked last year, but something about this year’s response rate feels off. That’s a cue to dig deeper.
But here’s the key: expansion isn’t automatic. It’s a response to perceived uncertainty, misalignment, or risk—not a default step in every decision.
Traditional EBDM processes often build in expansion as if it’s always necessary: more evidence, more input, more comparison. But that kind of expansion is only justified when the situation warrants it. Otherwise, it becomes costly and inefficient—especially in fast-paced or resource-constrained environments. Why collect more evidence when you’re already confident the current data points in the right direction?
A more ecologically rational approach would help decision makers recognize when expansion is needed—and how far it should go—based on the context, the stakes, and their degree of confidence in a given response option.[1]
For organizations, this means building decision processes that encourage targeted search—not comprehensive review. That might include checklists or prompts to surface uncertainty (“Does this option feel off?” “Is something missing?”), clear triggers for expanding the evidence base (e.g., high consequences, conflicting stakeholder views), or guidelines for stopping the search once a satisficing option has been identified. The goal isn’t to cut corners—but to make sure that expansion adds value rather than just volume.
Implication #3: Evidence Use Is Inseparable from Decision Criteria
In theory, evidence is supposed to inform decisions. But in practice, evidence only becomes useful when we know what we’re trying to achieve—and what trade-offs we’re willing to make to get there. That means the decision criteria we use—the standards by which we judge whether one option is better than another—play a central role in determining which evidence matters.
Those criteria aren’t fixed. They vary based on context, goals, values, and constraints. In one case, cost might be the driving concern. In another, it’s fairness, speed, or alignment with stakeholder expectations. And in many cases, we’re not just applying one criterion—we’re balancing multiple criteria and making trade-offs. That balancing process determines which kind of evidence we seek, how we weigh it, and what we consider satisficing options.
If you're buying a new laptop, your top priority might be battery life—until you realize you’ll mostly be using it at your desk, and now price or processing speed takes over. But even then, how much processing speed you need likely varies based on what you plan to use it for, which in turn affects what price range you’re looking at.
Traditional EBDM approaches often treat the goal of decision making as improved accuracy—but rarely stop to ask: accuracy in relation to what? That framing assumes that more or better evidence naturally leads to better decisions, without clearly defining the criteria we're trying to satisfy.
If we don’t identify those criteria, we risk searching for the wrong kinds of evidence or overemphasizing information that feels rigorous but isn’t actually relevant. Why would we make battery life a top priority for our laptop when it will seldom, if ever, be unplugged? Why would we allow processing speed alone to govern our decision when we’re working on a budget?
Worse, we may end up interpreting evidence selectively—fitting it to whatever assumed priorities are most salient in the moment, rather than evaluating it in light of clearly articulated goals and constraints. It’s like convincing yourself you need to incur the extra cost of a high-end graphics card—not because you actually need one, but because it happens to be what the best-reviewed model includes.
For organizations, this means decision support systems need to surface and clarify criteria from the start. That doesn’t require formal multi-criteria models. It can be as simple as asking: What matters in this decision? What are we trying to accomplish? What are we willing to trade off? The clearer those answers are, the easier it is to judge which evidence is useful—and which is just noise.
Implication #4: Development Should Focus on Fit, Not Formality
Many organizational approaches to evidence-based decision making emphasize following structured, step-by-step processes: identify the problem, gather the evidence, evaluate the options, choose the best solution. On paper, this looks like a neutral and systematic way to ensure quality decisions. But in practice, it often reinforces a classically rational model—one that assumes decision makers can and should always follow the same comprehensive process, regardless of the context.
Yet that’s the way many professionals were trained, and this kind of training places a premium on formality: using the “right” kind of evidence, following the “correct” sequence, documenting each step. But as I’ve argued throughout this series, good decision making isn’t about formality—it’s about fit. The value of a decision strategy depends on how well it aligns with the context, the decision maker’s experience, and the demands of the situation. A rigid process may look good from a compliance standpoint, but it can backfire when it fails to match how decisions actually need to be made. Why should a manager run a thorough cost-benefit analysis just to approve a $50 software upgrade the team already knows will save hours each week?
Instead of training professionals to treat every decision like a textbook case, organizations should focus on helping them recognize when it makes sense to trust their intuition, when it’s necessary to expand the search, and how to calibrate the amount of evidence they need. That means developing people to think ecologically—not in terms of maximizing thoroughness, but in terms of matching the decision strategy to the task. Sometimes that task is as simple as choosing which client email to respond to first—and you don’t need a formal, three-step process for that!
This doesn’t mean abandoning structure entirely. In fact, structured approaches can help guide thinking and support consistency. But structure should be flexible, not prescriptive—scaffolded in a way that supports judgment rather than replaces it.
Putting It All Together
The goal of evidence-based decision making isn’t to follow a rigid formula or to collect as much evidence as possible. It’s to make decisions that are well-matched to the task at hand—given the goals, constraints, and context in which they’re being made. And that’s where an ecologically rational approach comes in: it helps us focus not on what decision making should look like in theory, but on how to make it work better in the real world.
In the end, the goal isn’t to follow intuition—or to override it automatically. It’s to make smarter decisions about our decision strategies: when to trust our initial response, when to slow down, and how to use evidence more deliberately. An ecologically rational approach doesn’t replace conscious judgment—it helps us sharpen it. And when we do that well, we don’t just make better decisions—we make more contextually grounded ones.
[1] In some cases, that confidence threshold may never be met—especially in high-risk or ambiguous environments. In those situations, sticking with the status quo can itself be a reasonable choice.