Why More Evidence Isn’t Always Better: Part 1
How ecological rationality reshapes how we think about expert judgment and evidence use
From medicine to management (and countless other disciplines), evidence-based decision making has become a point of emphasis. And it makes sense. After all, who wants to make decisions that aren’t based on evidence?
While it sounds straightforward enough—base your decisions on evidence—it’s a lot more complicated than it seems. Which evidence should we use? How much is enough? And how do we pull it all together in a way that actually improves the decision?
Think about choosing a movie. You could analyze every genre, compare critic scores, watch trailers, and poll your friends. But most of us either go with something that “looks good” or waste 40 minutes scrolling until we’re too tired to care. It’s not about finding the best option—it’s about finding one that’s good enough, fast enough.
And that’s often how we approach more consequential decisions too—even when we don’t realize it.
At one extreme, we could try to collect all the evidence, sort through it, and use it as the basis for our decision. But that approach ignores a few important realities: not all evidence is equally relevant, not all of it is high quality, and humans have limits when it comes to processing and weighing information.
At the other extreme, we might rely entirely on intuition—trusting our gut without looking for additional input. But that assumes our intuitive response is reasonable, that we have the expertise and experience to make a good call without any other evidence, and that more evidence wouldn’t add value.
As is often the case, effective evidence-based decision making typically lies somewhere in between these two extremes. That was the core argument a few colleagues and I made in an article published in Organizational Psychology Review1.
Here, I am going to provide a less academically dense version of our argument. I originally planned to tackle all this in a single post. But as I started fleshing out what that might look like, it became clear that one long post wasn’t feasible. So I’ve broken it into three parts, with links to some of my prior Substack posts as a way to keep things more concise.
This post focuses on the issue we were wrestling with at a more abstract level: the inevitable tension that arises when we pit the classical decision model against one focused on ecological rationality. In the next post, I’ll dig deeper into the utility of evidence and when it makes sense to seek out more of it. The final post will wrap things up with an integrative and applied perspective.
What Makes a Decision Rational, Anyway?
Most of us were taught—explicitly or implicitly—that good decisions follow a classically rational process. You identify the problem, gather information, weigh the pros and cons of each option, and then choose the best one. It’s neat, linear, and logical.
This approach is often referred to as the classical decision model. It’s built around a set of assumptions that sound perfectly reasonable in theory:
The problem is clearly defined.
The goals are well understood.
All possible options can be identified.
Options can be evaluated objectively.
You can identify the best or optimal option.
If that sounds familiar, it’s because this model underlies a lot of business school training, consulting frameworks, and organizational decision tools. It’s often treated as the rational or correct way to make decisions—especially when we want to demonstrate that we’ve thought things through carefully and systematically.
And in ideal conditions—with unlimited time, clear goals, and access to perfect information—this approach really can work. That’s why it’s often effective in domains like aviation or medical diagnostics, where decisions are high-stakes, variables are well defined, and strict protocols support comprehensive evaluation.
But in most contexts, those ideal conditions don’t hold—and trying to force the classical model anyway can create more problems than it solves. In the real world, most decisions are made under uncertainty. We often don’t know what the outcomes will be or even what all the options are. The goals might be ambiguous or in conflict. And even when we can identify a handful of viable options, we’re constrained by time, attention, cognitive bandwidth, and competing priorities.
That’s not to say people never try to follow the classical model. In some cases, they do—and in highly structured or high-stakes environments, it might even be encouraged. But the real issue is this: when the model’s assumptions don’t hold—and they often don’t—then why would we treat it as the rational way to make decisions?
In those cases, the model doesn’t just create extra work—it can actively misguide us. We may end up focusing on low-quality evidence that feels compelling, overlooking time-sensitive opportunities, or applying standards that don’t reflect the actual goals of the decision. The cost isn’t just inefficiency—it’s misalignment.
Classical rationality’s emphasis on collecting all the evidence, analyzing every option, and finding the single best or optimal solution is extremely resource-intensive—and so we need to weigh the costs and benefits of such an approach. It takes time to gather that much evidence. It takes effort to analyze and synthesize it. And it takes additional time and effort just to make the decision. If the costs of that process outweigh the benefits of a slightly better outcome, then it’s not a rational use of your resources—it’s overkill.
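To make that trade-off concrete, here’s a toy calculation. The dollar figures, accuracy estimates, and hourly cost are all invented for illustration—nothing here comes from the article itself:

```python
# Toy illustration: when does gathering more evidence stop paying off?
# Every number below is an assumption made up for this example.

def net_value(decision_value, accuracy, hours_spent, cost_per_hour):
    """Expected payoff of the decision, minus the cost of the process."""
    return decision_value * accuracy - hours_spent * cost_per_hour

# A quick, satisficing process: decent accuracy at low cost.
quick = net_value(decision_value=10_000, accuracy=0.80,
                  hours_spent=2, cost_per_hour=150)       # 8,000 - 300 = 7,700

# An exhaustive, classical process: slightly better accuracy, much higher cost.
exhaustive = net_value(decision_value=10_000, accuracy=0.85,
                       hours_spent=40, cost_per_hour=150)  # 8,500 - 6,000 = 2,500

print(f"Quick process nets {quick:,.0f}; exhaustive process nets {exhaustive:,.0f}")
```

Under these made-up numbers, a five-point gain in accuracy costs twenty times the effort and leaves you worse off overall. The point isn’t the specific figures; it’s that the marginal value of more evidence has to be weighed against the marginal cost of getting it.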
And that’s where ecological rationality comes in.
Ecological rationality doesn’t reject the idea of using evidence to make good decisions. What it rejects is the idea that a reasonable decision is always the one that follows an idealized, comprehensive process. Instead, it defines a decision as rational to the extent that it fits the structure of the environment in which it’s made. That includes alignment among the goals, the constraints, the available information, the decision maker’s cognitive resources, and even the specific heuristics employed.
From this perspective, more evidence isn’t always better. What matters is whether the information you’re using is good enough to help you make a sufficiently confident, context-sensitive choice. Sometimes that means slowing down and seeking more input. Other times it means trusting an intuitive response that’s been shaped by expertise and experience.
The key shift is this: the quality of a decision isn’t judged by how thorough the process was—it’s judged by how well the decision strategy matches the situation. In that sense, ecological rationality isn’t about cutting corners or avoiding analysis. It’s about making decisions that make sense given the realities of the environment.
Further adding to the tension is that we rarely make decisions the way the classical model prescribes. Even when we try, we often fall short—not because we’re lazy or irrational, but because the demands of the situation don’t allow for it. Instead, we rely on something much more automatic and experience-driven: an intuitive sense of what the right call might be. And that’s where we turn next—how people actually make decisions, and why intuition plays a much bigger role than we often acknowledge.
How We Actually Make Decisions
In a prior post, I talked about how decisions often start with an intuitive sense of what the right call might be. That initial reaction—what dual-process theory would label a System 1 response—is fast, automatic, and typically shaped by our expertise and prior experience. We don’t reason our way into it. It just feels like the right move. As I wrote there:
Almost every decision produces one or more quick, cognitively frugal intuitive responses. These responses are influenced by both stored knowledge from past experiences and novel information available in the present situation (Pennycook et al., 2015). They rely on salient pieces of information as the basis for initiating possible decisions. If a response (let’s call it Intuitive Response 1 or IR1) emerges quickly or inspires sufficient confidence, we may adopt it. If not, we expand our search for additional information before finalizing the decision. This expanded search may either reinforce confidence in IR1 or lead to an alternative response (e.g., IR2, IR3, or a previously unconsidered option).
That’s not a flaw in how we make decisions. It’s how we’re wired. The intuitive response provides a starting point, not a final answer. And it reflects the way humans have evolved to navigate complexity: by recognizing patterns, applying learned shortcuts, and making satisficing—or good enough—judgments without overloading the system.
Sometimes that intuitive response is enough. Especially when we have relevant experience or the stakes are low, it often makes sense to go with what feels right. But other times, intuition alone doesn’t cut it—either because we lack sufficient experience, the situation is unfamiliar, or something in our intuitive response doesn’t quite align with the available information2. In those cases, we pause, re-evaluate, and often feel compelled to expand our search for additional evidence before committing to our intuitive response.
Of course, this process isn’t foolproof. We don’t always notice when something is off. Conflicting cues can be subtle, easy to overlook, or simply dismissed if they don’t register as meaningful. And even when we do notice that something doesn’t quite line up, we don’t always slow down or seek more input. Sometimes we plod forward anyway—because we’re tired, short on time, or already leaning toward a preferred outcome. In those moments, it’s not just bounded rationality at work; it’s motivated reasoning—our tendency to favor conclusions that align with what we already want to believe, even if the evidence suggests otherwise.
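If it helps to see that loop spelled out, here’s a minimal sketch in Python. The confidence threshold, the toy intuition, and the evidence function are all placeholders I’ve invented to illustrate the general pattern; this is not a model from the article:

```python
import random

# Minimal sketch of the intuition-first decision loop described above.
# The threshold, options, and confidence numbers are invented placeholders.

def decide(intuition, gather_evidence, threshold=0.8, max_expansions=3):
    """Adopt the intuitive response (IR1) if it inspires enough confidence;
    otherwise expand the search one layer at a time."""
    response, confidence = intuition()                 # fast, frugal IR1
    layer = 0
    while confidence < threshold and layer < max_expansions:
        layer += 1                                     # Search Expansion 1, 2, ... n
        response, confidence = gather_evidence(response, confidence, layer)
    return response, confidence, layer

# Toy stand-ins: intuition proposes option A with modest confidence,
# and each layer of search nudges confidence up (or occasionally flips the call).
def toy_intuition():
    return "option A", 0.55

def toy_gather(response, confidence, layer):
    new_confidence = min(confidence + random.uniform(0.05, 0.20), 0.99)
    if new_confidence < 0.60 and random.random() < 0.3:
        response = "option B"                          # conflicting cues surface IR2
    return response, new_confidence

print(decide(toy_intuition, toy_gather))
```

Note what the loop does not do: it never gathers evidence for its own sake. Search expands only when confidence falls short, which is the satisficing logic in miniature. It also leaves out the failure modes above—nothing in the sketch notices subtle conflicting cues or corrects for motivated reasoning.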
When Ecological and Classical Rationality Align (and When They Don’t)
So far, we’ve looked at how most real-world decisions begin with intuition—and how ecological rationality helps explain when those intuitive responses are enough and when they’re not. But that raises a broader question: how do these ideas connect to the classical decision model we started with? Are these two views of rationality always at odds—or can they be reconciled?
The answer is: they can align—but only under the right conditions.
Classical rationality tends to align with ecological rationality when the stakes are high, expertise and experience are limited, the decision context is unfamiliar, and there’s sufficient time and resources to support a more comprehensive process. In those cases, it makes sense to apply a more structured, classically rational approach—what amounts to a backward elimination process, where the goal is to systematically weed out less desirable options until we arrive at a confident decision.
But when some baseline level of expertise exists, or when the decision maker has relevant experience to draw on, it may make more sense to use a forward selection process—starting with an intuitive response (IR1) and expanding as needed. That doesn’t mean the resulting process will always be quick or minimal. In fact, a forward selection approach might ultimately approximate a classically rational process in terms of time and effort. The difference is that expansion is strategic, not assumed. It’s driven by what the situation demands in order to reach a satisficing outcome—not by the presumption that more evidence is always better.
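The terms borrow language from statistical model building, and the contrast is easy to sketch in code. The options, scores, and satisficing threshold below are my own toy assumptions, not anything from the article:

```python
# Toy contrast between the two strategies. Options, scores, and the
# "good enough" threshold are invented for illustration.

options = {"A": 0.62, "B": 0.71, "C": 0.55, "D": 0.80}

def backward_elimination(options, keep=1):
    """Classical-style: start with every option on the table, then
    weed out the weakest one at a time until a choice remains."""
    pool = dict(options)
    while len(pool) > keep:
        weakest = min(pool, key=pool.get)
        del pool[weakest]                  # every option gets evaluated
    return pool

def forward_selection(options, first_guess, good_enough=0.75):
    """Ecological-style: start from the intuitive pick (IR1) and
    expand the search only while it falls short of satisficing."""
    best, considered = first_guess, [first_guess]
    for name in options:
        if options[best] >= good_enough:   # good enough: stop searching
            break
        if name not in considered:
            considered.append(name)
            if options[name] > options[best]:
                best = name
    return best, considered

print(backward_elimination(options))       # {'D': 0.8} after scoring all four
print(forward_selection(options, "D"))     # strong IR1: ('D', ['D'])
print(forward_selection(options, "B"))     # weaker IR1: expands until satisfied
```

In the forward case, the amount of search is an output of the process, not an input: a confident first response ends it immediately, while a shakier one can expand until it looks a lot like the classical process. That’s the sense in which expansion is strategic rather than assumed.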
This layered decision process is captured in the visual I’ve included above, which depicts decision making as a series of concentric circles. At the center is IR1, our intuitive response—typically guided by the recognition heuristic. As we move outward, each layer reflects a deeper level of search. Search Expansion 1 occurs when conflict, unfamiliarity, or lack of experience prompts us to question IR1. Search Expansion 2 reflects further input-seeking to increase confidence. The outermost layer, Search Expansion n, represents the most extensive and effortful process—one that aligns with the assumptions of traditional evidence-based decision making and the classical decision model. Ecological rationality doesn’t deny the value of that level of expansion—it just reminds us that such an approach only makes sense when the context truly warrants it.
In Part 2, I’ll dig deeper into one piece of that equation: the role of evidence—what makes it useful and why that utility will vary depending on the decision context.
The link is to the pre-print on Researchers.One. For the actual formal reference, see Grawitch et al. (2025).
This can happen when the situation elicits multiple conflicting intuitive responses, or when we notice context-relevant information that doesn’t quite align with our initial impression.