Are Cognitive Biases Systematic Errors—or Adaptive Tools?
Why cognitive biases aren’t as error-inducing as we’ve been led to believe
A previous version of this post was published on Psychology Today.

When you come across articles about human decision making, whether in the popular press or scientific literature, the term cognitive bias is almost guaranteed to make an appearance. In fact, entire discussions—especially in popular outlets—are often dedicated to cataloging these biases and warning readers about the errors they supposedly cause.
But what exactly is a cognitive bias? The concept generally traces back to the seminal work of Tversky and Kahneman (1974), who defined cognitive biases as “systematic errors” (p. 1124) in reasoning. This definition has shaped how biases are studied, discussed, and even weaponized in arguments about human fallibility. Yet, for all its influence, this definition is deeply flawed. As I’ll demonstrate, most cognitive biases don’t produce the kind of systematic errors implied by this definition—and in many cases, they help us navigate the complexities of decision making with reasonable accuracy.
By clinging to a narrow, error-focused view of cognitive biases, we risk overlooking their more nuanced role in how we think and make choices. To understand why this definition falls short, we need to dig deeper into what Tversky and Kahneman really meant by “systematic errors” and examine the contexts in which biases may—or may not—lead us astray.
Are Cognitive Biases Really Systematic Errors?
Systematic errors can be defined as consistent or recurring mistakes that are predictable in specific contexts. While Tversky and Kahneman (1974) referred to cognitive biases as systematic errors in reasoning, they also, in the very same sentence, noted that “in general, these heuristics [which they claimed were the cause of cognitive biases] are quite useful” (p. 1124). This contradiction raises an intriguing question: If reliance on heuristics that lead to biases is generally useful, how is it that these biases consistently produce systematic errors?
One answer could invoke the law of the instrument—the idea that over-reliance on a specific decision-making strategy can lead to its inappropriate use in certain situations, thereby introducing systematic flaws. While this explanation is intuitively appealing, it rests on a problematic assumption: that any error in a given decision situation (e.g., situation X) must necessarily stem from cognitive bias. This reasoning implies a false dichotomy: if a decision leads to error, it must have been influenced by bias, and if it doesn’t, the decision must have been bias-free1.
This assumption oversimplifies the relationship between bias and error, particularly given the complexity of real-world decision making. Even studies that demonstrate moderate effects of biases in controlled settings fail to address whether those errors are systematic. To qualify as systematic, we would need consistent evidence showing that individuals tend to make the same mistake across highly similar decision situations. However, such evidence is notably scarce in the literature.
Perhaps We’re Talking Widespread Errors Instead
If Tversky and Kahneman’s claim was instead intended to suggest that errors associated with biases are widespread across people, this might explain the effects observed in laboratory studies. Yet, evidence from real-world settings or studies using realistic decision scenarios tells a different story. In most cases, people are not plagued by systematic reasoning errors.
Let’s start with the anchoring effect. Furnham and Boo (2011) reviewed studies on the topic and found that its impact is highly context-dependent. The anchoring effect “results from the activation of information that is consistent with the anchor” but only if “judges consider the anchor to be a plausible answer” (p. 37). When anchors are clearly implausible (e.g., extremely low or high) or when judges have a high degree of confidence in their knowledge, anchors have little effect on judgment. This suggests that anchoring is not a consistent or systematic bias. Rather, it’s a strategy people use when expertise is lacking—they look to available information that may help inform their judgments. Anchors, in this sense, can act as a form of information leakage, providing a starting point rather than an error-inducing trap.
Another example comes from Pennycook and Rand (2018), who studied susceptibility to fake news—a context where cognitive biases are regularly implicated2. In their research, participants who relied on intuitive decision making—presumably where biases are most active—and those who engaged in effortful reasoning—which is thought to override biases—both consistently performed well at distinguishing real news from fake news. Effortful reasoning provided a slight edge, but the difference was not large enough to justify labeling intuitive reasoning as fundamentally flawed or bias-driven.
Finally, we’ll consider Schnapp et al.’s (2018) investigation of cognitive errors in an academic emergency department. Over eight months, encompassing approximately 104,000 visits3, they identified only 271 revisits (0.3 percent), and just 52 of those revisits were linked to physician cognitive errors. This represents a minuscule fraction of total cases, suggesting that cognitive biases were either rarely in play or, when they were, did not consistently lead to errors.
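To put those figures in perspective, here is a quick back-of-the-envelope calculation. This is a minimal sketch in Python using the numbers reported above and the visit estimate described in the notes at the end of this post; the variable names are mine.

```python
# Back-of-the-envelope arithmetic for the Schnapp et al. (2018) figures.
# Assumption (from the notes below): ~156,000 annual visits, spread evenly,
# observed over an 8-month study window.
monthly_visits = 156_000 / 12        # ~13,000 visits per month
total_visits = monthly_visits * 8    # ~104,000 visits during the study period

revisits = 271                       # revisits identified in the study
cognitive_error_revisits = 52        # revisits linked to physician cognitive error

print(f"Revisit rate: {revisits / total_visits:.2%}")                          # ~0.26%, i.e., roughly 0.3 percent
print(f"Cognitive-error rate: {cognitive_error_revisits / total_visits:.3%}")  # ~0.050% of all visits
```

Even if every one of those 52 cases reflected a genuine bias-driven error, that would amount to about one visit in two thousand.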
The literature is replete with findings like the studies reported here. The evidence suggests that most cognitive biases are far more pronounced in laboratory designs, which place participants in unfamiliar situations in which they have limited knowledge or expertise. In such environments, biases are likely to emerge as participants rely on heuristics to navigate novel challenges. However, in real-world contexts, where individuals draw on expertise, experience, and relevant information, these biases appear far less consistent or impactful. This calls into question the validity of labeling cognitive biases as inherently systematic or even widespread errors.
Generally, Biases Are More Likely to Aid Decision-Making Accuracy
It’s clear that people possess biases, and stronger biases often have more pronounced effects. The more convinced we are of something, the less likely we are to change our opinions or derive an alternative conclusion. This tendency is particularly pronounced in situations where evidence is ambiguous or less compelling4.
However, cognitive biases are rarely discussed in the same context as biases more broadly. Biases, in general, are not inherently faulty. They represent tendencies—adaptive shortcuts that guide our thinking and behavior—and they’re beneficial most of the time (Haselton et al., 2009). Yet, cognitive biases are uniquely framed as error-producing mechanisms. How can cognitive biases be so distinct from regular biases?
The answer, of course, is that they aren’t. Cognitive biases have been labeled as error-inducing primarily because researchers focus on tasks designed to highlight the specific situations in which biases lead to incorrect decisions5. This approach overlooks the many contexts in which those same tendencies produce sufficiently accurate (and usually more cognitively efficient) outcomes. In other words, labeling a decision-making tendency as a cognitive bias arises from a selection effect6: researchers focus on scenarios to confirm the hypothesis that a given bias produces errors. Ironically, the process itself reflects a form of confirmation bias7.
So, Where Does That Leave Us?
It’s important to clarify that human decision making is not error-free. People undoubtedly make mistakes in their decision making, and, yes, biases can sometimes contribute to those errors. However, the likelihood of biases introducing errors depends on specific circumstances, including:
Limited expertise or experience: When individuals lack the knowledge or experience needed to rely on effective heuristic strategies, errors are more likely to occur.8
Perceived cost of errors: When one type of error is perceived as significantly more costly than another, people often bias their decisions toward minimizing the more costly outcome. This tendency is central to error management theory.
Strongly held beliefs or values: Motivated reasoning, where decisions are swayed by deeply held or self-serving values and beliefs, can adversely affect judgment.9
Conflicting, ambiguous, or limited information: In situations with insufficient clarity, people are more likely to rely on emotions or default to bias-driven decision making.10
While these scenarios demonstrate how biases can lead to errors, they do not justify the conclusion that human decision making is fundamentally flawed because of biases. Instead, biases should be understood as adaptive tools—strategies that have evolved to help us navigate complex, uncertain, and fast-moving environments. While imperfect, they are remarkably effective in many real-world situations.
There’s also the challenge of evaluating accurate versus erroneous decisions without relying solely on outcomes. Judging decisions purely by their results risks falling into the trap of outcome bias, where the quality of a decision is assessed entirely by how it turned out rather than by the reasoning behind it.
Notes
The test used most often is some adaptation of the Cognitive Reflection Test, which I touched on when writing about the false dilemma put forth in discussions of System 1/System 2 decision making.
I estimated this based on their claim that the hospital received approximately 156,000 annual visits, which equates to 13,000 per month. The study occurred over 8 months, which translates to approximately 104,000 total visits.
Perhaps the best example of this was a study conducted by De la Fuente et al. (2003) regarding jury bias.
This echoes arguments made by Page (2023) and Collins (2015).
This could be called selection bias, but I hesitate to equate bias with error in this regard.
Haselton et al. (2009) argued that biases are often the result of heuristics we employ for deriving fast, frugal, and (quite often) accurate conclusions. Brighton and Gigerenzer (2012) further added that it is a myth that “more time, more information, and more computation would always be better” (p. 7).
This is the phenomenon that produces most laboratory-based evidence of cognitive biases.
This assumes accuracy itself is the sole goal and that there is no benefit to other self-serving interests. Ultimately, it boils down to the subjectively derived acceptability of trade-offs.
Whether this class of errors would really be considered actual error is open to debate.