A previous version of this post was published on Psychology Today.
In today’s polarized discourse, a tension frequently arises between those who base their positions strictly on scientific evidence and those who prioritize other ways of knowing, such as lived experience. While the term lived experience has gained significant traction, it would be more accurate to describe this concept using the term anecdotal evidence1. The tension has created two increasingly entrenched camps: one elevates anecdotal evidence above science, while the other insists science is inherently superior. This binary perspective oversimplifies decision making and fuels polarization.
Rather than advocating for scientism or the primacy of lived experience, this post explores the nuanced roles both play in decision making. It also serves as the foundation for a series examining how evidence, ideology, and bias shape our understanding and choices.
The Value of Science
At a broad level, science can be defined as “a systematic enterprise that builds and organizes knowledge in the form of testable explanations and predictions about the universe.” This definition highlights three key elements:
It is systematic.
It builds and organizes knowledge.
It requires testable explanations and predictions.
Not all journal articles or reports meet these criteria. Works that lack systematic rigor or fail to generate testable predictions, such as pseudoscience2 or ideologically driven research3, fall outside the realm of true science. Even legitimate scientific studies vary in quality and have limitations—such as small sample sizes or unexamined variables—that constrain their applicability. Moreover, a single study is rarely definitive4; instead, science is a cumulative process that builds organized knowledge over time.
Science’s true value lies in its ability to refine our understanding by ruling in some variables and ruling out others. While science helps us make general predictions about phenomena, its application to specific individuals or contexts can be limited by unknown factors or complexity. For example, drug X might effectively treat condition Y, but variations like genetics or environment can influence its efficacy across different populations.
A systematic approach enables science to refine predictions over time, even when some variables remain unknown. While incomplete knowledge can limit precision, the iterative nature of science ensures it evolves and improves, offering a reliable framework for understanding and addressing uncertainty. But what about the value of lived experience, anecdotal evidence, or just plain old experience?
The Value of Anecdotal Evidence
Initially, I flirted with titling this section "The Value of Lived Experience," but I opted to focus on anecdotal evidence instead after reading Casey (2024). Although the two are related, they are not synonymous. Lived experience refers to the qualitative, first-person perspective of navigating specific life circumstances, offering insights into an individual’s subjective reality. Anecdotal evidence, on the other hand, is a broader concept that encompasses information derived from personal experiences or observations, often used to support or challenge broader claims. Unlike lived experience, which focuses on the richness of individual meaning, anecdotal evidence serves as a type of informal data—sometimes drawn from multiple experiences—used to inform decision making or arguments. For the purposes of this discussion, I will use the term anecdotal evidence to represent the insights we derive from both lived experiences and other personal observations.
Anecdotal evidence is “evidence collected in a casual or informal manner and relying heavily or entirely on personal testimony,” and it plays a role in daily decision making. For example, we adjust our buying patterns based on experiences we’ve had with different vendors or products. Over time, these experiences create biases and inform heuristics we use to navigate decisions.
Anecdotal evidence often addresses aspects of life that science cannot easily quantify. Science can’t tell you that your sister is extremely trustworthy with secrets, but anecdotal evidence from past experiences can. Similarly, while science cannot predict whether you’ll enjoy the food at a new Indian restaurant, your anecdotal evidence—based on prior dining experiences—may offer useful insights.
However, a problem surfaces when we attempt to generalize from “my experience is…” to “…therefore, other people’s experience also is.” The anecdotal evidence on which many of our beliefs and biases are based rarely, if ever, generalizes universally. In some cases, it may not even generalize to anyone but ourselves. Just because you’ve had a bad experience at Burger Barn doesn’t mean all, most, or any other people have had the same bad experience.
This distinction highlights both the value and the limitation of anecdotal evidence. While it’s indispensable for personal decision making, its broader applicability requires careful scrutiny and context. When properly integrated, it can complement, rather than contradict, more systematic forms of evidence, such as science.
Tension Between Science & Anecdotal Evidence
Science emphasizes systematic evidence and testable hypotheses, with conclusions based on analyzing observations collected from samples or series of studies over time. Anecdotal evidence, by contrast, represents the personal observations and experiences we accumulate in daily life. The tension arises because anecdotal evidence often doesn’t align neatly with scientific findings, forcing us to reconcile these discrepancies.
One way to resolve this discrepancy would be to default to science, what might be referred to as scientism. This perspective can be summarized as “Science says, so it must be true.” The problem, though, is that while science provides reliable generalizations, it doesn’t always permit easy application to specific situations or individuals. For example, research might suggest that women, on average, are more extraverted than men, but this doesn’t mean every woman we meet is an extravert5. If we rely solely on science without considering individual or situational factors, we risk:
Ignoring context-specific evidence that could better inform our decisions in the moment.
Failing to act on low-probability outcomes, even when doing so might be warranted.
An alternative approach is to default to anecdotal evidence. This perspective might be expressed as, “If science contradicts my personal experiences, then science is wrong.” Here, anecdotal evidence is elevated as the primary form of evidence, relegating science to a secondary, supportive role. While this approach values personal insights, it creates challenges for policy development and decision making. Policies overly reliant on anecdotal evidence often prioritize the perspectives of a specific stakeholder group at the expense of more generalizable, systematically collected evidence. This can lead to unintended negative consequences and reduced efficacy6.
Neither of these extremes represents an adaptive way of resolving the discrepancy between science and anecdotal evidence. A more balanced approach involves integrating scientific evidence into our stored knowledge—not to replace anecdotal evidence but to recalibrate the assumptions we bring to decision making. For example, we can:
Reduce overconfidence in anecdotal evidence’s universal applicability: Recognize that personal observations are not always representative of broader trends or applicable to others.
Critically evaluate policies: Support policies grounded in systematic evidence while remaining open to the contextual insights that anecdotal evidence provides.
Refine biases and assumptions: Use scientific evidence to refine biases or preconceptions derived from personal experience.
In other words, science can help us to make more informed bets. It allows us to calibrate our predictions and adjust our decisions based on a broader understanding of probabilities and outcomes. This doesn’t mean ignoring anecdotal evidence or situational nuances; rather, it means being more deliberate and reflective about how we weigh these elements in decision making7. Over time, this calibration can make us less overconfident in our assumptions and more attentive to the complexities and subtleties of each unique situation. By carefully weighing the strengths and limitations of each, we can navigate decisions with greater precision and adaptability.
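The idea of using science to calibrate our bets can be made concrete with a simple Bayesian sketch. Here, systematic evidence supplies the base rate (the prior), and a single anecdotal observation updates it rather than replacing it. All of the numbers and the restaurant scenario are hypothetical illustrations, not drawn from any study mentioned in this post:

```python
# A minimal sketch of calibrating a personal belief with base-rate
# evidence via Bayes' rule. Numbers are hypothetical illustrations.

def bayes_update(prior: float, likelihood: float, false_positive: float) -> float:
    """Posterior probability of a hypothesis given one observation.

    prior          -- base rate from systematic evidence
    likelihood     -- P(observation | hypothesis is true)
    false_positive -- P(observation | hypothesis is false)
    """
    numerator = likelihood * prior
    denominator = numerator + false_positive * (1 - prior)
    return numerator / denominator

# Suppose systematic evidence says only 10% of restaurants like this one
# are genuinely bad (the base rate), but you had one bad visit.
# Assume a bad visit happens 70% of the time at a genuinely bad
# restaurant, and 20% of the time at a good one (an off night).
posterior = bayes_update(prior=0.10, likelihood=0.70, false_positive=0.20)
print(round(posterior, 2))  # prints 0.28
```

The single anecdote roughly triples the estimated probability that the restaurant is bad, yet the calibrated belief still falls well short of certainty, which is the balance the paragraph above describes: the anecdote informs the bet without overriding the broader evidence.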
This nuanced interplay between science and anecdotal evidence is only part of the story. In the next post, I’ll delve into how our goals and the pursuit of accuracy can sometimes collide, further complicating decision-making processes and shaping the ways we navigate uncertainty.
I came across an interesting piece by Casey (2024) that discusses what differentiates lived experience from experience, though I’m not sure most of those who use the term lived experience are aware of this distinction. I will get into this distinction to some degree below.
Pseudoscience is a form of non-science that portrays itself as being scientific. More on that in a later post.
Ideologically driven research often fails to meet the criteria for science because it prioritizes a specific outcome over systematic inquiry, often selectively choosing data or methods that support predetermined conclusions, ignoring conflicting evidence or alternative explanations, and framing conclusions in ways that resist falsification.
The media consistently fails to keep this in mind, as we are regularly bombarded with headlines claiming “science says…” while only discussing one particular recent study.
This would be an example of an ecological fallacy.
This happens regularly in government, where laws are passed to placate a particular stakeholder group without considering the potential impacts on other stakeholder groups or whether the policy itself has any scientific evidence to warrant its implementation.
Chin (2020) wrote a good piece that describes some systems that might be helpful on this front, such as Tetlock’s Superforecasting or Duke’s Thinking in Bets.