Why Policy Makers Rarely "Follow the Science"
How Values and Biases Shape the Use of Evidence in Decision Making
A previous version of this post was published on Psychology Today.

Climate change. Vaccination. Nuclear power. Organic food. Genetically modified organisms (GMOs). These are just some of the complex issues that have been the focus of policy debates in recent years. For every single one of them, there is insight to be gleaned from looking at the science. Policymakers frequently invoke the phrase "follow the science" as a guiding principle for their decisions, claiming to prioritize evidence over ideology. But how often do these claims reflect reality?
As Head (2015) noted in his discussion of evidence-informed policymaking, the relationship between evidence and policy is rarely straightforward. While the systematic use of research and evaluation is championed as a cornerstone of effective governance, the application of evidence is often inconsistent, shaped by political priorities, organizational practices, and societal values. This creates a gap between the ideals of evidence-based decision-making and its practice in the political arena.
In this context, it is essential to consider how policymakers interpret and use evidence. What counts as evidence? How is it evaluated? And, critically, how do ideological biases influence the way evidence is framed and applied? This post explores these questions, examining the challenges of following the science in a world where evidence is mediated by values, priorities, and political dynamics.
Science Tends to Be Really Messy
In a prior post, I argued that science is defined as (1) a systematic (2) way of building and organizing knowledge that (3) requires testable explanations and predictions. I further argued that “works that lack systematic rigor or fail to generate testable predictions…fall outside the realm of true science.” However, even meeting these basic criteria does not guarantee that the resulting science is of high quality.
This leads to some issues with “following the science.” First, not every thinkpiece, journal article, or published report should be considered science. While many thinkpieces offer compelling arguments[1], the extent to which these arguments are grounded in rigorous scientific evidence—or whether they cherry-pick studies to support preconceived conclusions—varies widely[2].
Second, not all “academic” articles represent high-quality scientific evidence[3]. Simply publishing in a journal does not ensure that the work represents robust or reliable science. Peer review, while intended to act as a gatekeeping mechanism, is not immune to flaws or biases, further complicating the picture.
Third, scientists themselves are not impervious to biases. Ideological beliefs, funding sources, or professional incentives can (un)wittingly shape the questions scientists choose to study, the methods they employ, and how they interpret their findings. These biases can subtly influence the trajectory of scientific inquiry, sometimes leading to studies designed to confirm pre-existing beliefs rather than challenge them.
When you add this all up, it becomes clear that the average layperson—and the average politician—may lack the tools to critically evaluate what the science actually says. Politicians, in particular, often rely on secondhand information conveyed through advisors, interest groups, or the media. This chain of communication increases the likelihood of biases being introduced through selective framing, incomplete evidence, or misinterpretation of findings.
Following the science, then, is far from straightforward. It requires discerning high-quality evidence from noise, recognizing the limitations and biases inherent in scientific inquiry, and critically evaluating how evidence is presented and applied in the policy-making process. These complexities underscore the importance of approaching science with both a critical eye and an understanding of its inherent messiness.
Ideological Biases & Values
We tend to surround ourselves with people who happen to be more like us—not just in terms of physical traits (e.g., age, sex, race) but also in how we perceive the world. These shared perspectives often reflect a deeper alignment of values, which shape the ideological biases we bring to resolving problems and making decisions.
Ideological biases are not the same as values but are derived from them. Values are broad, guiding principles (e.g., freedom, security, equality) that help us evaluate trade-offs in decision making. Ideological biases, by contrast, represent the application of those values through a specific belief system. For example, a libertarian ideological bias may emphasize personal autonomy as the highest value, while a progressive ideological bias might prioritize equality over autonomy. In this way, ideological biases reflect not just what we value but how we interpret and apply those values within the framework of a broader ideology.
Although it’s tempting to categorize ideological biases along simplistic lines, such as left versus right or conservative versus liberal, the reality is much more nuanced. AllSides (2020), for example, provides a list of 14 ideological biases that are an outgrowth of people’s values, background, and prior experience. While not authoritative or exhaustive[4], the list highlights how biases manifest in diverse and complex ways that defy neat classifications.
Ideological biases, in themselves, are not inherently problematic. They act as interpretive lenses, helping individuals evaluate and prioritize decision options. For instance, someone with a strong bias toward fiscal conservatism might prioritize the financial cost of a government program, while someone less fiscally conservative might prioritize the potential societal benefits of the program. Politicians, like anyone else, bring these biases into their decision making, often leading to differing conclusions about the same policy proposal. These differences often reflect how individuals weight competing considerations rather than an objective "right" or "wrong" conclusion[5].
These differences can be valuable if diverse viewpoints are given due consideration. However, excluding or marginalizing conflicting viewpoints risks creating one-sided decisions, increasing the likelihood of flawed outcomes and unintended consequences[6]. Complex decision making benefits from the tension that arises when diverse perspectives are debated and weighed.
Moreover, biases are not uniform across issues. A person might demonstrate an authoritarian bias regarding gun ownership, driven by security-focused values, but a libertarian bias concerning abortion, driven by values of personal freedom and autonomy. This variability often occurs due to issue-specific value activation: the specific values activated in a given context shape the biases that emerge (Verplanken & Holland, 2002)[7]. However, ideological biases do more than amplify values; they frame those values within doctrinal beliefs and relational dynamics, further polarizing how issues are perceived and prioritized.
Bias Strength
The strength of a bias plays an important role in how likely individuals are to revise their conclusions in light of new, conflicting information. The stronger the bias, the less likely individuals are to adjust their conclusion, even when presented with high-quality evidence that challenges their position. For instance, someone who is deeply committed to opposing restrictions on gun ownership is likely to scrutinize, dismiss, or find flaws in evidence supporting the potential benefits of such restrictions, rather than revising their stance.
Bias strength also varies among individuals depending on the issue at hand. Not every topic activates strongly held values for everyone, which leads to variability in the degree of bias people bring to a given issue. This variability often results in at least four distinct groups in political debates, as illustrated in Figure 1[8]. These groups reflect differences in bias strength and value alignment, illustrating how biases influence decision making.
Groups 1 and 2: These groups tend to hold stronger biases that increase the likelihood of reaching a predetermined conclusion (e.g., supporting or opposing Position A). Members of these groups are less likely to shift their views, even when confronted with evidence that directly contradicts their position[9].
Groups 3 and 4: These groups are less strongly biased and, consequently, more open to considering and integrating new information, even when it conflicts with their initial position.
Political parties, special interest groups, and other constituencies often exploit these biases to craft persuasive platforms. By emphasizing specific values (e.g., security, freedom, equity), they activate corresponding biases to rally support for their positions. For example, gun control advocates may frame the issue in terms of safety and security, while opponents may focus on personal freedom and constitutional rights.
The framing of issues also plays a pivotal role in bias activation. Sher and McKenzie (2006) demonstrated that whether a glass is described as half full or half empty depends on whether it was previously full or empty; the chosen frame leaks information about the speaker’s reference point. Similarly, political debates often frame issues in ways that activate specific values and biases, influencing how evidence is interpreted and decisions are made. This dynamic underscores how bias strength and framing interact to shape policy decisions.
The Challenge of Following the Science
To understand the difficulty of “following the science,” let’s revisit some key points:
Science is inherently messy: Differentiating between signal and noise, as well as between high-quality science and nonscience, is challenging.
Bias strength affects decision making: The more strongly people value something, the more likely they are to develop a bias that skews their interpretation of evidence.
Scientists are not immune to biases: While the process of science is designed to minimize bias, scientists’ own values can influence research questions, methodologies, and interpretations.
Policymakers operate within value-laden frameworks: Politicians align themselves with constituencies whose values resonate with their own, creating a feedback loop that reinforces bias[10].
Selective framing shapes evidence evaluation: Constituencies and interest groups often present evidence in ways that align with their values, increasing its acceptability to aligned policymakers.
Given these factors, it becomes nearly impossible for policymakers to “follow the science” without their biases influencing how they interpret and act on evidence. Even when scientific consensus exists, applying it to policy involves subjective judgments about trade-offs, such as weighing the costs and benefits of tax cuts, border policies, or climate initiatives. Science alone cannot dictate these decisions; they inherently involve values and priorities.
This interplay of bias, framing, and values reveals the challenges of evidence-informed decision making. While scientific evidence is often important as an underlying aspect of policy, acknowledging the influence of biases and values is essential for crafting policies that balance evidence with societal priorities. Ignoring these complexities risks oversimplifying the relationship between science and decision making, leading to misleading claims about what it means to “follow the science.”
Notes
1. Including—hopefully—this one.
2. Peer review is supposed to help with this when it comes to scientific research, but the degree to which it effectively does so is debatable.
3. I have even seen beliefs espoused by the general public that the presence of an article in an academic journal ipso facto means it is high-quality evidence.
4. There are almost always opportunities to identify nuances that complicate any higher-order categories.
5. What pundits and those on social media often fail to recognize is that their agreement with a decision may be based on the (mis)alignment between their biases and the decision that was made, rather than on whether it was objectively a correct or incorrect decision.
6. We have seen this in many decisions in which sufficient attention was not paid to those with different biases, such as the intelligence community’s conclusion that Saddam Hussein had weapons of mass destruction and Kennedy’s decision to approve the Bay of Pigs invasion.
8. We could complicate this by expanding the number of values that are strongly activated, which would lead to substantially more than four groups. This regularly occurs when, for example, politicians endorse one aspect of a bill but not another. This likely results from the moderate to strong activation of one or more values that align with the position alongside one or more values that conflict with it. Thus, Group 3 might become Group 3A, Group 3B, etc.
9. These groups represent a slight alteration in nomenclature from the categories I used in my post on the Ideological Lens. The reason is that these categories do not necessarily reflect ideological thinking, as strongly activated values do not necessarily result from the doctrinal beliefs of an ideology. They also do not necessarily possess the relational element necessary for zealotry or aversive non-adherence.
10. This is a big reason why it is important to have representatives of all four groups shown in Figure 1. Without representation, it is much more likely that decisions will be heavily weighted toward confirmatory evidence, with much less attention given to refuting evidence. Furthermore, if an insufficient number of people fall into Groups 2, 3, and 4, there is little need for actual consideration of competing evidence before decisions are made, increasing the likelihood that new laws and regulations will fail to have their intended benefits and may result in unintended consequences.
Comments
I was glad to read this article, Matt. It's a pithy summary that would be good to trot out when chatting to policy analysts, and I want to bookmark it.
But in determining what policy ends up being implemented (and how), I think there are social factors that matter more than the psychology of policy-design.
Regardless of how well policy is designed (and you can do a great job of making it evidentiary and designing it for robust implementation), policy-selection is generally prioritised based on sociopolitical factors. That's patently true in democratic legislatures (notorious for hiding how the sausage is made), but you can equally find the same in business and community policy (think body corporates, social clubs, and sporting groups).
Outside academia and the idealised world of policy analysis, I can think of very few examples where policy-selection is based purely on evidence of cost-efficacy, even where that evidence exists.
If I had to distill it, I'd say that while policy validity might have a lot to do with robust evidence, consensus-building seldom does.
Or to unpack mechanisms more, you could look at the political science pub *The Dictator's Handbook: Why Bad Behaviour Is Almost Always Good Politics* (2011). The 'bad behaviour' described centres on the deliberate implementation of bad policy. It explains how selection mechanisms work against policy-evidence but optimise cronyism, regardless of what decision-making system is used.
TL;DR: Psychological bias matters; 'small p' political objectives matter more.