Nudging: Helping or Manipulating Decision Makers?
How Nudging Shapes Decisions and When It Crosses the Line
While not a direct extension, some of what follows is loosely based on one of my prior posts for Psychology Today.
In my previous post on choice architecture, I discussed how the way choices are structured can shape the decisions we make—sometimes without us even realizing it. In some cases, choice architecture is simply meant to present options in a more neutral manner (e.g., grouping menu items into sections, or requiring the decision maker to make an active choice). But often the structure of choices—whether we’re talking about defaults, framing, or the specific options made available or highlighted—is deliberately designed to steer people toward one of those choices. When this happens, we move beyond general choice architecture and into the realm of nudging.
Nudging, as popularized by Thaler and Sunstein (2021), is based on the idea that people don’t always make optimal decisions for themselves, whether due to inattention, uncertainty, or competing short-term interests. Nudges are designed to gently push people toward choices presumed to be in their best interest without restricting their ability to choose otherwise. And it’s this presumption—that nudging aligns with the decision maker’s best interest—that differentiates nudging from more exploitative uses of choice architecture.
While nudging has been widely adopted in policy, business, and public health, there’s ongoing debate about its effectiveness and ethical implications. Is nudging really an effective tool for improving decision making, or is it an overrated intervention that sometimes backfires? And how different is nudging, really, from other, more exploitative forms of choice architecture? Let’s take a closer look.
Nudging vs. Choice Architecture
Choice architecture is all about how choices are structured, but structuring choices isn’t always the same as willfully choosing a structure to influence behavior in a particular direction—which is what nudging is. So, while all nudges are a form of choice architecture, not all choice architecture involves nudging.
Nudging is an intentional approach to choice architecture that attempts to shift behavior without forcing a particular choice. Thaler and Sunstein (2021) define nudging as a non-coercive intervention that “alters people’s behavior in a predictable way” while keeping all options available. Nudging does not prohibit choices—it simply makes some options more appealing or more likely to be chosen.
For example, consider a cafeteria layout:
Choice architecture is simply how food is arranged—whether it’s by category, alphabetical order, or another system.
A nudge would be deliberately placing healthier food at eye level or labeling some items as “smart choices” to encourage people to select them.
Other classic examples of nudging include:
Opt-out organ donation policies, where donation is the default but people can choose otherwise.
Social norm messaging, like “Most people in your area have already voted,” designed to encourage civic participation.
Calorie labeling on restaurant menus, which makes certain health information more salient but does not prevent diners from choosing less healthy meals.
But this leads to an important question: If nudging is just structuring choices in a way that helps people, why does it generate so much debate? The answer lies in intent and influence.
Choice architecture is unavoidable—every decision environment is structured in some way. Nudging, however, is a deliberate intervention to steer behavior based on what the choice architect has decided is in the decision maker’s best interests. This means that nudging is never neutral—it always assumes what the “better” decision is for people and then steers them in that direction. And it’s grounded in the philosophy of libertarian paternalism.
Libertarian Paternalism
Libertarian paternalism is the idea that nudges help people make better choices while still allowing them freedom to choose otherwise. Unlike traditional paternalism, which outright restricts choices (e.g., banning unhealthy foods), libertarian paternalism preserves choice but guides people toward the right option.
A classic example of libertarian paternalism in action is automatic enrollment in retirement savings plans. People are free to opt out, but since inertia and procrastination often prevent them from enrolling, making savings the default option increases participation rates. Advocates argue that this helps people overcome short-term biases and act in their own long-term financial interests.
This justification assumes two key things:
That people would make “better” choices if they weren’t influenced by cognitive biases.
That choice architects can reliably determine which choice is best for people without introducing their own biases.
Critics of libertarian paternalism argue that nudging rests on an overly simplistic view of decision making—one in which people are assumed to make mistakes unless guided toward the right choice (Barton & Grüne-Yanoff, 2015; Gigerenzer, 2015).
This leads to a larger ethical concern (which I’ll get into shortly): Nudging doesn’t always just replace individual decision making with external guidance—sometimes it simply replaces one form of motivated reasoning (the decision maker’s) with another (the choice architect’s). But before I discuss some of the ethical issues, let’s first talk about the effectiveness of nudging.
Effectiveness of Nudging
Nudging has been widely adopted across public policy, business, and health interventions, largely because it’s seen as a low-cost, non-coercive way to improve decision making. By subtly shaping the decision environment, nudges can help people save more for retirement, eat healthier, or make more sustainable choices—all without restricting their freedom to choose otherwise.
Yet, nudging doesn’t always work as intended. While some nudges successfully shift behavior, others fail outright or produce unintended consequences (Hauser et al., 2018; Medina, 2020; Tor, 2018). Understanding why nudges work in some cases and not others is key to determining whether they’re truly an effective tool for improving decision making.
Nudges tend to be effective when they align with what people already want to do but struggle to follow through on due to inertia, uncertainty, or procrastination. They work particularly well in situations where:
People lack strong preferences: If a person has no strong opinion, they’re more likely to go along with the nudge (e.g., opting into a default retirement plan).
The nudge removes friction for an action people already want to take: Nudging is especially effective when people want to make a particular choice but fail to follow through due to effort or forgetfulness.
The decision is low-stakes or routine: Small nudges, like calorie labels or rearranging cafeteria food placement, tend to work reasonably well for everyday decisions but less so for deeply personal or high-stakes choices.
Opt-out retirement savings enrollment has been one of the most widely cited nudging successes (Chalmers et al., 2019; Cribb & Emmerson, 2019; Pereira & Afonso, 2017). By making savings the default option, participation rates increase significantly—even though people are still free to opt out. This works because many people intend to save for retirement but fail to take action on their own. The nudge eliminates the friction of enrollment, allowing people to do what they already believe is beneficial.
Another well-documented example is social norm messaging, where people are more likely to engage in a behavior if they believe others around them are doing the same (Goldberg et al., 2020; Mou & Lin, 2015; Zhou et al., 2020). For instance, messages stating that “Most people in your neighborhood recycle” have been shown to increase recycling rates. Since many people are willing to recycle but may not prioritize it, social proof nudges can encourage follow-through.
But, despite some successes, nudging is not universally effective. When a nudge clashes with people’s beliefs, values, or autonomy, it’s more likely to fail—or even backfire. Nudges tend to be ineffective or counterproductive in situations where:
The nudge conflicts with deeply held beliefs or identities: If a nudge conflicts with personal values, people may resist it—even when the behavior is beneficial.
The nudge is perceived as manipulative: If a nudge is too blatant or feels coercive, people may deliberately reject it on principle.
The nudge is not actually in the decision maker’s best interest: Nudges assume what is best for people, but if that assumption is incorrect, the nudge can fail or be ignored.
One example of nudging failure can be seen in public health messaging during the COVID-19 pandemic. Early nudges to encourage social distancing and mask-wearing were met with resistance in some communities, particularly where individual freedom and skepticism of government authority were strong cultural values. Rather than nudging behavior in the intended direction, these interventions triggered pushback—reinforcing opposition rather than reducing it.
Environmental nudges have also faced resistance when they conflict with political identity. In some cases, when people were automatically enrolled in green energy plans, a subset of individuals opted out at higher rates, likely due to skepticism toward the intervention or distrust in the implementing authority (Moncreiff, 2017). Instead of shifting behavior, the nudge backfired by activating resistance.
At its core, nudging assumes that people make poor choices due to cognitive biases and need subtle interventions to correct them. But this assumption overlooks an important reality: People’s choices are often not driven by irrationality—they’re shaped by personal values, priorities, and motivations.
Nudging works best when people are indifferent or already inclined toward the intended behavior. But when a nudge clashes with a person’s strongly held motivations or identity, it’s far more likely to fail or even provoke an effect opposite to the one intended.
This leads to a key limitation of nudging: It’s not a tool for changing minds—it’s a tool for guiding behavior. When nudging is aligned with a decision maker’s preferences (or at least not misaligned), it can be quite effective. When it attempts to override preexisting motivations, however, it often falls apart. But there’s a second concern beyond simple resistance or general ineffectiveness—what happens when nudging works too well?
When people become accustomed to being nudged toward the right decision, they may begin offloading decision making entirely to external entities—whether that’s a choice architect, an AI system, or an automated process. Instead of critically evaluating options, individuals may default to following the nudge without truly engaging with the decision itself.
We see a parallel issue in research on Advanced Driver Assistance Systems (ADAS). Studies show that when drivers rely too much on automated safety features, they can become less attentive to the road (Dunn et al., 2019; Westbrook, 2024). Instead of remaining actively engaged, drivers assume the system will correct their mistakes, sometimes leading to overconfidence and reduced situational awareness.
A similar dynamic could emerge in decision making more broadly. If people are repeatedly nudged toward better decisions, do they remain active decision makers, or do they start deferring to external guidance? Over time, the very interventions designed to help decision makers might reduce their ability—or willingness—to critically assess choices on their own (an issue Gigerenzer, 2015, discussed).
So, we know nudging can be effective in some situations, and it may even be too effective in others, leading to unintended consequences for human decision making. But even if nudging could be an effective tool in a given situation, that doesn’t mean it’s ethical. That’s a different question.
Ethics of Nudging
Even if nudging can be an effective tool for guiding behavior, its ethical implications remain a point of contention. Unlike traditional regulations, which impose clear restrictions, nudges subtly shape behavior while preserving the illusion of free choice.¹ But this raises a fundamental question: If nudging works by steering people toward a specific choice, does it truly respect individual autonomy?
Nudging is often framed as a “soft” intervention—one that merely makes the better choice easier without eliminating alternatives. Yet, the more effective a nudge is, the more it removes the need for conscious decision making. Instead of prompting individuals to think critically, nudges can encourage habitual, automatic responses to pre-structured decision environments (Hansen & Jespersen, 2013).
This becomes an ethical issue when people are unaware they’re being nudged. If a nudge is truly benign, shouldn’t it be transparent? And if people wouldn’t choose a particular option without the nudge, does that suggest the intervention is subtly coercive?
Consider default settings on privacy policies. Many digital platforms automatically opt users into data sharing agreements under the assumption that most will not take the time to adjust their settings. While users technically have the option to opt out, the default exploits inertia and inattention to achieve an outcome that benefits the platform. If nudging is truly about helping people, shouldn’t it be used to enhance informed decision-making rather than bypass it?
Proponents of nudging might argue that this isn’t a true nudge because the intervention clearly serves the company’s interests rather than the user’s. In other words, intent matters.² If the goal of a nudge is to help people make better decisions for themselves, then cases where the primary beneficiary is the entity designing the nudge—rather than the decision maker—should not qualify as true nudging.
However, in many real-world cases, the line isn’t so clear. For example, default retirement savings plans are often seen as a positive nudge. Yet, who determines the ideal savings rate? Is it truly optimized for the individual worker, or for broader economic policy goals (Barton & Grüne-Yanoff, 2015)?
Similarly, environmental nudges—such as default enrollment in green energy plans—have sometimes led to unexpected backlash, with individuals opting out at higher rates when they perceive the intervention as politically motivated (Moncreiff, 2017). This gray area between guiding behavior and exploiting it is at the heart of the ethical debate.
While nudging is often justified as a means of helping individuals make better decisions, it can also reflect the motivated reasoning of those designing the nudge. What choice architects believe is best for the decision maker is shaped by their own biases, values, and assumptions (Grawitch, 2025)—which are not necessarily aligned with those of the decision maker.³
For instance, policymakers who favor public health interventions may believe that nudging people toward certain diet or exercise habits is in their best interest. But individuals may prioritize different values, such as convenience, personal enjoyment, or cultural food preferences. Similarly, pension plan defaults assume that long-term financial security is a universal priority, overlooking the fact that some individuals may have immediate financial needs that take precedence over retirement savings.
Finally, Gigerenzer (2015) raised the issue of what happens when nudges replace education rather than complement it. This concern is particularly relevant in cases where people need to develop long-term decision-making skills, rather than simply being steered toward a single choice. For example:
Financial literacy vs. default retirement enrollment: Nudging people into savings plans ensures higher participation rates, but it doesn’t teach them how to manage money effectively.
Calorie labeling vs. health education: If people reduce calorie intake due to nudging alone, are they making healthier choices, or just responding to a label without understanding broader nutritional needs?
If nudging replaces education, people may become more dependent on external interventions rather than improving their own ability to make informed decisions. This suggests that nudging is not simply about helping people overcome cognitive biases—it may also be about controlling behavior in a way that leaves individuals with less agency over time. And this could contribute to some unintended consequences (which I mentioned earlier).
Ultimately, the ethics of nudging depend on transparency, autonomy, and intent. Some nudges are relatively benign—helping people act on their own intentions by reducing friction. Others, however, may manipulate behavior in ways that serve the interests of the choice architect rather than the decision maker.
This raises several ethical concerns:
When does a nudge become a shove? At what point do decision makers feel pressured into a specific option?
How transparent should nudges be? If a nudge is truly in someone’s best interest, why not make it explicit?
Does nudging lead to passive decision making? If people always rely on nudges, do they lose the ability to critically evaluate choices?
While libertarian paternalists argue that nudging helps people without coercion, critics argue that any intervention that intentionally alters behavior without conscious awareness is ethically questionable (Bovens, 2009). As nudging becomes more pervasive in policy and business, the question is not just whether it works—but whether it respects the autonomy of those being nudged.
Final Thoughts
Nudging is often presented as a gentle, non-coercive way to help people make better choices. Unlike traditional regulations, which impose strict limits, nudges preserve freedom of choice while subtly guiding behavior. Yet, as we’ve seen, nudging is never neutral—it reflects the values and priorities of the choice architect, which may or may not align with those of the decision maker.
At its best, nudging removes friction from decisions people already want to make—helping them follow through on their intentions when inertia or uncertainty might otherwise prevent them. At its worst, nudging replaces active decision making with passive compliance, making people less engaged in their choices over time. The effectiveness of nudging depends on whether the decision aligns with the person’s existing goals or contradicts their deeply held beliefs. When nudges reinforce decisions people already lean toward, they work well. But when nudges attempt to override individual motivations, they’re more likely to fail or backfire.
Beyond effectiveness, the ethical debate around nudging remains unresolved. If a nudge is so effective that few people deviate from it, does that suggest successful guidance—or does it indicate a form of behavioral control that limits meaningful engagement with the decision? If nudging replaces education, does it make decision makers more passive and reliant on external interventions over time? And most importantly, who gets to decide what’s in someone’s best interest—and should they?
Ultimately, nudging raises a deeper philosophical question: Does shaping choices without explicit awareness truly respect autonomy, or is it simply a softer form of control? As nudging continues to expand in policy and business, we must consider not only whether it works—but whether it reinforces decision-making competence or undermines it in the long run.
Footnotes

1. While free choice is technically preserved, increasing technological capabilities now allow for hypernudging, an issue that Mills and Sætra (2022) discussed and one I will be discussing more soon in the context of AI and human decision making.
2. By definition, nudging—as outlined by Thaler and Sunstein—should prioritize the decision maker’s best interest. However, this definition is conceptually problematic because it classifies an intervention as a ‘nudge’ based on intent rather than the actual tactic employed. If the same behavioral intervention is considered ‘nudging’ when it benefits the decision maker but ‘manipulation’ when it benefits the choice architect, then the framework confounds motive with method rather than assessing the tool itself. This parallels a broader philosophical issue in ethics—whether actions should be judged by their intent (deontology) or by their actual mechanisms and effects (consequentialism). The danger here is that by focusing on intent, the nudge concept obscures the reality that the same tactics can be used for both benevolent and exploitative purposes.
3. And so, this raises the question: If a choice architect can reason themselves into believing that their preferred outcome is in the decision maker’s best interest—no matter how many logical gymnastics are required—does that automatically qualify it as nudging? If so, does that mean any intervention can be justified as a nudge as long as the architect convinces themselves of its benevolence? And if not, who gets to make that determination?