Are We All Just Hypocrites?
Kind of—but not for the reasons you think.
This post incorporates and expands on content I previously published on Psychology Today.
You can also hear AI Matt’s summary of the piece below.
What have the Romans ever done for us?
It starts as a protest. It ends as a list. In Life of Brian, a group of would-be revolutionaries denounce the Roman Empire—only to slowly, grudgingly admit that the Romans did, in fact, provide sanitation, roads, medicine, education, wine...
And suddenly the outrage doesn’t feel quite so righteous.
It’s funny because it’s familiar. We often take strong positions—until a different value tugs at us. Then we pivot, recalibrate, and somehow make the new position feel just as principled as the old one.
From the outside, that looks an awful lot like hypocrisy. But what if it isn’t?[1]
What if, instead of being morally inconsistent, we’re just context-sensitive? What if the “double standard” isn’t really a failure of character—but a reflection of which values happen to be activated in that moment?
In this post, I’m going to argue exactly that: much of what we call hypocrisy isn’t about dishonesty or bad faith. It’s about values in conflict, and how we subconsciously prioritize some over others depending on the situation. And yes, that includes the moral high ground… right up until someone reminds us about the aqueduct.
First Up: Moral Values
All of us hold values, but we don’t all assign them the same weight. And even our own values don’t carry equal weight across situations. Whether you’re using Haidt’s Moral Foundations Theory (MFT), Curry’s Morality-as-Cooperation (MAC) theory, or just your own internal sense of “this feels right,” values shape our behavior. But they don’t always show up at the same time or with the same force.
Verplanken and Holland (2002) demonstrated that values are situationally activated—meaning they become more salient in response to the cues present in a given context. The more central a value is to our sense of self, the more likely it is to be activated when it’s relevant to the situation at hand. And once activated, it shapes how we interpret that situation and steers our behavior. If a value isn’t central, even relevant cues may not fully bring it online—especially if something else feels more urgent or compelling in the moment.
Take compassion. If compassion is central to your self-concept, you’ll likely act compassionately in a wide range of situations where compassion could be relevant. But if it’s more peripheral, you might act compassionately in some cases… and not in others, especially if a stronger value—like security or hedonism—gets activated instead.
What that means is that behavior often isn’t driven by fixed moral rules. It’s driven by which values are active in the moment. When the frame of reference changes, different values get activated, and those values bias our decision making—nudging[2] us to interpret and respond in ways that align with whichever values are loudest right then.
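If it helps to see that in miniature, here’s a toy sketch of the activation logic. To be clear, this is my illustration, not a model from Verplanken and Holland, and every value name and number in it is invented. The only point it makes is that the same person, holding the same values, gets steered by different ones as the cues change.

```python
# A toy sketch (mine, not from the papers cited above) of situational value
# activation: activation = centrality to the self x strength of the relevant cues.
# Whichever value ends up most activated gets to steer the interpretation.
# All value names and numbers are invented for illustration.

def activate(centrality, cues):
    """Score how loudly each value 'speaks' given the cues in this situation."""
    return {value: weight * cues.get(value, 0.0)
            for value, weight in centrality.items()}

def dominant_value(centrality, cues):
    """Return the value most likely to steer behavior here, plus all the scores."""
    scores = activate(centrality, cues)
    return max(scores, key=scores.get), scores

# How central each value is to this hypothetical person's self-concept (0-1).
centrality = {"compassion": 0.9, "security": 0.5, "hedonism": 0.3}

# Two situations with different cues (0 = no relevant cue, 1 = strong cue).
charity_appeal = {"compassion": 0.8, "hedonism": 0.2}
dark_parking_lot = {"compassion": 0.3, "security": 0.9}

print(dominant_value(centrality, charity_appeal))    # compassion takes the wheel
print(dominant_value(centrality, dark_parking_lot))  # security wins despite lower centrality
```

The multiplication is doing the work here: a peripheral value barely registers even with strong cues, and a central value stays quiet when nothing in the situation calls for it.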
Let’s use Deci and Ryan’s Self-Determination Theory (SDT) as a context to illustrate this. SDT identifies three basic psychological needs that drive our motivation: autonomy, competence, and relatedness. Depending on whether we’re pursuing autonomy, competence, or relatedness, our mindset—and the values we lean on in that pursuit—can shift. We may be chasing all three at once, but if a conflict arises, one tends to take the lead.
Here’s where it gets messy: we don’t just carry personal values (the more self-interested ones, tied to needs like those SDT describes). We also carry relational values—those that help us get along with others (like those described in MFT or MAC)—and these are the foundations of moral behavior. And so, we pursue our self-interests, but generally in ways that accommodate relational values—because cooperation often serves our self-interests, too.
But when values conflict, the outcome depends on which value is louder in that context. And louder doesn’t mean “more moral”—just more salient.
Let’s Welcome Self-Serving Bias to the Story
We’re generally motivated by our personal values—the ones tied to our goals, needs, and self-interests. But we don’t live in isolation. We navigate a social world, and cooperation often serves our long-term interests, too. So, while we chase personal goals, we usually try to do so without completely violating our relational values.
MAC theory outlines seven relational domains that help structure moral behavior: family, group loyalty, reciprocity, heroism, deference, fairness, and property.[3] Just like personal values, these relational values don’t always agree with each other. And that’s when things get interesting.
Take fairness. If someone you don’t know cheats on a test, you might be quick to say it’s unfair and demand consequences. But if a friend or family member does it? Suddenly the situation is more “complicated.” It’s not that your belief in fairness disappears—it just gets crowded out by another value, like loyalty or kinship.
And when values collide, we tend to resolve those conflicts in a way that protects our sense of self. That’s self-serving bias in action. It’s not that we don’t care about fairness—we’re not condoning what our cousin did. It’s just that, in this particular moment, we really care about our cousin not getting expelled, and so what we consider fair is filtered through that lens.
The strength of the self-serving bias depends on how big the gap is between the competing values. If the conflict between fairness and family is small, your interpretation might only tilt slightly toward your cousin. But if the conflict is large, the scale doesn’t just tip—it crashes down. Suddenly an outcome has to heavily favor your family member before it feels fair to you (see Figure 1).[4]
In short, when the value of loyalty is loud and fairness is just politely raising its hand in the back, your brain doesn’t deliberate—it improvises.[5]
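If you want the “gap” idea in concrete terms, here is another toy sketch (again mine, not a reconstruction of Figure 1, and the linear form and numbers are purely illustrative): the threshold for what feels fair shifts in proportion to how far loyalty out-shouts fairness in that moment.

```python
# Another toy sketch (mine, not a reconstruction of Figure 1): self-serving bias
# as a shift in where "fair" lands, proportional to the gap between the competing
# values. The linear form and all numbers are invented for illustration.

def feels_fair_threshold(fairness_pull, loyalty_pull, bias_rate=0.5):
    """How much an outcome must favor 'our side' before it feels fair.
    0.0 means a neutral outcome feels fair; higher values mean the outcome
    has to tilt toward the person we're loyal to."""
    gap = loyalty_pull - fairness_pull
    return max(0.0, bias_rate * gap)

# A stranger cheats: fairness dominates, loyalty is barely activated.
print(feels_fair_threshold(fairness_pull=0.8, loyalty_pull=0.1))  # 0.0 -> a neutral outcome feels fair

# A cousin cheats: loyalty is loud, fairness is politely raising its hand.
print(feels_fair_threshold(fairness_pull=0.4, loyalty_pull=0.9))  # 0.25 -> the outcome must favor the cousin
```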
Motivated Reasoning: The Sidekick
Self-serving bias explains why we tilt our interpretations to protect our self-image. But that’s not enough on its own. Bias might bend the frame, but we still need a story to make the picture look right. Otherwise, we can’t justify behavior consistent with our bias. That’s where motivated reasoning comes in.
Motivated reasoning is the internal PR team that justifies our choices. It helps us explain why our behavior makes sense—even if we’d criticize someone else for doing the exact same thing. It’s not that we’re lying to ourselves (though sometimes we might be). More often, we’re genuinely applying different standards of reasoning to ourselves than we do to others.
This is what Hale and Pillow (2015) pointed out in their work on moral hypocrisy: it’s not just the behavior that’s inconsistent—it’s the rationale we use to excuse it. We use one logic for them, and another for us. Same behavior. Different story.
And while that sounds like a recipe for shameless double standards, there’s actually a kind of psychological efficiency to it. For the most egregious behaviors—things that violate both personal and relational values—motivated reasoning usually isn’t enough. Most of us can’t logically justify robbing a bank, so our decision not to rob a bank—and our condemnation of those who do—stays perfectly consistent. Not because we’re moral paragons, but because the cost-benefit math doesn’t work. The personal value payoff doesn’t justify the social or legal consequences.
But with more ambiguous (and less illegal) behaviors—cutting corners, stretching the truth, selectively enforcing standards—motivated reasoning has room to operate. It gives us just enough plausible justification to resolve the tension between behavior we might condemn in others and our own self-concept.
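One last toy sketch, with the same caveats (my illustration, invented numbers, nothing from Hale and Pillow): motivated reasoning only gets traction when the behavior is ambiguous enough, given how motivated we are to excuse it, to clear some minimum amount of wiggle room.

```python
# A toy sketch (mine, not from Hale & Pillow): motivated reasoning has "room to
# operate" only when a behavior is ambiguous enough, given how motivated we are
# to excuse it. The threshold and numbers are invented for illustration.

def can_justify(ambiguity, motivation, wiggle_needed=0.4):
    """True if the situation offers enough wiggle room to build a self-serving story.
    Clear-cut violations (low ambiguity) leave no room, however motivated we are."""
    return ambiguity * motivation > wiggle_needed

print(can_justify(ambiguity=0.1, motivation=0.9))  # robbing a bank: False
print(can_justify(ambiguity=0.7, motivation=0.8))  # "rounding up" an expense report: True
```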
And because values are situationally activated, it’s not always clear that a double standard is even happening. It may feel like we’re being consistent—when really, we’re just responding to a different set of cues, with a different value in the driver’s seat, and a story already written to support it.
We might think this only plays out in private, personal moments—but it’s just as common in public discourse. Especially online, where value conflicts, self-serving interpretations, and motivated justifications get compressed into 280 characters and flung into the public square—often with plenty of seemingly hypocritical behavior just waiting to be called out.
We Are All Moral Hypocrites
Online environments make hypocrisy easy to spot—and even easier to perform. When we post, we’re often responding to what’s most salient in the moment: a news story, a trend, a threat to our group, or a chance to score social points. But what’s salient doesn’t always activate the same values every time. And when the value in the driver’s seat shifts, so does the behavior that follows.
From the outside, it looks inconsistent. From the inside, it often feels justified. We’re responding to a different context, with different priorities and different cues. Our reasoning doesn’t feel hypocritical—it feels situationally appropriate. We’re not contradicting ourselves; we’re just applying the same values differently depending on what’s most salient at the time.
That’s the real trick of moral hypocrisy: it rarely announces itself. It hides in the gap between what we say we value and which value wins out when push comes to shove in a given context. And unless we deliberately step back and examine that gap, it’s easy to assume others are being inconsistent while we’re just being reasonable.
This isn’t a call to excuse all double standards. Some contradictions really do reflect bad faith or moral posturing. But not all of them. Sometimes, what looks like hypocrisy is really just the byproduct of competing values—values that don’t always activate at the same time, in the same way, or with the same force.
So next time you catch someone contradicting themselves—online or off—it might be worth pausing before you pounce. Ask yourself what values might be in play. Ask what cues might be making one value louder than the others.
Because in the end, most people aren’t moral hypocrites because they lack integrity. They’re moral hypocrites because they’re human.
[1] If it wasn’t, this would be a really short post.
[2] And sometimes it’s more forceful than a nudge—more of a shove, if you will.
[3] The specific meaning of each isn’t important for our discussion, but Curry (2019) offers a full description of each one.
[4] I also discussed this same issue when explaining why it is unreasonable to expect policy makers to “follow the science”.
[5] Anecdotally, we see this play out frequently in the criminal justice system, where family members of victims and defendants often have very different interpretations of the fairness of an outcome than do those who have no conflict of interest.


