The Limits of Rationalism
What the pursuit of perfect reasoning reveals about the messy, un-engineerable nature of human judgment.
Imagine a group of people who decide they want to make better decisions. Not just marginally better, but systematically better—free from bias, guided by evidence, always updated in light of new information. They meet up to practice spotting cognitive pitfalls. They run workshops on how to weigh options more clearly. They even invent clever exercises with names like “double crux” and “Murphyjitsu,” designed to sharpen their reasoning and keep them from fooling themselves.
There’s no need to imagine such communities, because they already exist.[1] They call themselves rationalists, and at first glance, this sounds like a noble project. Who wouldn’t want to think more clearly and to eliminate those cognitive biases we’ve been told—over and over—lead us to make errors?
And certainly, I wouldn’t be one to criticize the idea of making better decisions. After all, I teach a graduate course that tries to help students better understand their own decision making and how to improve it. And that course teaches some of the same ideas and similar tools to those taught within the rationalist community.
Now you’re probably wondering what the problem is here. Well, when you try to build a whole worldview around pure rationality, something curious happens. The more you chase the ideal of perfect reasoning, the more you collide with the limits of human cognition—and the less rational the whole enterprise begins to look. Because it turns out the rationalist ideal has some of the same problems as other grand intellectual projects: inspiring in principle, impossible in practice.
What Is Rationalism, Anyway?
Before we go too far, it’s worth pausing to say what the “rationalist community” actually is. The name itself can sound lofty or even a little presumptuous—as if everyone else is wandering around in a fog while they alone have found the light. In practice, though, it’s more down-to-earth: a loose network of people who want to apply ideas from probability theory, behavioral economics, and cognitive psychology to everyday life.
And several such communities exist, largely growing out of Eliezer Yudkowsky’s writings on AI risk and rationality in the mid-2000s. His essays, later compiled into The Sequences, became the intellectual backbone of the movement, laying out a worldview that mixed Bayesian reasoning, cognitive bias awareness, and a sense of existential urgency about artificial intelligence. In 2009 these writings coalesced into LessWrong, an online forum that became the hub for discussion and debate. From there, groups like the Machine Intelligence Research Institute (MIRI) and the Center for Applied Rationality (CFAR) spun off, turning the community’s ideas into research programs and training workshops.
Their core principles can be summed up pretty simply:
Applied Bayesianism: Treat beliefs as probabilities and update them as new evidence comes in (to establish accurate beliefs, or what rationalists refer to as epistemic rationality).
Bias awareness: Learn the long list of cognitive pitfalls psychologists have identified and try to avoid them.
Instrumental rationality: Focus not just on having accurate beliefs, but on making better decisions in practice.
From these basics flows a whole ecosystem of exercises and workshops. CFAR training sessions, for example, have participants practice things like “double crux” (a method for finding the key disagreement in an argument), “Murphyjitsu” (a premortem exercise for anticipating failure), and “trigger–action plans” (habit-forming if–then rules). Much of this borrows from legitimate psychology, behavioral science, and decision theory, though often reframed in the community’s own jargon.[2]
Taken at face value, this is a reasonable project. Who wouldn’t want to make fewer mistakes, argue more productively, or follow through on goals more reliably? I use some of these same techniques in the graduate course I teach on decision making. But when rationalism becomes more than a toolkit—when it becomes the foundation for a community, an identity, even a worldview—that’s when the problems begin to show.
When the Ideal Conflicts with Reality
There’s nothing wrong with the idea that we should understand our biases or look for ways to improve decision making. What the rationalist community often does, though, is take useful concepts and inflate them into universal formulas—which leads to predictable problems when they’re applied wholesale. The core principles sound straightforward enough, but each runs into trouble in practice.
Applied Bayesianism. In principle, updating beliefs as evidence accumulates is a sound idea. In practice, you have to start with a “prior”—and in real-world settings that prior is usually subjective, intuitive, or just pulled out of thin air. Statistically, a common heuristic is to default to 0.50 (a coin flip) when you don’t know the base rate.[3] But that doesn’t magically solve the problem; it just glosses over the fact that you’re guessing. Rationalists talk about the math as if it guarantees rigor, but without a stable, defensible prior, you end up with the appearance of objectivity for something that is, quite frankly, a subjective estimate most of the time.
Now, rationalists would argue that this criticism misses the point—that Bayes’ rule isn’t about getting the prior “right” but about updating consistently from wherever you start. And to be fair, we are natural Bayesians: people do tend to adjust their judgments in light of new information. But that doesn’t remove the subjectivity from the starting point, or from the way we update (or fail to update) once evidence arrives. The strength of our biases often shapes both where we begin and how much weight we give to new data.
Rationalists like to emphasize that evidence will eventually wash out the subjectivity, but that assumes evidence is clear, reliable, and abundant. In practice, it’s messy, selective, and filtered through interpretation. So while Bayes’ rule works beautifully in idealized conditions, in the real world it risks becoming an exercise in multiplying guesses by guesses.
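To make the point concrete, here’s a minimal sketch of a single Bayesian update on a yes-or-no hypothesis (the numbers are purely invented for illustration; this isn’t code from any rationalist workshop). Notice how much of the final answer is simply the prior you happened to start with once the evidence is weak:

```python
# A toy illustration (invented numbers) of how an arbitrary prior can
# dominate a Bayesian "estimate" when the evidence is weak.

def posterior(prior, likelihood_if_true, likelihood_if_false):
    """One application of Bayes' rule for a binary hypothesis."""
    numerator = prior * likelihood_if_true
    denominator = numerator + (1 - prior) * likelihood_if_false
    return numerator / denominator

# Everyone sees the same weak evidence: only slightly more likely
# if the hypothesis is true than if it is false.
evidence = dict(likelihood_if_true=0.6, likelihood_if_false=0.5)

# But each person starts from a different guessed prior.
for prior in (0.05, 0.50, 0.90):
    print(f"prior={prior:.2f} -> posterior={posterior(prior, **evidence):.2f}")

# Prints roughly 0.06, 0.55, and 0.92: the updating is real, but the
# answer is still mostly the number you pulled out of thin air.
```

The arithmetic is perfectly consistent in every case; what differs is the guess each person fed in at the start, which is exactly the subjectivity the math is supposed to have laundered away.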
Bias awareness. Knowing about biases can be useful, but only when applied carefully. Biases are nothing more than decision-making tendencies—leanings of the mind that orient us toward some options over others (e.g., a tendency to rely on anchoring, a tendency to weigh recent experiences more heavily). But as I’ve argued before, the notion that biases are universally associated with error is simply wrong—there isn’t even solid evidence that they cause errors most of the time. In fact, biases often serve an adaptive function: they give us a starting point, reduce cognitive load, and help us make satisficing choices more efficiently.
They also guide us in line with our values: when we make decisions, our underlying beliefs and preferences act as biases, orienting us toward choices consistent with those values. That’s not a flaw—it’s a fundamental feature of how human decision making works.
Rationalists might counter that decades of behavioral research show biases leading to systematic errors—so awareness and correction are necessary. But most of those findings come from highly controlled lab tasks where the “correct” answer is predefined, not from real-world environments where speed, satisficing, and value alignment matter. In practice, the same “bias” that looks like an error in the lab often produces accurate, efficient judgments outside it. What matters isn’t eliminating biases but understanding when they’re adaptive and when they’re misapplied.
Instrumental rationality. The desire to make better decisions in practice is laudable. But here again, rationalist methods often reframe subjective judgments as if they were purely mechanical. Exercises like “double crux” or “Murphyjitsu” can sharpen reasoning—but they don’t eliminate the fact that decisions are made in social, emotional, and ecological contexts. We don’t make choices in a vacuum; we make choices based on a frame of reference.
And that matters, because the “right” decision often depends as much on the context within which the decision is being made as it does on strict logic. A career move that looks “ideal” in terms of salary might be turned down if it undermines family life. A treatment plan that’s statistically most effective might be rejected if it conflicts with a patient’s values. Even everyday choices—like whether to confront a colleague—can hinge less on maximizing outcomes than on preserving trust or signaling respect.
Now, a committed rationalist might argue that these examples reflect irrationality—that rejecting a higher salary or a statistically superior treatment based on our values is irrational. But that objection assumes the only thing that matters is maximizing one metric. In reality, values, relationships, and social trust aren’t noise to be eliminated; they’re part of the decision criteria. What looks like “irrationality” from a narrow frame is often rational once you acknowledge the full context in which human beings actually live and make choices (Smets, 2024; 2025).
Taken individually, these principles have some merit. Bayes’ rule, bias awareness, and structured reasoning exercises can all be useful tools. But when they’re inflated into a totalizing framework—a universal recipe for how humans should think—that just doesn’t work very well.
Real-world decision making is messier, more context-dependent, and more value-laden than the rationalist ideal allows. And when that ideal collides with reality, something interesting tends to happen: rather than producing perfectly rational actors, communities organized around “pure rationality” start to resemble something else entirely—groups bound together by a shared creed, their own jargon, a sense of being in possession of a higher truth, and a general contempt for non-adherents. In other words, they begin to look less like scientific communities and more like quasi-religious ones.
When Rationality Becomes a Social Identity
It’s one thing to critique the rationalist ideal on intellectual grounds. But the more striking problem isn’t conceptual—it’s social. When you build a community around “pure rationality,” it rarely stays a neutral toolkit. It becomes an identity. And then the same tenets that seem neat in theory start to warp the community in practice.
Take Bayesian reasoning. In principle, it’s about updating beliefs with evidence. But in the rationalist community, it often takes on a different role: it becomes a way of dressing up subjective priors—sometimes rooted more in fear or imagination than in data—with a veneer of mathematical rigor.
The movement’s history itself illustrates this. Concerns about an existential AI threat were cast as Bayesian estimates, but the “priors” behind them often came less from empirical baselines than from sci-fi anxieties—think Skynet by another name. As Caplan (2017) pointed out, this pattern shows up repeatedly: scenarios borrowed from speculative sci-fi get inflated into near-certainties once filtered through the rationalist frame.
The same dynamic appears in bias awareness. The community often treats bias as something to be purged, but one of its most consistent biases is toward utilitarianism (or consequentialism). And when that bias gets paired with a hyper-rational style—stripped of other ethical commitments like respect for human dignity—it can justify all kinds of motivated reasoning.
That’s the path that leads, at the extreme, to the kinds of strange offshoots and cult-like subgroups outsiders now point to—the “Zizians” being only the most colorful example. But rationalist ideology also appears to have been deeply influential in the murder of the UnitedHealthcare CEO, where the alleged killer cited rationalist-inspired arguments about maximizing good and reducing suffering. In that case, a framework that began as an intellectual exercise in “bias removal” ended up functioning as a moral license for violence (Lin & Jarvis, 2024).
And instrumental rationality, rather than keeping decision making grounded, can tilt toward ritualization: structured exercises like “double crux” or “Murphyjitsu” becoming less about sharpening thought and more about reinforcing group identity. What begin as tools for clear thinking start to function like liturgies—practiced repeatedly, often in group settings, with their own specialized jargon and scripts.
The irony is hard to miss: the pursuit of pure rationality ends up producing its own set of biases, priors, and rituals—just under a different banner. At that point, what’s on display isn’t the elimination of bias but the construction of a new one: a bias toward methods that signal rationality, regardless of whether they actually improve judgment. In other words, rationalist exercises risk becoming less about making better decisions and more about belonging to the kind of community that believes it makes better decisions.
It’s no wonder, then, that outside observers have noticed something religious in all this. As Sargeant (2018) wrote in an issue of Plough Quarterly (a Christian-oriented publication), rationalism can look a lot like a “simulated religion”: it borrows the trappings of faith—rituals, shared language, an overarching sense of meaning—but replaces God with Reason. To a Christian audience, this resemblance is obvious, but it’s striking that even a movement built on rejecting religion seems to re-create its social functions almost wholesale.
When Rationality Starts to Look Like a Cult
This critique of the movement isn’t mine alone. Journalists and observers have pointed out that, far from inoculating themselves against the pitfalls of groupthink or ideology, rationalists often seem unusually prone to them.
Brennan (2025) wrote an in-depth piece on “rationalist cults” for Asterisk Magazine, pointing out the many parallels between certain rationalist sects and more recognized cults. What he found echoed the dynamics I’ve described: speculative priors that take on the weight of dogma, ethical reasoning that drifts into utilitarian absolutes, and community rituals that start to look more like liturgies than decision-making tools.
He also noted the familiar social architecture that often emerges: groups orbiting around a charismatic leader, norms that encourage insulation from nonmembers, and a shared sense of privileged insight into how the world really works. Combined with the rituals and jargon, those features make the resemblance to recognized cults more than superficial.
Not every encounter with rationalist ideas slides in that direction, of course. As Kahn (2016) noted in her profile of CFAR, many participants were simply earnest in their desire to make better choices. She herself admitted that some of the CFAR exercises helped her spot cognitive pitfalls in her own life. For the casual rationalist—someone who dips into online discussions, reads LessWrong essays, or attends a weekend workshop—the appeal is mostly intellectual and the risks of cult-like capture would seem to be minimal.
But the dynamic shifts when rationalism becomes a way of life. In places like the Bay Area, whole communities have coalesced around rationalist principles, complete with shared housing, local meetups, and quasi-utopian ambitions. It’s in those contexts that the cult parallels grow most pronounced. And the dangers, as Brennan and others warn, are no different from those that accompany any other ideology once it becomes closed, insular, and self-reinforcing.
Why Rationalism Recreates the Very Things It Rejects
The deepest problems with rationalism aren’t about the math or the exercises—they’re about what gets excluded. Two patterns stand out.
First, rationalists often reject the very things that make us human: biases, values, emotions, motivated reasoning. These aren’t bugs to be eliminated; they’re features of how people think and choose. Trying to purge them doesn’t make them disappear. It just leaves people pretending they’ve transcended their own psychology. And the result can be decision makers who look more like the cold, calculating AI they claim to fear.
Second, the community often treats utilitarianism as the ultimate moral framework. By focusing narrowly on consequences and “the greater good,” they risk sidelining other ethical commitments—especially respect for human dignity. The combination of rejecting human psychology on the one hand and elevating consequentialism on the other creates a dangerous illusion: that their reasoning is purely objective.
Put those two tendencies together and you get a troubling possibility: a community convinced that whatever it decides in the name of the greater good must be correct, because it’s been scrubbed of “bias.” That’s not a recipe for error-free decision making. It’s a recipe for overconfidence dressed up as objectivity. And at that point, the rationalist dream starts to look like Skynet, but with some made-up Bayesian priors.
And when a community builds its identity around those twin illusions—of transcending human psychology and of holding the one true moral framework—it’s not surprising when it begins to look cult-like. The conviction that “we alone see clearly” is the seed from which charismatic leaders, rituals, and insider–outsider boundaries inevitably grow.
Superforecasting: The Crown Jewel of Rationalism?
When I first set out to write this section, my plan was to show that rationalism does have a place, and that superforecasting—popularized by Tetlock and Gardner in a book of the same name—was that place. Here, finally, was a practice that seemed to deliver: rather than debating abstractions, superforecasters put numbers on their beliefs, tested those numbers against reality, and refined their methods over time. Perhaps it offered a way to discipline judgment—yielding probabilities of outcomes and leaving it to human decision makers to decide, for example, whether a 30% chance of something is high enough to act on.
But the deeper I dug, the more I realized superforecasting may be more smoke and mirrors than anything profound. The math can look precise, but in practice, it often rewards triviality and misses what really matters. And here’s why:
Expert boldness ≠ accuracy. Tetlock’s original and most valuable finding was that experts are usually wrong, in part because they’re rewarded for making bold, eye-catching predictions (who wants to predict that the status quo will remain?). The very incentives that make them visible also make them unreliable (Recht, 2023).
Crowds do almost as well. A ClearerThinking study showed that aggregated forecasts from Amazon Mechanical Turk workers were only slightly less accurate than professional superforecasters (Moore, 2018)—suggesting that “elite” forecasting skill may be mostly wisdom of the crowd plus some careful scoring.
Calibration hides failures. The Brier scores that measure forecasting accuracy reward being “well-calibrated” across many questions. That means you can be mostly right about trivial events and still score well even if you miss the big, high-impact ones—like Brexit, Trump, or the early trajectory of COVID-19 (Hendrix, 2023; Jeremiah, 2020). I’ve sketched a quick illustration of this just after this list.
Short horizons only. Even Tetlock admits accuracy decays quickly as the time horizon expands (Richey, 2025). Superforecasting might work for “what will Congress do in six months?” but not for “will AI destroy humanity in 50 years?”
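Here’s a quick worked example of the calibration point above (the forecasts are made up; this isn’t data from any actual tournament). A Brier score is just the mean squared difference between the forecast probability and the 0-or-1 outcome, so lower is better:

```python
# A toy illustration (invented forecasts) of how averaging across many
# easy questions can hide one badly blown, high-impact call.

def brier_score(forecasts, outcomes):
    """Mean squared error between probability forecasts and 0/1 outcomes."""
    return sum((f - o) ** 2 for f, o in zip(forecasts, outcomes)) / len(forecasts)

# Nineteen easy "status quo holds" questions: forecast 95%, and it holds.
forecasts = [0.95] * 19
outcomes = [1] * 19

# One Brexit-style shock: the forecaster says 15%, and it happens anyway.
forecasts.append(0.15)
outcomes.append(1)

print(f"Overall Brier score: {brier_score(forecasts, outcomes):.3f}")  # about 0.038
print(f"Score on the shock alone: {(0.15 - 1) ** 2:.3f}")              # roughly 0.72
```

An overall score of about 0.04 looks excellent, even though the forecaster missed the one event anyone actually cared about.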
Both Jeremiah (2020) and Recht (2023) offer excellent critiques. Meanwhile, Richey (2025) has been expanding his critique in serial form, releasing chapters on his Substack. Finally, Hendrix (2023) offers a bit more of a tempered critique (accompanied by lots of visuals, if you like that sort of thing).
In the end, it seems that predicting the future is fraught with error—unless your prediction is simply that tomorrow is likely to look much like today. That may be useful in some cases, but it’s hardly the sweeping triumph rationalists often imagine.
So, Where Does That Leave Rationalism?
For all the math, exercises, and clever jargon, rationalism doesn’t escape the basic constraints of human judgment. Biases, values, emotions, and social context are part of what makes us human decision makers. When rationalists dismiss these elements, they risk recreating exactly what they fear: rigid dogma masquerading as objectivity.
The movement’s “crown jewel,” superforecasting, turns out to illustrate the same problem. At best, superforecasting teaches us that most experts are wrong, that crowds can be surprisingly wise, and that prediction is more difficult than it looks. At worst, it tempts us into believing the future is tame enough to be reduced to tidy probabilities.
None of this means rationality has no value. Tools like probabilistic thinking, awareness of biases, or structured debate exercises can sharpen judgment in the right contexts. But those tools are most powerful when they’re used with humility—when we admit that context, uncertainty, and human messiness don’t vanish just because we’ve put numbers on them.
In the end, the promise of rationalism isn’t that it can free us from the limits of human reasoning. It’s that it can remind us of those limits, and help us navigate them a little more carefully—without pretending they can be engineered away.
[1] It would be inaccurate to argue that there’s one central rationalist community. The term describes a broad, diverse collection of groups united by a shared interest in rationality, but with varying cultures and beliefs.
[2] One could argue that the use of community-specific jargon makes a lot of what is discussed and how it’s discussed inaccessible to those outside the community. I suppose it’s possible this is intentional, but I can’t say one way or another.
[3] What’s especially ironic here is that heuristics are often lumped together with biases and treated as inherently error-prone. Yet when faced with unknown priors, rationalists rely on a heuristic themselves. In other words, the very tools they’re quick to dismiss end up sneaking back in through the side door.