When Fluency Stops Meaning What It Used To
AI, bullshit, and the limits of fluency as evidence
This post is part of the ongoing series, Exploring AI and Its Intersection with Human Decision Making. It’s also Part 1 of a 3-part series on Writing, Thinking, and AI.
You can hear AI Matt’s summary of the piece below.
I don’t usually click on posts that tell me AI is changing how we think. Most of them recycle the same worries[1]—just dressed up in fresher metaphors or anchored to some new research finding that likely doesn’t generalize.
But one Saturday morning on X, a post caught my eye—and quite frankly, caught me off guard[2].
In a short post, Earp (2026)[3] describes how working with AI-assisted manuscripts has changed the way he reads. Not in some dramatic, dystopian sense—but in a subtler, more unsettling way. He’s slowed down. He’s become less impressed by prose that reads fluently and more suspicious of what does—or doesn’t—lie beneath it.
His conclusion: BC (Before ChatGPT), fluency was a useful shortcut for judging argument quality. PC (Post-ChatGPT), it’s worse than useless—it’s misleading. The cues still look the same, but they no longer mean what they used to.
And that felt oddly familiar.
Why Fluency Was Ever a Shortcut
Fluency didn’t become a shortcut by accident. There’s robust evidence that people use the ease of processing information as a cue to its truth and quality—a phenomenon psychologists call processing fluency. Statements that are easier to read or that feel familiar are more likely to be judged as true or credible (Nahon et al., 2021; Reber & Unkelbach, 2010), even when they are not (Hassan & Barber, 2021).
In expert domains like academic writing, fluency usually correlated with effort: clear prose often reflected revision, domain knowledge, and conceptual engagement. It wasn’t airtight evidence, but it was a cue that tracked quality reliably enough to be worth trusting.
That last caveat is important. Fluency was never a guarantee of quality. Plenty of bad arguments have always been written well, and plenty of good ideas have always been expressed awkwardly. Earp is right to note that point, and it’s not a new one. Anyone who’s spent time reading academic journals has encountered papers that sound serious without actually saying much. There’s a reason Harry Frankfurt’s On Bullshit predates large language models (LLMs) by decades[4]—AI didn’t invent the genre.
Still, for a long time, fluency was informative. Writing that read cleanly usually meant someone had wrestled with the material long enough to impose some structure on it. Claims had been revised, hedged, reordered, or abandoned. Definitions had been clarified. Arguments had been stress-tested, if only implicitly, through the act of trying to explain them coherently to someone else.
That process didn’t ensure rigor, but it required effort. Writing that read fluently usually meant sustained engagement with the ideas and at least some familiarity with the field. Not perfect and not in every case. But serviceable—especially when time and attention were limited and you couldn’t unpack every argument from scratch.
In that sense, relying on fluency wasn’t intellectual laziness so much as a reasonable response to bounded rationality—limits on time and attention. Faced with more material than anyone could fully reconstruct from first principles, readers leaned on cues that had historically carried information. Fluency was one of them.
After all, the alternatives were limited. If you wanted to understand an argument, you either had to work through it yourself or rely on other people—reviewers, colleagues, editors—to help identify what mattered. Fluency, when it appeared, usually reflected human effort somewhere in the process.
The important point, though, is that this only worked because the environment supported it. The shortcut held because it was costly to fake.
That cost is gone now.
How AI Changes the Landscape
What AI changes isn’t the need for shortcuts. That need was always there. What it changes is the cost structure that once made certain shortcuts informative.
LLMs are unusually good at producing text that reads as if cognitive work has already been done. They smooth transitions, calibrate caveats, supply plausible structure, and assemble arguments that sound measured and informed. In other words, they’re very good at producing fluent prose—regardless of whether the underlying ideas are coherent, original, or even fully understood.
That marks a fundamental shift in the information landscape: fluency no longer reliably reflects human effort somewhere upstream. It can now be generated cheaply, quickly, and at scale.
This is why the old heuristic breaks. When fluent text required sustained engagement with the material, trusting it made sense. When the same surface qualities can be produced without that engagement, the cue stops carrying the information it once did. The cues look the same, but they don’t mean what they used to. And so, in that respect, AI makes fluent bullshit much easier to produce.
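To make that concrete, here is a toy Bayesian sketch (mine, not Earp’s, and the numbers are illustrative assumptions rather than estimates). Treat fluency as a signal and ask how much it should raise a reader’s confidence that an argument is sound, before and after fluent prose became cheap to fake.

```python
def p_sound_given_fluent(p_sound, p_fluent_if_sound, p_fluent_if_weak):
    """Bayes' rule: how likely an argument is sound, given that it reads fluently."""
    p_fluent = p_sound * p_fluent_if_sound + (1 - p_sound) * p_fluent_if_weak
    return p_sound * p_fluent_if_sound / p_fluent

# Before ChatGPT (illustrative numbers): fluent prose was costly to fake,
# so weak arguments rarely read fluently.
print(p_sound_given_fluent(0.5, 0.9, 0.2))   # ≈ 0.82

# After: LLMs make the fluent surface cheap, so weak arguments
# read fluently almost as often as sound ones.
print(p_sound_given_fluent(0.5, 0.9, 0.85))  # ≈ 0.51
```

Under those made-up numbers, the posterior falls from roughly 0.82 back toward the 0.5 base rate. The cue still appears, but it barely moves a careful reader’s estimate, which is exactly the decoupling at issue.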
It’s tempting to frame this as a story about deception or decline, but that misses the point. AI isn’t introducing bullshit into otherwise pristine systems. As Frankfurt made clear long ago, polished emptiness has always been with us. What AI does is make it easier to standardize and polish that emptiness—so that it clears thresholds it once struggled to reach.

At the same time, the change isn’t only negative. The same tools that can smooth out weak arguments can also help people translate ideas they genuinely understand but struggle to express. AI can lower barriers to entry for those who have something to say but lack the stylistic fluency to say it in accepted ways.
That dual-use reality is precisely why fluency can no longer do the filtering work we once asked of it. Whether a piece of writing reads well now tells us little about how it came to be and even less about what went into it.
The net effect is simple, if uncomfortable: fluency has been decoupled from effort. And that means any heuristic built around fluency has to be reconsidered.
Which leaves readers—and evaluators—in a different position than before.
Reading Without the Shortcut
If fluency has stopped doing the filtering work it used to do, reading has to change—whether we want it to or not.
Because prose can no longer be trusted as evidence that someone has already done the hard thinking, readers lose a convenience they’d been relying on—many of them for decades. The option to skim for coherence, nod along to familiar structures, and assume there’s something solid underneath becomes risky. What replaces it isn’t a new shortcut, but a return to slower, more effortful judgment.
That’s what Earp is describing. He isn’t reacting to AI outputs themselves so much as to what they’ve done to the reliability of surface cues. When fluent writing no longer tells us much about how an argument was produced, the only way to assess it is to reconstruct it. Claims have to be restated in plainer terms. Assumptions have to be made explicit. Conclusions have to be checked against what actually came before them.
In other words, reading becomes more active.
This doesn’t mean every sentence gets equal scrutiny or that every argument is rebuilt from scratch. Constraints haven’t disappeared. But the threshold for trust shifts. Prose that once passed with minimal resistance now invites a second look because fluency alone no longer earns the benefit of the doubt.
There’s an irony here. Much of the concern about AI focuses on how it might make writers lazier or less careful. What gets less attention is how it’s already forcing readers to be more careful[5]. Because the shortcut is no longer reliable, more effort is required to differentiate quality writing from bullshit. And that shift is uncomfortable precisely because it exposes how much we’d been leaning on that shortcut in the first place.
Reading without it feels slower. Less efficient. Sometimes even pedantic. But it also makes it harder for empty arguments to coast on presentation alone. If there’s nothing there, that absence shows up faster once the prose stops doing the convincing for it.
That’s not a solution, exactly. But it’s a change in posture. And it may be the most immediate consequence of AI’s impact on writing that we’re already living with.
Adapting to the New Reality
Earp’s response to this shift is insightful, but it’s unlikely to be universal. Some readers haven’t recalibrated at all. They continue to treat fluent prose as evidence that the hard thinking has already been done, even as the conditions that once supported that inference have eroded. For them, the risk hasn’t gone away—it’s increased.
Others have reacted in the opposite direction. They’ve become skeptical of anything that sounds like it might have been generated or smoothed by AI—em dashes, balanced phrasing, careful hedging, the whole familiar aesthetic. That skepticism can feel like vigilance. But it’s still a shortcut. And like the one it replaces, it’s weak. Treating “AI-ish” prose as a proxy for weak thinking risks dismissing work that’s careful but stylistically conventional, while letting genuinely empty arguments through if they learn to present themselves differently.
Neither response really solves the problem. One ignores the shift altogether. The other substitutes one surface cue for another.
What Earp describes is harder and less satisfying. It involves giving up a convenience without immediately replacing it. Reading more slowly. Reconstructing arguments that once seemed safe to skim. Accepting that fluency no longer earns the benefit of the doubt, but that stylistic suspicion doesn’t earn it either.
That posture isn’t elegant, and it’s certainly not efficient. But it’s a more honest response to the environment we’re now reading in—one where the cues we relied on still exist but offer a lot less utility than they used to.
If you found this post interesting or insightful and want to support my work, consider buying me a coffee.
I’ve written more broadly about related concerns elsewhere, including a piece on AI and the erosion of expertise. Interested readers can find that discussion there.
[2] Thanks to John Peters for reposting the piece. Otherwise, I probably wouldn’t have stumbled onto it.
[3] Yes, it’s a Substack post, but I came across it on X. So in the spirit of transparency, I mentioned where I originally saw it.
[4] While the book version of On Bullshit was published in 2005, the original essay on which it was based appeared in 1986.
[5] Except in academia, where it seems that many of the concerns are that even weak students will be able to produce exceptional writing.