AI Knows the Words, But Not the Music
Why AI’s Fluency Can Be Misleading and How to Make Smarter Use of It
This post is part of the series Exploring AI and Its Intersection with Human Decision Making, and it draws inspiration from two previous Psychology Today posts: one in 2023 and one in 2024. For a more conversational overview, please check out the audio below.
AI is increasingly being integrated into decision making across industries. From healthcare and finance to law and hiring, AI-generated insights shape choices that affect people’s lives. But while AI systems are undeniably powerful, they often sound more authoritative than they really are.
As I outlined in the first post in this series, AI doesn’t actually know anything. It produces outputs based on statistical patterns, not understanding. Yet, because AI’s fluency—its ability to generate structured, coherent, and confident responses—is so high, people often mistake it for an expert [1]. This mistake can lead to very different types of errors depending on who is using AI:
Non-experts might treat AI as an authority, trusting its outputs without verification.
Experts might selectively accept AI’s outputs when they reinforce their own views while dismissing those that challenge them.
In both cases, AI isn’t making people smarter—it’s just amplifying their preexisting tendencies. In this post, I’ll explore why fluency is often mistaken for expertise, how both non-experts and experts can misuse AI in decision making, and how to reduce those risks.
The Non-Expert Problem: AI as a False Authority
For non-experts, AI’s fluency creates an illusion of credibility. If something sounds confident and well-structured, it’s easy to assume it must be correct. This is especially problematic for people who lack the background knowledge to critically evaluate AI-generated responses.
Perhaps an analogy will help here. Imagine you have a neighbor with an enormous personal library—shelves stacked with books on every subject imaginable. But here’s the catch: he hasn’t actually read them. Instead, he’s memorized key passages and can recite them flawlessly. Ask him about quantum mechanics, and he’ll pull out an impressive-sounding explanation. Ask about ancient history, and he’ll quote multiple sources effortlessly. But if you press him with deeper questions—if you ask him to explain contradictions between sources or apply that knowledge in a novel way—he stumbles. He’s not reasoning through the information; he’s just parroting it back in a way that sounds authoritative.
AI works in a similar way. It doesn’t think or know in the way humans do. Instead, it generates responses based on patterns in its training data, assembling plausible-sounding statements without true comprehension. Just like our well-read but shallow neighbor, it can provide fluency without understanding—and that’s exactly what makes it so convincing.
AI-generated responses often mirror the way experts can overreach—fluent, confident, and persuasive, even when inaccurate. Just as experts might step outside their domain and present claims with unjustified certainty, AI generates responses that extend beyond its capabilities while maintaining an authoritative tone.
But there are two differences. First, AI is designed to generate plausible responses automatically—it doesn’t evaluate their accuracy [2]. Second, human experts, despite their flaws, don’t just process information; they selectively filter out what doesn’t matter, assess trade-offs, and adapt their reasoning based on context [3].
This is why fluency can be so misleading: it’s a marker of clarity—not a guarantee of accuracy. Yet, people are more likely to trust information that’s clearly phrased, logically structured, and easy to read (Kauttonen et al., 2020; Wang et al., 2016). As a result, fluency can bias us toward seeing information as accurate and trustworthy—even when it isn’t. LLMs exploit this tendency by prioritizing plausibility over truth, making their outputs sound authoritative regardless of accuracy. This creates a dangerous trap, particularly for those without the background knowledge to critically evaluate AI’s responses.
People often turn to LLMs precisely because they lack deep knowledge on a topic. But that same lack of expertise makes them worse at spotting errors, increasing the likelihood they’ll take AI’s responses at face value rather than questioning them (Schreibelmayr et al., 2023; Shekar et al., 2024). Instead of critically engaging with AI as a tool, many users treat it as a substitute for expertise—accepting its outputs as fact without verification.
The Expert Problem: AI as an Echo Chamber
While non-experts are more likely to trust an LLM’s output outright, experts tend to use AI differently. There are typically two ways experts use AI:
To add nuance to expertise claims.
To reinforce their existing beliefs.
I’ll talk about each of these separately but spend more time on the second, since it poses greater risks to decision making.
Let’s start by discussing the use of LLMs as a tool to add nuance to expertise-based claims. Essentially, this amounts to experts using AI to refine their arguments—seeking additional perspectives or insights they might not have considered.
Use of AI for this purpose generally results from a recognition of the limits of one’s expertise. Experts who use AI to refine their arguments aren’t looking for simple confirmation; instead, they’re seeking alternative perspectives, supporting evidence, or counterarguments to help strengthen their reasoning. In these cases, AI serves as a tool for broadening rather than narrowing their viewpoint.
One way this plays out is when experts use an AI to test their assumptions or challenge their initial conclusions. LLMs can rapidly process large datasets, summarize different perspectives, or even highlight nuances that an expert might have overlooked. For example, a political analyst using Claude to assess public sentiment might ask it to generate arguments for and against a given policy, forcing the analyst to engage with viewpoints they might not naturally consider.
Similarly, researchers might use AI to identify gaps in existing literature or suggest possible explanations for emerging trends. For instance, archaeologists studying ancient trade routes might use AI to analyze satellite imagery and identify previously unknown settlement patterns. While AI can’t replace their expertise in interpreting historical significance, it can help uncover potential sites worth further investigation—refining where and how archaeological expertise is applied.
However, there’s a caveat: AI’s fluency can still create a false sense of comprehensiveness. Even when AI appears to provide a well-rounded response, its outputs are limited by its training data, inherent biases, and lack of true understanding. Experts who rely on AI for nuance still need to critically evaluate its contributions—verifying claims, checking sources, and ensuring that AI-generated insights aren’t subtly reinforcing dominant narratives or missing key contextual elements.
Yet, experts don’t always use AI to refine their thinking. Instead, AI can become a tool for reinforcing preexisting beliefs—a tendency that’s far more problematic. Rather than approaching AI as an exploratory tool, many experts treat it as a confirmation machine, using it to validate what they already believe. This is a hallmark of confirmation bias, where individuals favor information that aligns with their existing viewpoints while downplaying or ignoring contradictory evidence.
One of the biggest factors contributing to AI-driven confirmation bias is AI’s sensitivity to input framing. For example, in preparing for this post, I tested how small prompt changes influenced AI outputs. In separate conversations with Claude, I used the following prompts:
Prompt 1: Support the following argument: Biases harm decision making.
Prompt 2: Support the following argument: Biases aid decision making.
Here’s part of how Claude responded in the harm condition:
Biases harm decision-making in numerous ways:
Biases cloud our judgment by filtering information through preconceived notions rather than objective analysis. When we approach problems with cognitive biases, we often dismiss contradictory evidence or overvalue information that confirms our existing beliefs. This selective processing leads to incomplete understanding and flawed conclusions.
And here’s part of how Claude responded in the aid condition:
Biases do indeed aid decision-making in several important ways:
First, biases serve as cognitive shortcuts that allow us to make decisions quickly. In environments where rapid responses are necessary, these mental heuristics help us process information efficiently without getting overwhelmed by analyzing every detail. When facing a predator in the wild, for instance, an immediate biased assumption of danger is more valuable than a careful, objective assessment.
These responses highlight a critical weakness of LLMs: altering just a single word in the prompt (from “harm” to “aid”) produced dramatically different outputs. This is where motivated reasoning comes into play. If an expert already holds a strong belief on an issue, they’re likely to frame their queries in a way that steers AI toward confirming that belief—whether intentionally or not. The AI doesn’t think or assess arguments the way a human does; it merely constructs plausible responses based on the framing it’s given. In this way, AI serves as an intellectual mirror, reflecting an expert’s biases back at them rather than providing a genuine challenge to their assumptions.
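For readers who want to try this kind of framing test themselves, here is a minimal sketch using the Anthropic Python SDK. The model name, token limit, and truncation lengths are assumptions; substitute whatever model and settings you actually have access to.

```python
# A minimal sketch of the framing test described above, using the Anthropic
# Python SDK. Assumes ANTHROPIC_API_KEY is set in the environment; the model
# name and max_tokens value are illustrative assumptions.
import anthropic

client = anthropic.Anthropic()

def ask(prompt: str) -> str:
    """Send a single-turn prompt and return the model's text response."""
    response = client.messages.create(
        model="claude-3-5-sonnet-latest",  # assumed model alias
        max_tokens=500,
        messages=[{"role": "user", "content": prompt}],
    )
    return response.content[0].text

# The only difference between the two prompts is one word: "harm" vs. "aid".
harm_view = ask("Support the following argument: Biases harm decision making.")
aid_view = ask("Support the following argument: Biases aid decision making.")

print("HARM framing:\n", harm_view[:300], "\n")
print("AID framing:\n", aid_view[:300])
```

Running each prompt in a fresh conversation, rather than one after the other in the same chat, keeps the second response from being anchored by the first.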
An important issue to consider here is the confidence with which experts hold a particular conclusion. Confidence doesn’t just shape how they interpret information—it also influences how they engage with AI in the first place. If experts strongly believe in a conclusion, they may unconsciously frame their input in ways that nudge AI toward confirming their position. If AI contradicts them, they may dismiss it outright, assuming the model is inaccurate rather than reconsidering their stance.
This is where expert overconfidence becomes particularly dangerous. Research suggests that experts display higher confidence levels than non-experts, even when they’re wrong—making it difficult for them to recognize their own limitations (Han & Dunning, 2024).
AI’s fluency reinforces this false confidence, making experts believe they’re engaging with a reliable, unbiased system—even when they’re unknowingly interacting with a tool that simply reflects their own beliefs back at them. The result is a dangerous mix: high confidence in potentially flawed conclusions, backed by an AI that was never designed to critically evaluate the accuracy of its own outputs.
But this presupposes that AI always aligns with an expert’s biases, which isn’t necessarily the case. An LLM might still generate outputs that contradict an expert’s desired conclusion—but rather than prompting reconsideration, this tends to push the expert toward one of two predictable reactions:
The expert may immediately dismiss the AI’s response, assuming it to be inaccurate rather than engaging with it critically.
The expert may continue refining their prompts—not to explore alternative perspectives, but to steer AI toward an answer that better aligns with their preexisting belief [4].
This tendency aligns with evidence showing that experts who have their judgment challenged often dig their heels in—dismissing contradictory arguments rather than reevaluating their stance (Kang & Kim, 2022). These effects are especially pronounced when experts have a strong psychological investment in being right—whether because their identity is tied to their expertise or because their conclusions are shaped by deeply held biases.
Using AI Effectively: Practical Tips for Better Decision Making
At this point, I’ve spent a lot of time discussing how AI can reinforce biases, amplify overconfidence, and mislead non-experts and experts alike. Taken at face value, it might seem like I’m arguing that AI is a disaster for decision making.
But that’s not the case.
AI is an incredibly powerful tool, and when used properly, it can enhance human reasoning, improve efficiency, and even help us make better decisions. The problem isn’t AI itself—it’s how we engage with it. If we use AI without recognizing its limitations, we risk letting it distort our judgment. But if we approach it critically and strategically, AI can be a tremendous asset rather than a liability.
So, how can we actually get value from AI while reducing the risks I outlined earlier?
AI can handle simple, structured tasks well—like summarizing information, drafting emails, or suggesting edits. But when it comes to reasoning through trade-offs or making judgment calls, it’s important to remember that AI doesn’t understand information—it simply generates plausible responses based on statistical patterns. In more ambiguous cases, where errors carry real consequences, AI should act as a first-pass tool, flagging edge cases for human review rather than making the final decision.
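To make that “first-pass” role concrete, here is a small sketch of a human-in-the-loop triage pattern. The classify() placeholder and the 0.85 threshold are hypothetical stand-ins for whatever model and scoring method you actually use; the point is simply that uncertain cases get routed to a person instead of being decided automatically.

```python
# A sketch of a human-in-the-loop triage pattern: the model makes a first pass,
# and anything it is unsure about is flagged for human review rather than
# decided automatically. classify() and REVIEW_THRESHOLD are hypothetical.
from dataclasses import dataclass

REVIEW_THRESHOLD = 0.85  # below this, a person makes the call

@dataclass
class Triage:
    item: str
    label: str
    confidence: float
    needs_human_review: bool

def classify(item: str) -> tuple[str, float]:
    # Placeholder: in practice this would call your model of choice and
    # return a label plus some confidence or agreement score.
    return ("uncertain", 0.50)

def triage(items: list[str]) -> list[Triage]:
    """Run the first pass and flag low-confidence items for human review."""
    results = []
    for item in items:
        label, confidence = classify(item)
        results.append(Triage(item, label, confidence,
                              needs_human_review=confidence < REVIEW_THRESHOLD))
    return results
```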
Blindly accepting an AI’s conclusions (applying the default heuristic, if you will)—treating it as an authority rather than a tool—is unlikely to serve us well, especially in high-stakes or value-driven decisions. Trade-offs, ethical considerations, and contextual nuances are all aspects of decision making that, at least right now, fundamentally require human agency. AI can help us expand our thinking, but human reasoning is still needed to make good use of its input.
We can apply a critical thinking mindset when interacting with AI by doing the following:
Verify before trusting. AI can generate plausible but incorrect responses—sometimes even fabricating sources. If an answer seems off, don’t assume AI has uncovered something new—assume it might be wrong until verified. Cross-check with multiple, independent sources, and if AI provides citations, confirm they’re real and actually support its claim.
Ask for multiple perspectives. Instead of just asking AI for the answer, prompt it to provide arguments for and against a claim to get a more balanced response (a simple prompt template along these lines is sketched after this list).
Use AI to challenge, not confirm, your views. Instead of asking AI to support what you already believe, ask it to generate counterarguments or perspectives you might be overlooking. If an AI’s output aligns with your existing belief, ask yourself: “Do I trust this because it’s right, or because it agrees with me?”
Be mindful of how input framing affects AI’s response. The way you phrase a question can dramatically shape the answer AI gives you. For example, prompting an AI to “support” vs. “evaluate” a claim will often yield different responses.
Don’t let AI’s fluency override your domain knowledge. This is especially relevant when you have expertise in an area. If an AI-generated response contradicts your expertise, resist the urge to dismiss it immediately—but also don’t accept it uncritically.
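One easy way to put the second and third tips into practice is to build the request for both sides into the prompt itself. The template below is just one illustrative wording, not a recommended formula.

```python
def balanced_prompt(claim: str) -> str:
    """Build a prompt that asks for arguments on both sides of a claim,
    plus the strongest objection to each, instead of a single verdict."""
    return (
        f"Evaluate the following claim: {claim}\n"
        "List the three strongest arguments for it, the three strongest "
        "arguments against it, and the best objection to each argument. "
        "Do not state an overall conclusion."
    )

print(balanced_prompt("Biases harm decision making."))
```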
AI is an extraordinary tool for decision making—but only when used wisely.
The goal isn’t to avoid AI entirely or to trust it blindly. Instead, we need to engage with it critically, leveraging its strengths while mitigating its weaknesses. Used well, AI can help us think better, work more efficiently, and make more informed choices. Used poorly, it can lead to overconfidence, misinformation, and a false sense of certainty.
AI won’t replace human judgment—it will either enhance it or distort it. The difference isn’t in AI itself, but in how we use it. That choice is ours.
Notes
1. It’s also worth noting that not everyone engages with AI directly. A sizable portion of the population—perhaps even a majority—distrusts AI and LLMs outright, regardless of their level of expertise. Many actively avoid using such systems, but that doesn’t mean they escape AI’s influence. Given how embedded AI has become in digital content, decision systems, and news recommendations, even those who reject AI can still be shaped by its outputs—often without realizing it.
2. Some “experts” do this automatically as well.
3. Not all experts do this well, but most with true expertise tend to bring more ecological rationality to their decision making than an AI ever could.
4. The conversational nature of LLMs makes steering their responses toward a preferred conclusion far easier than with a traditional internet search. Unlike search engines, which return a range of sources reflecting different perspectives, an LLM generates a single, coherent response shaped entirely by how the prompt is framed. This allows users to refine their prompts—consciously or unconsciously—until the AI produces an answer that confirms what they already believe.