You Can’t Always Think Your Way to the Right Answer
Why Intuition Works When It’s Calibrated
A previous version of this post was published on Psychology Today.
In Monty Python’s Philosopher’s Football Match, dozens of history’s greatest minds—well, from Germany and Greece anyway—pace the field in silence. The match begins, but no one moves. Plato contemplates. Nietzsche frowns. Heidegger circles back. Only as time begins to run out does anything actually happen: after several agonizing minutes of doing absolutely nothing, Archimedes suddenly shouts “Eureka!” and kicks the ball.
It’s a ridiculous scene—but it captures something surprisingly real about decision making. Sometimes, no matter how much conscious thought we put in, we get stuck. We hesitate. We spin in place. And then, without warning, a solution appears—fully formed and ready to act on.
That’s the story we like to tell ourselves, anyway. A flash of insight in the shower. A melody in a dream. A gut feeling that turns out to be right. These moments feel familiar—almost intuitive, if you’ll pardon the pun. And they fuel a popular belief: that when it comes to tough decisions, the unconscious mind might just know best.
But Newell and Shanks (2023), in an excerpt from their book, Open Minded: Searching for the Truth about the Unconscious Mind, argued that the science doesn’t support these romantic notions. Their central message is as clear as it is contrarian: “There is no free lunch when it comes to tricky decisions; you have to do the thinking.”
I’ll admit: they made a strong case. They walked through the evidence methodically—highlighting shaky findings, debunking eureka stories, and questioning whether unconscious thought actually improves decisions. But as I read through their argument, I kept coming back to a question they never quite answered: What exactly do we mean by “unconscious”?
Because while Newell and Shanks are right to critique the hype, their attempt to cleanly separate intuition from unconscious processing might go too far. Not because intuition is magic—but because the processes that drive it often operate outside of our awareness, even when they don’t involve some hidden “ghost in the machine.”
That’s where their argument runs into trouble, because that blurred boundary between conscious and unconscious is exactly what makes intuition so powerful—and so misunderstood.
Can We Offload Thinking to the Unconscious?
At the heart of Newell and Shanks’ argument is a clear and provocative claim: we can’t offload conscious decision making to the unconscious mind. They take aim at popular notions that suggest otherwise—especially the idea that we might get better outcomes by not thinking, or by letting our unconscious “work on” a difficult problem while we do something else.
This view shows up in two familiar forms: the “go with your gut” argument, and the “sleep on it” argument. The former, made famous by Gladwell’s (2005) Blink: The Power of Thinking Without Thinking, argues that quick, intuitive reactions can outperform slow, deliberative reasoning. The latter proposes that stepping away from a problem—letting the unconscious mind churn in the background—can lead to better solutions than continued effort.
Newell and Shanks reject both of these arguments. Their review of the evidence highlights inconsistent findings across studies: some support unconscious processing, some show no benefit, and some clearly favor conscious thought. And when it comes to the romanticized stories—the shower epiphanies and dream-born solutions—they argue that these anecdotes ignore the base rate. We remember the times a good idea surfaced unexpectedly. We forget the hundreds of times it didn’t.
And even when insight does strike, it’s rarely fully formed. The melody for “Yesterday” may have come to McCartney in a dream, but the song wasn’t ready for the studio—it needed eighteen months of conscious work to become a classic. Kekulé’s dream of the benzene ring? Striking imagery, sure, but not a substitute for testing and elaboration. These stories may reflect an unconscious nudge, but they don’t replace the effortful thinking that follows.
This leads them to their broader point: there’s no “ghost in the machine” quietly solving our problems for us. If tricky decisions require careful analysis, then we’re going to have to do that analysis—ourselves, deliberately. There’s no shortcut.
It’s a well-argued critique of the myth that thinking less will lead to better decisions. But in the process, Newell and Shanks risk overcorrecting. Because in their eagerness to dispel the idea of unconscious cognition as a full-on decision-making engine, they draw a stark line between conscious and unconscious processes—one that doesn’t leave much room for how decision making actually works in practice.
They frame it as either/or: either we’re making a decision consciously, or we’re not making it at all. And so, like many others, they fall into the false dichotomy that stems from dual-process theory. But real-world judgment doesn’t usually follow that clean divide. Especially when it comes to intuition.
Which brings us to the second major thread of their argument: the claim that intuition isn’t unconscious but simply fast recognition.
Is Intuition Just Fast Recognition?
The second major thread in Newell and Shanks’ argument is their attempt to separate intuition from unconscious processing. Intuition, they argue, isn’t some mysterious gift of the unconscious—it’s just fast, experience-based recognition. As they put it, quoting Herbert Simon (1992), “intuition is ‘nothing more and nothing less than recognition’” (p. 40).
And at first glance, that seems reasonable. A firefighter senses that something is wrong before the floor collapses. A seasoned nurse notices a subtle change in a patient’s breathing. An experienced manager makes a snap call that turns out to be right. These examples are often labeled intuition, but they’re better understood as pattern recognition. The situation cues a response because it resembles something we’ve seen before.
But here’s where Newell and Shanks’ argument runs into problems. They acknowledge that intuitive decisions often feel immediate, and they don’t deny that familiarity plays a role. What they argue, though, is that there are no hidden steps between the cue and the response. Intuition, in their view, isn’t driven by unconscious processing—it’s a direct line from familiarity to action. If the situation looks familiar, we respond accordingly. No need to invoke processes happening outside awareness—because, they claim, there are no processes in the first place.
But that’s where the explanation starts to feel too tidy. Just because a response feels immediate doesn’t mean nothing happened to produce it. Recognition doesn’t happen in a vacuum—it relies on memory, retrieval, and comparison to stored patterns. Even if those steps aren’t consciously experienced, they’re still part of the cognitive machinery.
And this is where Simon (1992), whose work Newell and Shanks invoke, offers a useful corrective. While they quote his definition of intuition as recognition, they omit the deeper point he made: “We are aware of the fact of recognition… we are not aware of the processes that accomplish recognition” (p. 155, emphasis in original). In other words, the output may be conscious—we recognize that something feels familiar—but the pattern-matching process that gets us there is not.
In fact, Simon was arguing that intuition is indeed an unconscious process—“the situation has provided a cue; this cue has given the expert access to information stored in memory, and the information provides the answer” (p. 155). He further emphasized that “we do not have conscious access to the processes that allow us to recognize a familiar object or person” (p. 155), meaning that even the encoding of the information that allows recognition to occur operates outside conscious awareness (making it unconscious).
And that’s the crux of the problem.
Recognition doesn’t just materialize out of nowhere. Something in the current context triggers memory retrieval, matches patterns, and produces a feeling or response. That’s a cognitive process. And for most of us, most of the time, that process happens beneath the surface—outside our conscious awareness.
This doesn’t make intuition irrational or magical. It simply means that intuitive recognition draws on mechanisms we don’t have introspective access to. Just because something feels easy doesn’t mean it’s simple. And just because we can identify the output—a gut feeling, a sudden insight—doesn’t mean we were aware of the steps that got us there (Nisbett & Wilson, 1977).
Newell and Shanks want to avoid what they call “magical” unconscious explanations. Fair enough. But in trying to avoid magical thinking, they end up drawing too clean a line—pretending that only deliberate, step-by-step reasoning counts as cognition, and that any process we can’t verbalize doesn’t count.
But intuition doesn’t fit neatly into that binary. It’s not either conscious or unconscious. It’s both. The cue that triggers recognition may rise to awareness. The matching process probably doesn’t. And the output—the intuitive response—might arrive before we have any idea where it came from.
In that way, intuition isn’t some ghostly alternative to conscious thought. It’s a product of experience, processed quickly and often silently, grounded in familiarity but delivered without explanation.
That doesn’t make it infallible. It just makes it fast, efficient, and deeply dependent on prior experience. And all of that sounds a lot like unconscious processing to me.
When Intuition Works
To be clear, none of this means we should blindly trust our gut. Intuition isn’t infallible. But it’s also not useless. The real question isn’t whether intuition is conscious or unconscious—it’s whether it’s calibrated to the situation (Grawitch et al., 2025).
And that’s where expertise comes in.
Research on expert judgment, particularly in naturalistic decision-making contexts, suggests that intuition works best when the decision maker has relevant experience. Gary Klein (2015) argues that intuition emerges from an extensive repertoire of patterns built through direct and vicarious experience. Firefighters, nurses, and military leaders, for example, often make rapid, high-stakes decisions without comparing options, because they’ve learned to recognize familiar cues and respond accordingly. They might not be able to articulate the reasoning behind their decisions, but the response is grounded in well-developed tacit knowledge—mental models, perceptual skills, and situation-specific insights accumulated over time. In these cases, intuition isn’t mysterious—it’s efficient, experience-driven, and deeply dependent on context.
This kind of skilled intuition develops through repeated exposure, structured feedback, and learning in familiar environments. In these situations, intuitive responses can be both fast and accurate, even if the person can’t articulate the underlying reasoning.
But when the domain is unfamiliar or the feedback is noisy, intuition becomes less reliable. Our pattern-matching system can still kick in, but it may rely on misleading cues or irrelevant experience. That’s when deliberate reasoning—or at least slowing down to reassess—becomes more important.
This is where Newell and Shanks are right to raise the alarm. When people assume that defaulting to intuition leads to better decisions—or that the unconscious mind is quietly solving problems while we sleep—they’re likely to overgeneralize from a few successful outcomes and ignore the times intuition gets it wrong.
Still, just because intuitive judgments can fail doesn’t mean we should dismiss them altogether. The more closely our experience aligns with the decision context, the more likely it is that our intuitions will help rather than hurt. And those intuitions, whether we call them unconscious or not, are still built on cognitive processes that operate quietly beneath awareness.
When Stepping Away Helps—And Why That Might Be Unconscious After All
Let’s circle back to the idea that sometimes it helps to “sleep on it” or step away from a tough decision. Newell and Shanks are skeptical of this advice—and understandably so. Unconscious Thought Theory (UTT)—which postulates that your brain is doing deep, structured analysis without you—hasn’t always held up under scrutiny. Replication attempts have yielded mixed results, and the most generous reading is that unconscious thought performs about as well as conscious deliberation in many domains.
But that, as Bargh (2011) pointed out, is precisely what makes unconscious processing worth taking seriously.
The problem isn’t that unconscious thought fails to outperform conscious reasoning. The problem is that critics often expect it to behave like conscious reasoning in the first place. Many critiques of UTT rely on what Bargh calls a “straw man” unconscious—passive, unsophisticated, and disconnected from meaningful cognitive work. But when we broaden our understanding to include goal-dependent automaticity, pattern recognition, and evolved judgment systems, the picture changes.
There’s growing evidence that certain forms of evaluation—like fairness assessments, moral judgments, and even cheater detection—may continue operating after we’ve shifted our attention elsewhere. These aren’t random associations bubbling up from the depths; they’re structured, goal-aligned processes that operate outside awareness. And they’re especially active when the decision context taps into familiar cues or domains we’ve developed some expertise in.
So yes, taking a break can help. But not because your unconscious mind is running regression models while you sleep. It helps because cognitive work doesn’t stop the moment attention shifts. If your brain has already been exposed to meaningful information—and especially if you have a frame of reference for what matters—unconscious processing can continue organizing, weighing, and evaluating possibilities in the background.
That doesn’t make it magic. It makes it efficient. And in situations where mental energy is limited or conscious analysis would lead to overthinking, it may actually be the more ecologically rational option.
The real mistake isn’t in believing unconscious processing exists—it’s in assuming it works like conscious reasoning, or that it applies equally well in every context. In other words, when you step away from a problem, your brain may not be “solving” it in any deliberate sense—but it may still be working. Not by accident, and not magically, but because goal-relevant processing systems remain active even when your attention is elsewhere.
That’s especially true when the decision involves cues or structures you’ve encountered many times before. From all of this, three takeaways stand out:
We’re probably not going to solve complex problems intuitively unless we have enough relevant expertise and experience to rely on—even then, some conscious processing is likely involved.
Trying to overthink an expertise-based intuitive response can lead to worse decision outcomes—we can think ourselves out of a good decision.
There are times when continuing to think about a decision or problem can be disadvantageous—those are the times when taking a break can facilitate the process.
In Monty Python’s Philosopher’s Football Match, it wasn’t a carefully reasoned plan or structured analysis that finally got the ball moving. It was Archimedes, breaking free from overthinking, shouting “Eureka!” and taking action. He didn’t need to explain it—he just recognized the moment and acted.
And while real-world decisions are rarely as theatrical, the lesson holds: thinking is essential, but so is knowing when to stop thinking. Whether insight comes from deliberate reasoning or unconscious processing, the goal is the same—to stop pacing the field and kick the damn ball.
Lest you think I summoned some deep reserve of brainpower to reach that conclusion, rest assured I did not. It was the subtitle teaser text leading into the article itself.