Kill Criteria, Confidence Thresholds, and the Case for Teaching Decision Making
Why structure helps—but only if it makes room for emotion, uncertainty, and change.
For something as important as decision making, we spend remarkably little time teaching people how to do it. Logic, probability, and structured reasoning are all essential components of good decisions—so naturally, they’re mostly ignored in formal education. Outside of a few philosophy or statistics courses (if you’re lucky), you’re mostly on your own.[1]
That’s part of what makes Annie Duke’s Alliance for Decision Education such a useful initiative. It takes decision making seriously—and seeks to bring formal decision-making programming into K-12 education. Elements of the framework it relies on were outlined in a recent Big Think article, emphasizing things like estimating probabilities, setting decision deadlines, and defining “kill criteria” in advance to avoid getting stuck.[2]
That kind of structure can be helpful, especially in a world where people often conflate good outcomes with good decisions and persist on paths that no longer serve them. Duke’s emphasis on “kill criteria” and probabilistic thinking is a welcome counter to our natural tendency to double down when things go sideways—what behavioral economists call escalation of commitment.
But the same features that make the framework appealing also expose its limits. Real-world decisions often involve uncertainty that isn’t easily reducible to probabilities, trade-offs between values that can’t be ranked on a single scale, and emotional consequences that outlast the immediate outcome.
So while Duke’s framework is a valuable step in the right direction, we also need to recognize the ways it oversimplifies the messy business of human decision making. It’s not that we need less structure—it’s that we need frameworks that are flexible enough to account for complexity, emotion, and change.
Where the Framework Shines
There’s a lot to like in the structure Duke lays out. Most of us have experienced the paralysis of indecision or the slow drift of sticking with something long after it stopped making sense. The idea of setting “kill criteria” ahead of time—essentially pre-committing to walk away if certain benchmarks aren’t met—is useful. It helps protect us from ourselves, from our tendency to interpret sunk costs as investments rather than expenses we’ll never get back.
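To make the idea concrete, here is a minimal sketch of what pre-registered kill criteria could look like if you wrote them down as code. It is purely illustrative: the project, benchmarks, dates, and thresholds are hypothetical assumptions of mine, not anything Duke prescribes.

```python
from datetime import date

# Hypothetical example: kill criteria defined *before* starting a side project.
# All names, dates, and thresholds are illustrative, not prescriptive.
kill_criteria = [
    {"check": "paying customers", "minimum": 10, "by": date(2025, 6, 1)},
    {"check": "monthly revenue", "minimum": 500, "by": date(2025, 9, 1)},
]

def should_walk_away(results: dict, today: date) -> bool:
    """Return True if any pre-committed benchmark has been missed by its deadline."""
    for criterion in kill_criteria:
        deadline_passed = today >= criterion["by"]
        benchmark_missed = results.get(criterion["check"], 0) < criterion["minimum"]
        if deadline_passed and benchmark_missed:
            return True
    return False

# Six months in: judge against the pre-committed benchmarks, not the sunk costs.
print(should_walk_away({"paying customers": 4, "monthly revenue": 300}, date(2025, 6, 15)))  # True
```

The point isn’t the code; it’s the pre-commitment. Writing the benchmarks down before the emotional pull of sunk costs kicks in is what does the work.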
Perhaps most importantly, the framework pushes people to clarify their values upfront—to think deliberately about what outcomes matter most and which trade-offs they’re actually willing to make. Rather than treating decisions as purely technical or procedural, it draws a connection between what we do and what we care about. That’s a critical step, because it encourages people to make choices that aren’t just logically defensible, but personally meaningful.
That focus on values is complemented by the framework’s emphasis on thinking in probabilities rather than absolutes. That might sound minor, but it’s actually a pretty major shift in perspective. Instead of asking whether something will work, it encourages us to ask how likely it is to work—and what we’ll do if it doesn’t. In theory, that mindset helps people avoid overconfidence, manage expectations, and make more deliberate trade-offs.
Duke’s approach also pushes back against the status quo bias that keeps people stuck. We often treat whatever we’re doing now as the default, requiring a mountain of evidence to justify a switch. But we also have to remember that maintaining the default is itself a choice—and sometimes not a very good one.[3]
These tools—clarifying values, thinking probabilistically, planning for failure, and reevaluating commitments—are vital parts of the decision-making toolkit.
So what’s the problem, then?
Where Things Get More Complicated
The appeal of a structured framework is that it offers a way out of indecision. By clarifying values, setting timelines, estimating probabilities, and identifying “kill criteria,” Duke’s framework pushes people to be more deliberate and less emotionally reactive. And in a world where sunk costs and status quo bias often keep people stuck, that’s no small feat.
But that same structure can also be misleading. It gives the impression that if you just follow the right steps, you can make even the hardest decisions feel clean and manageable. And that works for small-world decisions. But most decisions we make occur in large worlds, where the reality is often a lot messier.
Start with probabilities. Estimating the likelihood of success sounds like a reasonable strategy—until you try to do it in real life. Most people don’t have the domain expertise, feedback loops, or data needed to make meaningful probability judgments. And even if they did, many of the most important decisions we face—changing careers, ending relationships, starting over—happen in large, open-ended contexts where probabilities are guesses at best. What we’re really doing is trying to make sense of uncertainty.
And even if you could estimate those probabilities, there’s another layer of uncertainty we rarely account for: ourselves. As Dan Gilbert pointed out in Stumbling on Happiness, we’re remarkably bad at predicting what our future selves will want or how they’ll feel. We imagine success or failure based on our current preferences, current emotions, and current values—and assume those will stay stable. But they don’t. The person making a decision today isn’t always the one who has to live with it tomorrow. That makes the mental time travel required for good decision making fraught with blind spots, especially when the consequences of a decision stretch months or years into the future.
And that’s before we even get to the deeper challenge: we don’t consult values like items on a menu. Many decision frameworks assume we’re working with a stable, well-ordered set of priorities, just waiting to be applied. But in practice, our values are activated by context, emotion, and salience, as I wrote about here. What feels most important in one moment can receive little more than a passing thought in another. We don't always know which values are most relevant until we’re already in the thick of a decision—or looking back on it with hindsight. That fluidity doesn’t mean values are irrelevant; it means they’re not fixed data points we can plug into a formula.
That’s not a flaw in our reasoning—it’s how values work. The idea of “stable and well-ordered” values makes sense in theory, but in practice, it’s mostly a myth. Even when values do come into focus, they often point in different directions. A single decision can reflect genuine commitments to multiple priorities, each of which feels valid in its own right.
These tensions don’t always resolve cleanly. We might act in alignment with one value and later feel the emotional cost of having failed to consider another—sometimes only realizing what was at stake in hindsight. That’s not irrationality; it’s the human condition. Even the best framework can’t eliminate that potential. And trying to impose clarity on decisions that are inherently ambiguous can backfire. It creates the illusion of precision where none exists—or worse, makes people feel like their struggle to choose is a personal failure, rather than a reflection of the decision’s complexity.
So yes, structure helps—but only if we understand its limits. Decision making is as much about navigating uncertainty, managing competing values, and tolerating emotional discomfort as it is about setting deadlines or estimating odds. Duke’s framework points in the right direction, but progress also means embracing complexity, not trying to engineer it away.
That complexity is also why emotional responses aren’t a bug in the decision-making system—they’re part of how we process trade-offs. Emotions help signal what matters to us, what risks we’re willing to take, and what losses we’re trying to avoid. A purely “rational” decision that leaves someone emotionally wrecked isn’t a good decision. It’s just a decision that ignored part of the problem.
And ignoring those parts doesn’t make them go away. In fact, one of the most common failure points in real-world decisions isn’t a lack of logic or data—it’s the mismatch between what a decision looks like on paper and what it feels like in practice. That’s where things like regret, second-guessing, and backtracking come in. We don’t regret decisions just because they had bad outcomes[4]—we regret them when they violate some part of our identity or priorities that we failed to fully account for or didn’t recognize were important.
None of this is to say that structure is useless or that frameworks like Duke’s aren’t helpful. They are—especially when they help us slow down, reflect, and challenge default assumptions. But if a framework treats decision making as a logic puzzle to be solved rather than a human process to be navigated, it kind of misses the point. Real decisions aren’t necessarily clean. They involve uncertainty, shifting values, and emotional stakes that don’t fit neatly into any formula. A good framework doesn’t eliminate the mess—it only prepares you to face it.
Rethinking What Makes a Decision “Good”
Many decision frameworks—Duke’s included—aim to reduce regret by improving the process. The idea is that if you set clear criteria, estimate the odds, and decide when to walk away, you’re less likely to get stuck or second-guess yourself later. And to a point, that’s true. A structured process can reduce reactivity, surface blind spots, and help avoid defaulting to inaction.
But that structure also relies heavily on one key assumption: that it’s possible to assess probabilities in meaningful ways. And that assumption breaks down in large-world decisions.
In small worlds—think chess moves or poker hands—you’ve got clear parameters, reliable feedback loops, and repeatable outcomes. You can look at historical data, assign base rates, and update as new evidence comes in. But most of life doesn’t work that way. The kinds of choices people agonize over—changing careers, ending relationships, deciding whether to start a business—don’t come with base rates. They involve open-ended contexts, uncertain futures, and outcomes shaped by external constraints and shifting goals.
In those cases, trying to pin down an exact probability doesn’t help—it just gives a false sense of certainty. We’re really just guessing—and those guesses are shaped more by emotion and bias than by careful risk analysis.
That doesn’t mean probability thinking is useless. But it does mean we often need to rethink how we use it. Instead of chasing numerical precision, we can apply conceptual probability standards—like those used in law and policy. Is the outcome more likely than not (preponderance of the evidence)? Is it highly likely (clear and convincing)? Or is it near certain (beyond a reasonable doubt)? These categories help us clarify how confident we need to be to act—without pretending we can assign exact probabilities where none exist. Rough thresholds like these align with how people already feel their way through uncertainty, and they give those feelings enough structure to function as practical heuristics for deciding when to act.
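To show how that might look in practice, here is a minimal sketch in Python. The category labels come from the legal standards mentioned above; the numeric cutoffs and function names are illustrative assumptions of mine, not part of Duke’s framework or any formal legal definition.

```python
# A rough sketch of conceptual confidence thresholds used as decision heuristics.
# The labels mirror legal standards of proof; the numeric ranges are
# illustrative assumptions, not formal definitions.
THRESHOLDS = {
    "preponderance of the evidence": 0.50,  # more likely than not
    "clear and convincing": 0.75,           # highly likely
    "beyond a reasonable doubt": 0.95,      # near certain
}

def confident_enough(felt_likelihood: float, standard: str) -> bool:
    """Ask only whether a rough, felt likelihood clears the chosen bar,
    with no pretense of knowing the 'true' probability."""
    return felt_likelihood >= THRESHOLDS[standard]

# A reversible decision might only need the lowest bar; an irreversible one, the highest.
print(confident_enough(0.6, "preponderance of the evidence"))  # True
print(confident_enough(0.6, "beyond a reasonable doubt"))      # False
```

The useful move here isn’t the numbers; it’s deciding in advance which bar a given decision needs to clear, and matching that bar to how reversible the decision is.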
And they point to a broader shift in how we think about rationality. If you define rationality as mathematical optimization, then yeah—it's often useless in real-world decisions. As Smets (2024a; 2024b) argued in his two-part series, rationality is often treated as a unified ideal, but the term itself is conceptually problematic. Economists, psychologists, philosophers, and educators all use it differently—and often inconsistently.
But if you define rationality as being deliberate about which values matter most, which costs you're willing to accept, and what level of uncertainty you're willing to live with, then it becomes useful. So instead of asking “Is this the rational choice?”, it makes more sense to ask: What values are driving my decision? What risks feel acceptable? What constraints are shaping the options? And what level of confidence feels sufficient to act—and why?
From that angle, good decision making isn’t about rationality in some abstract sense. It’s about making context-aware, value-aligned, and psychologically sustainable choices—ones that acknowledge uncertainty, allow for course correction, and reflect the trade-offs that feel acceptable from your frame of reference.
Wrapping Things Up
Duke’s framework gets a lot right—especially when it comes to planning ahead, setting limits, and avoiding the traps that lead to indecision or regret. Where it runs into trouble isn’t the intent, but the assumptions: that good decisions usually involve quantifiable trade-offs, that probabilities are knowable, and that optimization is always (or even usually) possible.
But those are solvable problems, not fatal flaws. We don’t need to throw out structured decision frameworks—we just need to adapt them to the large, messy, uncertain choices that make up most of life. That means grounding them in values, clarifying confidence thresholds, and using conceptual standards as heuristics instead of chasing false precision.
And Duke’s most important contribution might be the one that’s gotten the least attention here: formal decision education belongs in K–12 every bit as much as reading, math, or science. Teaching kids how to think through uncertainty, weigh trade-offs, and reflect on what matters most isn’t a luxury skill—it’s foundational. Because no matter which path someone takes in life, they’re going to be making decisions. We might as well help them learn how.
1. I’ve regularly conducted informal polls with my graduate students in a Decision Making course I teach twice a year. None of them have ever had a formal decision-making class; it’s rare to encounter any who have had a formal course on logic or argument; and only a few have ever been taught probability outside of advanced statistics courses.
2. Most, if not all, of the elements discussed here are things Duke has written about for adult audiences too. The Alliance is merely her attempt to translate many of those ideas into programming that is accessible for the K-12 audience.
3. I’ll have more to say on this soon.
4. Though that often does come into play.