6 Comments
Ruv Draba:

You made a good point about heuristic benefits, Matt, but per your title: aren't cognitive biases just adaptive system effects -- a consequence of finite systems adapting imperfectly to complex and changing environments?

Don't we see similar in (say) evolutionary systems, where some traits are beneficial while others are dominant yet serve no conceivable function? They exist because they're easy to replicate and haven't yet been selected out. (Perhaps some are selected for simply because they're familiar.)

To an information engineer like myself, the interesting question isn't whether adaptive systems have biases (which ones don't?) but how these can be self-detected, evaluated and managed according to current and anticipated challenges. I expect that this will become more pressing over time as Machine Learning is integrated into business decisions.
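
To make that concrete, here's a minimal sketch of one form of self-detection, assuming scikit-learn and synthetic data (the slice and all names are illustrative, not a prescribed method): audit a decision system's outputs across input slices and flag where it systematically leans one way.

```python
# Sketch: self-detection of a decision system's bias by auditing its
# outputs across input slices. Data, model, and slice are illustrative.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=8000, n_features=8, random_state=3)
model = LogisticRegression(max_iter=1000).fit(X, y)

preds = model.predict(X)
# Slice on one feature -- a stand-in for any segment the business cares about.
slice_mask = X[:, 0] > 0
for name, mask in [("feature0 > 0", slice_mask), ("feature0 <= 0", ~slice_mask)]:
    err = np.mean(preds[mask] != y[mask])          # error rate in this slice
    lean = np.mean(preds[mask]) - np.mean(y[mask])  # systematic over/under-prediction
    print(f"{name}: error={err:.3f}, lean={lean:+.3f}")
```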

Business continuity demands consistency, while efficiency demands accuracy. Change the training or architecture too much between ML generations and you get a more accurate system that nobody can now predict. For business risk, that's juggling kittens and chainsaws. (And heaven help the public sector agency that tries to use it for policy implementation when the ML is generationally 'upgraded'.)
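
That continuity risk can be measured rather than guessed at. A minimal sketch, assuming scikit-learn and synthetic data (both model 'generations' are illustrative stand-ins): retrain with a different architecture and report 'prediction churn' -- the fraction of decisions that flip -- alongside accuracy.

```python
# Sketch: accuracy vs. consistency across ML "generations".
# Dataset and both models are illustrative stand-ins.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=5000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

gen1 = LogisticRegression(max_iter=1000).fit(X_train, y_train)
gen2 = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)

p1, p2 = gen1.predict(X_test), gen2.predict(X_test)
churn = np.mean(p1 != p2)  # fraction of decisions that flip between generations

print(f"gen1 accuracy: {gen1.score(X_test, y_test):.3f}")
print(f"gen2 accuracy: {gen2.score(X_test, y_test):.3f}")
print(f"prediction churn: {churn:.3f}")  # the continuity cost of the 'upgrade'
```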

And maybe there's an emerging cultural context too. The humanities have seized on 'cognitive bias' and used it for competitive gaslighting, so that it has become a favoured pejorative in some circles: ideological rock-chucking being preferred to rigorous testing under acceptable tolerances.

That doesn't much help with trust or reason, I expect.

Matt Grawitch:

The points you raised are all valid. The point I was trying to make in my post was that biases represent a leaning or tendency. But there's little evidence to suggest that these biases necessarily result in error, nor that they do so in any sort of systematic sense. A lot of the research supporting the claim that cognitive biases are "systematic errors" comes from studies essentially designed to identify places where biases may lead to erroneous decisions (though most of those are based on laboratory tasks with little real-world significance).

The problem is that what gets published essentially amounts to studies purporting to show such biases, conducted by researchers who designed the studies to do exactly that. This essentially leads to a sampling error. If all we really have is documentation of the times when such biases might adversely affect people's decision making, while we largely ignore all the times they benefit said decision making, then, well, we end up with a biased attitude toward biases.
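
To make that sampling problem concrete, here's a toy simulation (purely illustrative numbers, standard-library Python): a heuristic whose payoffs are balanced overall looks systematically harmful once only its failures enter the record.

```python
# Toy simulation of the sampling error: a heuristic whose outcomes are
# balanced overall looks systematically harmful if only failures get studied.
import random

random.seed(42)

# Each decision: the heuristic yields a payoff drawn from a zero-mean
# distribution -- it helps roughly as often as it hurts.
outcomes = [random.gauss(0.0, 1.0) for _ in range(100_000)]
true_mean = sum(outcomes) / len(outcomes)

# "Published" studies: tasks designed to catch the heuristic failing,
# so only negative outcomes enter the record.
published = [o for o in outcomes if o < 0]
published_mean = sum(published) / len(published)

print(f"true mean payoff of the heuristic:     {true_mean:+.3f}")
print(f"mean payoff in the 'published' record: {published_mean:+.3f}")
```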

Perhaps a bigger issue at play here is that, with the dawn of AI, it is possible for machines to analyze individual tendencies among people and seek to exploit those. So, while biases may not represent "systematic errors" the way dual-process theory positions them (as a between-persons phenomenon), we all possess biases that likely affect our own within-person decision making. And that can be exploited more easily today than ever before.

Ruv Draba:

> This essentially leads to a sampling error.

Exactly, Matt. Biases are artefacts of adaptive learning -- they're almost the point of adaptive learning, in that every weighting is a constructed bias. This is how a complex and changing data-space is condensed into a concise decision-system. We couldn't do that if it contained all the detail.
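
In machine-learning terms that's literal, not metaphorical. A minimal sketch, assuming scikit-learn and synthetic data: a fitted model discards the dataset and keeps only its weightings -- and one of those parameters is conventionally even called the bias term.

```python
# Sketch: a trained model condenses a large data-space into a few weights.
# Those weights *are* the learned biases; the detail is deliberately discarded.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=10_000, n_features=5, random_state=1)
model = LogisticRegression(max_iter=1000).fit(X, y)

# 10,000 examples x 5 features have been compressed into six numbers.
print("learned weightings:", model.coef_[0])
print("intercept (conventionally the 'bias' term):", model.intercept_[0])
```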

Biases produce both efficiency and inaccuracy. It's a trade-off that we're glad to make because decisions have to be taken in limited time-frames, and among humans you also need consensus and meticulous accountability.

It's fine to look at error-rates as one success measure, but from an engineering perspective there's also learning effort, decision time, volatility (where a small change in input produces a big change in the decision), traceability and the effort it takes to improve. That applies equally to human decision systems and machine-learning systems. Anyone working in knowledge-intensive applications should be aware of this -- and here we can think of hospitals and primary care, insurance, traffic control systems, regulators, crime investigation, judicial systems and so on.
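
Volatility in that sense is directly measurable. A minimal sketch, assuming scikit-learn and an illustrative model and perturbation scale: nudge every input slightly and count how many decisions flip.

```python
# Sketch: estimate decision volatility -- how often a small input
# perturbation flips the decision. Model and noise scale are illustrative.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier

X, y = make_classification(n_samples=2000, n_features=10, random_state=2)
model = GradientBoostingClassifier(random_state=2).fit(X, y)

rng = np.random.default_rng(2)
noise = rng.normal(scale=0.05, size=X.shape)  # the "small change in input"

flips = np.mean(model.predict(X) != model.predict(X + noise))
print(f"decisions flipped by small perturbations: {flips:.3%}")
```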

They all feature decision-biases, for better and for worse, and in all of them the need for consensus, transparency, accountability and incremental improvement rather than 'big-bang' change matters. So they'll often tolerate systematic error simply because more accuracy comes with unacceptable costs elsewhere.

> it is possible for machines to analyze individual tendencies among people and seek to exploit those

I'd say inevitable rather than possible, since marketing already did this for market segments without the use of machine learning, while attention algorithms are already doing it individually with stats. ML is just another technology to exploit what's already being exploited.

From an information engineering perspective, none of this is controversial. But psychology inherited a legacy of rationalism in which biases look irrational and therefore dysfunctional. There's also a tendency to worry more about isolated, individual decisions than long-term collective function, despite the fact that humans seldom make decisions in isolation from their peers (every social species leans hard on what its fellows know, and how they behave).

You're right that the discipline needs to catch up, and I've argued that our culture does too. I'm cheering for you here, but some of what you're saying only seems novel because it's a psychologist saying it.

Ash Stuart:

Here's how Anthropic, the makers of the Gen AI model Claude, addressed the question of LLM 'bias' -- something that might be interesting to discuss in this context.

https://www.anthropic.com/research/claude-character

Matt Grawitch:

I appreciate the fact that they acknowledge that they "want them to know they’re interacting with an imperfect entity with its own biases and with a disposition towards some opinions more than others. Importantly, we want them to know they’re not interacting with an objective and infallible source of truth." I actually have a series of posts on AI-related issues that will be coming out after Part 3 of my Evidence-Based series. Some of this is touched on, but not to the extent I may touch on it down the road a bit.

meika loofs samorzewski:

Neither and both.

Biases are individually con-foundational -- 'something' has to compose the body from the terrain -- and so each is always partially selected and thus adaptive.

In humans, who rely more on social learning, there is less selection for dampening the errors (chimps do better than humans at some rote rational gaming set-ups), so we allocate more resources to learning with others, as well as to outsourcing problem-solving into passed-on traditions (recent anthropology is heading into this area at great speed).

It is also possible that, due to this social outsourcing, more and more of our effort at this work of self-domestication results in less direct selection at the individual level, as we cope with what is socially created and maintained across the genetic bottleneck of the individual (so the nuance you are looking for is not to be found in the silo of psychology).

An example of self-domestication is the 'bias' people have toward bad teeth, and indeed weak jaw musculature, because we cook our food and grow soft vegetables. Animals with bad teeth do not survive, not even domestic animals.

When society becomes the terrain we move over to compose ourselves, we will survive by being biased toward being social; all individualism is thus dependent on that success. Individual psychology both is and is not the source.

By the same token, being active in the social arena and aware of the differences between our movements and self-compositions, we will negotiate a way through these biases and, again, solve these issues socially. Science is the biggest and best example of this, and resentful demagoguery a parasite on it.
