Discussion about this post

Ruv Draba

You made a good point about heuristic benefits, Matt, but per your title: aren't cognitive biases just adaptive system effects -- a consequence of finite systems adapting imperfectly to complex and changing environments?

Don't we see something similar in (say) evolutionary systems, where some traits are beneficial while others are dominant yet serve no conceivable function? They exist because they're easy to replicate and haven't yet been selected out. (Perhaps some are selected for simply because they're familiar.)

To an information engineer like me, the interesting question isn't whether adaptive systems have biases (which ones don't?) but how those biases can be self-detected, evaluated and managed according to current and anticipated challenges. I expect this will become more pressing over time as machine learning is integrated into business decisions.
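To make that concrete, here's a minimal sketch of one way a deployed classifier's bias could be 'self-detected' and checked against a tolerance. Everything in it is hypothetical: the subgroup label, the prediction rates and the tolerance are stand-ins, and the metric (a demographic-parity gap) is just one of many you might monitor.

```python
# Hypothetical sketch: monitor one bias metric of a deployed classifier
# and flag it when the gap exceeds an agreed tolerance.
import numpy as np

def demographic_parity_gap(y_pred: np.ndarray, group: np.ndarray) -> float:
    """Absolute difference in positive-prediction rates between group 0 and group 1."""
    rate_a = y_pred[group == 0].mean()
    rate_b = y_pred[group == 1].mean()
    return abs(rate_a - rate_b)

rng = np.random.default_rng(0)
group = rng.integers(0, 2, size=1000)   # hypothetical subgroup label per case
# Simulated predictions: group 1 receives positive predictions more often.
y_pred = (rng.random(1000) < np.where(group == 0, 0.30, 0.45)).astype(int)

TOLERANCE = 0.10                         # assumed acceptable tolerance
gap = demographic_parity_gap(y_pred, group)
verdict = "flag for review" if gap > TOLERANCE else "within tolerance"
print(f"parity gap: {gap:.3f} -> {verdict}")
```

The point isn't this particular metric; it's that 'managed according to acceptable tolerances' implies something measurable and monitored, not just argued about.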

Business continuity demands consistency, while efficiency demands accuracy. Change the training or architecture too much between ML generations and you get a more accurate system whose individual decisions nobody can predict any more. For business risk, that's juggling kittens and chainsaws. (And heaven help the public sector agency that tries to use it for policy implementation when the ML is generationally 'upgraded'.)
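As a rough illustration of that consistency/accuracy trade-off, here's a minimal sketch comparing two 'generations' of a model: it measures each generation's accuracy and the fraction of individual decisions that flip between them. The dataset and model families are assumptions for illustration (scikit-learn, synthetic data), not anything from a real deployment.

```python
# Illustrative sketch: accuracy gain vs prediction churn between model generations.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=5000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# "Generation 1" and "generation 2": same task, different architecture.
gen1 = RandomForestClassifier(random_state=0).fit(X_train, y_train)
gen2 = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)

pred1, pred2 = gen1.predict(X_test), gen2.predict(X_test)

acc1 = accuracy_score(y_test, pred1)
acc2 = accuracy_score(y_test, pred2)
churn = np.mean(pred1 != pred2)   # fraction of cases where the decision flips

print(f"gen1 accuracy: {acc1:.3f}")
print(f"gen2 accuracy: {acc2:.3f}")
print(f"prediction churn between generations: {churn:.3f}")
# Even if gen2 is more accurate overall, a high churn rate means many
# individual decisions changed -- the continuity problem described above.
```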

And maybe there's an emerging cultural context too. The humanities have seized on 'cognitive bias' and used it for competitive gaslighting, so that it has become a favoured pejorative in some circles: ideological rock-chucking being preferred to rigorous testing under acceptable tolerances.

That doesn't much help with trust or reason, I expect.

Ash Stuart

Here's how Anthropic, the makers of the Gen AI model Claude, addressed the question of LLM 'bias'. It might be interesting to discuss in this context.

https://www.anthropic.com/research/claude-character

