3 Comments
Ash Stuart

There's much debate these days about LLM 'explainability', the expectation that Generative AI models should be able to perfectly explain every bit of their actions -- when we humans cannot accurately explain our own behavior.

Matt Grawitch

As I started writing this post and discovered some of the research on how we tend to bullshit explanations for decisions/conclusions we did not consciously script out, I realized we have a lot more in common with some of these LLMs than we necessarily recognize. Consider that they aren't conscious, so they have no actual conscious reasoning capacity. Thus, everything is essentially intuitively derived (if we apply the term we use for nearly instantaneous responding).

Ash Stuart

I agree, Matt. I’ve made a similar point once or twice in my AI Conc articles: we tend to hold AI to rather high standards where we ourselves fall short in so many ways. As you say, it’s quite amazing what these LLMs can come up with despite their caged-hen existence.
