3 Comments
Ash Stuart:

It was very enjoyable to read this clear distinction and delineation between risk and uncertainty, something I thought I had an intuitive sense of, but I accept there's value in laying it down explicitly as Matt does in this article.

However, my joy was later tempered by the thought: if only our policy-makers applied these principles explicitly in their decision-making, since their decisions affect so many people in society!

But then I had to tell myself: our politicians are us, so it's better to start by educating ourselves -- the public, the electorate -- to understand and expect such an approach from our politicians. To that end, the ideas in this article deserve wide dissemination.

Matt Grawitch:

Thanks, Ash. When I originally wrote the two pieces for Psychology Today, both were written for the benefit of my students (they requested the risk vs. uncertainty one; the other grew out of some misunderstanding about "unintended consequences"). In neither case do these concepts appear to be obvious (at least in terms of their implications) for decision makers. And I would agree: policy makers would be well served to be more deliberate about considering them (in terms of unintended consequences) and differentiating them (in terms of risk vs. uncertainty).

Ruv Draba:

Thank you for this post, Matt. From a management perspective, risk and uncertainty are everywhere -- especially when you introduce change or are adapting to change. I think industries are getting better at managing them, but it can be subtle.

Your self-driving car example was well made. We can't know how risk will operate until we know how those vehicles will be used. But there's another example where new technology is already in use and the changing risk is not yet well perceived. For interest, here's a case study.

In the US, the Intracoastal Waterway (ICW) is a 3,000-mile inland waterway along the Atlantic and Gulf coasts, joining inlets, saltwater rivers, and canals to create an inland marine highway from Massachusetts through Florida to Texas. The waterway can be heavily trafficked by commercial and recreational craft; it needs periodic dredging, and the deeper channels can shift over time. That can be daunting for new captains of recreational boats, whose chart-reading and piloting skills may be weak and who fear running aground.

For two decades it has been convenient to chart electronically not just ICW conditions but also a potential route along the waterway, especially for nervous skippers. These 'tracks' can be used either manually or with autopilots to help skippers navigate and avoid running aground, so they function as a safety measure for captains with the weakest skills. Both community-generated and commercial versions of such tracks exist.

But used collectively by skippers with a growing number of autopilots, such tracks also focus traffic in both directions along a single line. Because a track is calculated automatically for ease of update, it can also cut corners, directing traffic into oncoming lanes contrary to marine rules. So although each track looks like a safety measure when viewed individually, from a systems perspective it can be a source of hidden and growing risk as more autopilots are adopted by captains who don't know better.

To rusted-on users, this looks paradoxical: how can a well-maintained and well-understood system that was safe to use in 2015 suddenly become less safe in 2025? But in fact, if you wanted to maximise the number of close encounters among weak skippers using popular navigation methods in congested waters, it would be hard to find more of them today than along these tracks. Yet even when the growing risk is pointed out, there's pushback: it's not dangerous until you *prove* it's dangerous. First show me the accident and *then* I'll believe the risk.

That's not really how we want to manage risk.
