I was glad to read this article, Matt. It's a pithy summary that would be good to trot out when chatting to policy analysts, and I want to bookmark it.
But in determining what policy ends up implemented (and how), I think there are social factors that matter more than the psychology of policy-design.
Regardless of how well a policy is designed (and you can do a great job of grounding it in evidence and designing it for robust implementation), policy selection is generally driven by sociopolitical factors. That's patently true in democratic legislatures (notorious for hiding how the sausage is made), but you can equally find the same in business and community policy (think body corporates, social clubs and sporting groups).
Outside academia and the idealised world of policy analysis, I can think of very few examples where policy selection is based purely on evidence of cost-efficacy, even where that evidence exists.
If I had to distil it, I'd say that while policy validity might have a lot to do with robust evidence, consensus-building seldom does.
Or to unpack the mechanisms more, you could look at the political science book *The Dictator's Handbook: Why Bad Behavior Is Almost Always Good Politics* (2011). The 'bad behavior' it describes centres on the deliberate implementation of bad policy. It explains how selection mechanisms work against policy evidence but optimise for cronyism, regardless of what decision-making system is used.
TL;DR: Psychological bias matters; 'small p' political objectives matter more.
I don't disagree with you at all, and if you read more of my posts, you'll see a connection between some of the ideas you mentioned and stuff I have written about in the past. The three posts that preceded this one get into some of that (on ideological bias and in-group/out-group stuff), and the Head article I linked to in this post touches on some of those political issues. A lot depends on which values are predominantly driving the decision - often the 'small p' political objectives are a much stronger driver than being true to what the scientific evidence suggests. Not to mention that scientific evidence is just one source of evidence, with others (like experiential evidence and stakeholder evidence) often being equally if not more important in policy-making decisions (at least in terms of how they're prioritized).
That's interesting, Matt, and thank you for your reply. I look forward to reading more.
Ideology and in-group/out-group dynamics seem to me to be factors where getting them wrong costs consensus, policy reach, impact and longevity, while getting them right adds nothing to keeping policy adaptive and effective.
There's also the murky term 'culture', which I think has a lot to do with making reach, impact and longevity work. While I've never seen a rigorous definition of organisational culture, I think its contribution contains at minimum:
1. Experience in managing similar challenges and trade-offs (e.g. policy objectives vs implementation impediments);
2. Lessons learned in avoiding similar failures;
3. Experience in applicable incentives and enforcement;
4. Experience in similar methods, technologies, stakeholders, context;
5. Established relevant roles, lines of communication and coordination;
6. Some existing, binding mythology that incorporates the venture; and consequently
7. Established language and ontology, which generally capture all the above but often overlook anything else.
Culture, they say, eats strategy for breakfast. The need for so much tacit knowledge, trust and capability does the same with policy. Even if you get it right on paper, this stuff can skew interpretation unrecognisably. There are many policies perfectly viable to write yet utterly impractical for an existing culture to accept and implement.
My thought: cultures skew toward accepting and implementing policies they're comfortable with, even when the reason isn't specifically ideology or in-groups.