If we are to intervene in what would otherwise happen, we need an idea of not only the benefits of our interventions but also the harms. Otherwise, how will we know when, despite our best intentions, we do more harm than good?
Intervening when we have no idea of the break-even point is "naive interventionism," a phrase first brought to my attention by Nassim Taleb.
In Antifragile, he writes:
In the case of tonsillectomies, the harm to the children undergoing unnecessary treatment is coupled with the trumpeted gain for some others. The name for such net loss, the (usually hidden or delayed) damage from treatment in excess of the benefits, is iatrogenics, literally, "caused by the healer," iatros being a healer in Greek. [Antifragile]
Why would people keep doing something even when the evidence shows it causes more harm than good?
I can think of a few reasons.
The first thing that goes through my mind is incentive-caused bias. What is the incentive for action? Is there an agency gap, where the outcome for the person doing the intervening is disconnected from the outcome for the person experiencing it?
Another big reason: a lack of clear feedback loops between action and outcome. It's hard to know you're causing harm if you can't trace an action to its outcome. This allows, even encourages, self-delusion. Given that we are prone to confirming our beliefs — and presumably we took action because we believed it would help — we're unlikely to notice evidence that contradicts those beliefs. We should be seeking disconfirming evidence about our actions. But we don't, because if we did, we'd realize we are a lot less smart than we think we are.
And the third major contributor, I'd say, is our bias for action (especially what we consider positive action). This is also known as, to paraphrase Charlie Munger, "do-something syndrome." If you're a policy advisor or politician, or heck, even a modern office worker, social norms make it hard for you to say, "I don't know." You're expected to have an answer for everything.
Think about how a typical meeting starts. In response to a new product from a competitor, for example, the first question people usually ask is "What are we going to do about this?" The hidden, unexplored assumption is that you need to do something. It may well be that the cost of doing something outweighs the benefit.
Medicine has known about iatrogenics since at least the fourth century before our era — primum non nocere ("first do no harm") is a first principle attributed to Hippocrates and integrated in the so-called Hippocratic Oath taken by every medical doctor on his commencement day…
The very notion of iatrogenics is quite absent from the discourse outside medicine (which, to repeat, has been a rather slow learner). [Antifragile]
The concept applies well beyond medicine, to any domain where, under the guise of knowledge, we cause more harm than good.
More from Farnam Street...
- The Bed of Procrustes — 20 aphorisms from Nassim Taleb
- Nassim Taleb offers 5 ways to have a great day
- Biology, antifragility, and nature