If we are to intervene in what would otherwise happen, we need an idea of not only the benefits of our interventions but also the harm. Otherwise how will we know when, despite our best intentions, we cause more harm than we do good?
Intervening when we have no idea of the break-even point is "naive interventionism," a phrase first brought to my attention by Nassim Taleb.
In Antifragile, he writes:
In the case of tonsillectomies, the harm to the children undergoing unnecessary treatment is coupled with the trumpeted gain for some others. The name for such net loss, the (usually hidden or delayed) damage from treatment in excess of the benefits, is iatrogenics, literally, "caused by the healer," iatros being a healer in Greek. [Antifragile]
Why would people keep intervening even when the evidence shows their intervention is causing more harm than good?
I can think of a few reasons.
The first thing that goes through my mind is incentive-caused bias. What is the incentive for action? Is there an agency gap, where the outcome for the person doing the intervening is disconnected from the outcome for the person experiencing it?
Another big reason: Lack of clear feedback loops between action and outcome. It's hard to know you're causing harm if you can't trace action to outcome. This allows, even encourages, some self-delusion. Given that we are prone to confirming our beliefs — and presumably we took action because we believed it to be helpful — we're unlikely to see evidence that contradicts our beliefs. We should be seeking disconfirming evidence to our actions. But we don't, because if we did, we'd realize we are a lot less smart than we think we are.
And the third major contributor, I'd say, is our bias for action (especially what we consider positive action). This is also known as, to paraphrase Charlie Munger, "do something syndrome." If you're a policy advisor or politician, or heck, even a modern office worker, social norms make it hard for you to say, "I don't know." You're expected to have an answer for everything.
Think about how a typical meeting starts. In response to a new product from a competitor, for example, the first question people usually ask is "What are we going to do about this?" The hidden assumption that goes unexplored is that you need to do something. It could be that the cost of doing something outweighs the benefits.
Medicine has known about iatrogenics since at least the fourth century before our era — primum non nocere ("first do no harm") is a first principle attributed to Hippocrates and integrated in the so-called Hippocratic Oath taken by every medical doctor on his commencement day…
The very notion of iatrogenics is quite absent from the discourse outside medicine (which, to repeat, has been a rather slow learner). [Antifragile]
The concept applies well beyond medicine: it covers any domain where, under the guise of knowledge, we cause more harm than good.
More from Farnam Street...
- The Bed of Procrustes — 20 aphorisms from Nassim Taleb
- Nassim Taleb offers 5 ways to have a great day
- Biology, antifragility, and nature