The Germanwings crash, and the folly of risk analysis
A post-9/11 security measure may have contributed to the crash
While the investigation into the tragic crash of a Germanwings plane in the French Alps is not over, some very disturbing news has already come to light. According to French investigators, the co-pilot intentionally crashed the plane after locking the other pilot out of the cockpit and taking over the flight's controls.
The horrifying irony here is that the co-pilot was reportedly able to crash the plane thanks to a post-9/11 security measure: cockpit doors that can be locked from the inside in case of a hijacking.
Plane travel is astonishingly safe. We should never stop marveling at the fact that we can take these extraordinary things, enormous and heavy metal machines, and fly them through the air. You are safer flying at hundreds of miles per hour, tens of thousands of feet above the ocean, than you are crossing the street.
That being said, crashes do happen. In most cases, there was no systemic negligence, meaning that the cause of the crash could not have been anticipated and stopped.
A plane crash will typically be described as a tail risk — a risk that is both vanishingly unlikely and highly damaging — but this is a bit of a misnomer.
The concept of risk is a very recent one, associated with the modern era, and it implies quantification. People have probably known about uncertainty since the dawn of recorded time, but the concept of risk arose when men got the idea that they could measure, and therefore predict and manage, uncertainty.
But, as the economist Frank Knight argued, there is risk, and then there is uncertainty. Risk can be quantified; uncertainty, by definition, cannot. And problems arise when we try to treat uncertainty as if it were risk.
The simple fact of the matter is that not everything can be planned, not everything can be predicted, not everything can be managed. The same cockpit locks that would have prevented 9/11 may have helped cause the Germanwings crash.
The world is simply too complex, and our minds too narrow to quantify everything. But this isn't the real problem. The real problem is that we don't want this to be true. We want control. We want to be able to know everything, to control everything.
In the end, we humans are really prideful monkeys, who not only don't know much, but are too arrogant to realize it — the worst imaginable combination.
This has broad-ranging implications. Famously, Nassim Taleb repackaged the Knightian notion of uncertainty as "black swans" and showed how the self-deluding refusal to treat uncertainty as uncertainty played a key role in the 2008 financial crisis that brought the world economy to its knees. The premise of the financial world that came crashing down in 2008 was that everything was risk and nothing uncertainty, and therefore everything could be measured and managed. When reality refused to cooperate, the whole edifice collapsed.
There is also the belief that central planning is possible. The Nobel Prize-winning economist and philosopher Friedrich Hayek famously called this "the fatal conceit": the belief that anyone can have enough information to centrally plan anything as complex as a human society, a conceit that requires not only delusion but hubris. There are versions of this on both sides of the aisle, by the way: The commonality between the invasion of Iraq and ObamaCare was the idea that as long as you have a bunch of really smart people formulating policy, you can do anything.
We also see this at the personal level: Very often the self-delusion that we can understand and predict the people around us causes us to make grave mistakes and treat our fellow human beings as pawns rather than brethren.
This is why epistemic humility — the stubborn belief that one knows less than one thinks — is such an important virtue. There are unknown unknowns. We should mind Chesterton's Fence: never tear down a fence until you understand why it was put up in the first place.
In the end, our refusal to accept this is a refusal of our own extremely precarious human condition. But shedding our delusions of grandeur is exactly what growing up means.