How much freedom would you be willing to give up to live a life nearly free of risk?
I found myself pondering that question as I perused a recent article in The Washington Post that reads in places like a press release from the advocacy group Mothers Against Drunk Driving. The piece reports on a provision of the $1 trillion infrastructure bill President Biden is set to sign next week. It will mandate that automobile manufacturers install technology in new cars "to stop drunk people from driving" using breath or blood sensors or discreet cameras looking for signs of impairment.
According to the National Highway Traffic Safety Administration, more than 10,000 people died in 2019 in crashes involving an alcohol-impaired driver. Nearly all of those people, and similar numbers every other year, would have been saved in a world where such technology was present in all cars. What's not to love?
I admit that, at first sight, it's difficult to say — but that's because I, like you, have grown accustomed to living in a risk-averse society, so much so that any rule or regulation that increases safety and protects people from the risk of injury and death seems self-evidently sound. But is it? How much freedom and responsibility should we be willing to surrender in order to limit risk?
That's a question we very much need to begin thinking through — because as technology advances, especially in the direction of artificial intelligence (AI), the possibilities for technological oversight and coercion of our actions will only increase, leaving us tempted to embrace these innovations without pondering their furthest-reaching implications.
It's one thing to think, as most of us do, that cars should include passive devices intended to protect drivers and passengers from severe injury and death. Seat belts and air bags work in this way — in the background, doing their work of protection only in the event of an accident. They don't forestall the accident from happening in the first place. A computer monitoring system that looks for evidence of impairment and then locks out the driver when inebriation is detected is very different. In such cases, the technology overrides the driver's agency and stands in for his or her defective judgment. The machine makes the decision — like a parent or guardian assigned to babysit the human driver.
I've never driven drunk in my life, yet I bristle at the thought of my car monitoring my actions and potentially overruling my judgment and decisions. How many present-day Americans will feel the same way? I'm unsure — but I'm convinced the number would have been quite a bit higher in the past, and not only because people prior to the last few decades couldn't have envisioned technology sophisticated enough to perform such a function. The very idea that it would be reasonable to go to such lengths to protect ourselves against risk would have seemed absurd.
The United States is a country populated by immigrants, nearly all of whom opted to embrace considerable risk in leaving their homes to come here and make a life for themselves. Many came with little or nothing. Those who settled in the wilderness faced innumerable hardships and dangers. Those who chose to live in cities confronted filth and disease and often worked in perilous occupations to make ends meet.
That was the context for the rise of the regulatory state over the course of the 20th century. The government came to monitor the health and safety of the workplace, food, drugs, transportation, and consumer products — but only after a change in widely shared assumptions. For the first time, it became normal to think that the state should be empowered to protect citizens from the risk of harm, injury, and death. Instead of life itself being considered risky and misfortunes an unavoidable fact of living, we began to imagine a world in which the gravest dangers could be mitigated, consigned to the margins of existence.
European countries developed similar forms of regulation, but with noteworthy supplements and variations. In Germany, for example, various forms of personal insurance are an extremely popular way of mitigating risk. If something bad happens, you can make a claim and be compensated. But notice this assumes the risk itself can't be eliminated. Germans live with that reality and then try to protect themselves from its worst consequences.
Americans buy insurance, too, but in many cases mainly because we are required to carry it — on our homes, on our cars. The health insurance provided by employers and the government through Medicare and Medicaid or purchased by individuals doesn't count in most cases, because it functions less like insurance rightly understood (payment of a premium in return for protection against a specified risk and potential loss) than as an institution mediating between consumers (patients) and those providing health services (doctors, hospitals), including the negotiation of prices for those services.
Instead of relying on insurance to mitigate the consequences of risk, Americans have opted to seek the elimination of risk, to insulate ourselves as much as possible from exposure to danger. Hence the hesitancy of blue states to ease up on public-health restrictions surrounding the COVID-19 pandemic, even with vaccination rates in those parts of the country running quite high. Many public schools and private universities combine mask mandates with vaccine mandates, even though the latter renders the former almost entirely superfluous.
The most vaccine-hesitant parts of the country might seem more willing to accept risk, but that is deceptive. What is the rejection of a vaccine if not a refusal to accept the risk of trying it? Anti-vaxxers are only more willing than other Americans to live with the risk of catching COVID because they foolishly believe it's riskier to take one of the vaccines.
Will most Americans make a similar calculation when it comes to the drunk-driving mandate in the infrastructure bill? Will they accept the trade-off, giving up some of their personal freedom and responsibility in return for the promise of living in a country where they never need fear being injured or killed by a drunk driver? And as technological advances continue and accelerate, will they be willing to go even further, giving over ever greater parts of their lives to the oversight of machines programmed to act with our best interests in mind (with those interests defined in terms of the avoidance of physical harm)?
We may well answer in the affirmative. And perhaps that's the right call. Maybe allowing our decisions to be overseen and countermanded by powers beyond our control is worth the cost. I just wish I saw more signs that we were thinking deeply about the stakes and pondering their ramifications for our humanity before taking the plunge.