If we can, we will—whether or not we should. That’s pretty much the story of human beings and technology. Our species has a defining genius for creating new tools, but that genius races far ahead of our ability to think through the moral and practical consequences of our creations. This was true when the frontier of technology was the gun, and the automobile, and the atomic bomb. It is even truer today. Computer processing power still doubles roughly every 18 months, with quantum computers on the way; digital technology is evolving at such breakneck speed that it is quite impossible to foresee what our devices will be able to do—and how they will change our lives—10 years from now, let alone 20. Millions of very smart people are working long hours to create this unimaginable future, but virtually no one is asking: Should we go there? What happens if we do?
To ask such questions, I know, is to be hopelessly naïve. It’s as naïve as being shocked to learn that the government now has the capacity to obtain and analyze virtually all phone and Internet metadata. It could, so it did. And when we can, we will: implant chips in our brains to make us smarter; bioengineer babies to have desirable traits; make robots that replace millions of workers and even do the killing on the battlefield; create data-analysis and biometric identification systems so sophisticated that privacy and anonymity will be utterly moot. Futurist Ray Kurzweil predicts that by the year 2045, we and our computers will merge, and “there will be no distinction between man and machine.” Should we go there? When we can, we will.