Something uncanny happened this Sunday. As we were shuffling out the door, we looked at my phone's notification screen. It was telling us how long the drive would be to Norwalk, Conn., based on the current traffic.
Gadget geeks will recognize this as the iPhone's Frequent Locations feature. Norwalk is where we go to church. That phone has traveled to Norwalk all but one of the Sunday mornings of its activated existence. It just so happened that I was worrying about whether we would be late when the notification popped up. For that reason, it was welcome. But that welcoming feeling — part of a pattern of social engineering perpetrated by technology — also troubled me.
In an odd way, this episode was a preview of how digital technology could become more pervasive and influential in our lives, while becoming subtler in its execution. As our machines and social networks learn from our personal habits, we demand this kind of automation and intelligence. When you enter a store, the owners would love to give you little reminders that the ketchup in your refrigerator has expired. It wouldn't be that difficult to pull off, if you correlated the bar code, the date and time of purchase, and the credit card. It could even be a sign that flashes just above the ketchup bottles themselves, using a little near-field communication, or NFC, trickery and a secure connection.
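The correlation described above really is trivial to pull off. Here is a minimal sketch, assuming a store's hypothetical purchase records; every bar code, shelf life, and date below is invented for illustration:

```python
from datetime import date, timedelta

# Hypothetical shelf lives keyed by bar code (invented values).
SHELF_LIFE_DAYS = {"0001": 180}  # ketchup

def expired_items(purchases, today):
    """Return bar codes whose purchase date plus shelf life has passed.

    purchases: list of (barcode, purchase_date) pairs, the kind of record
    a credit card ties back to the shopper walking in the door.
    """
    out = []
    for barcode, purchased_on in purchases:
        life = SHELF_LIFE_DAYS.get(barcode)
        if life is not None and purchased_on + timedelta(days=life) < today:
            out.append(barcode)
    return out

purchases = [("0001", date(2014, 1, 5))]
print(expired_items(purchases, date(2014, 9, 1)))  # ["0001"]
```

The hard part is not the logic; it is the data linkage and the legal cover, which is the article's point.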
Phones could study my behavior over years and remind me that I typically rent a camera lens for Christmas, and so send me a coupon over Thanksgiving. Or worse, the manufacturer could raise the price of the rental knowing that my personal demand is likely to go up in that season. All it would take is a few APIs (application programming interfaces), and some strategically placed legal language in the terms that accompany the first point of purchase.
Apple is adding fitness apps that can communicate with doctors. It's not hard to imagine the phone sending you notifications on new clothing for your fitter figure, complete with flattery about your discipline, effort, and worthiness.
Right now, all my iPhone knows is that I go to Norwalk on Sunday mornings. With just a little more refinement — say, smarter access to my Facebook account and Web browser — my phone could figure out that I'm going to a Latin Mass in Norwalk. It could automatically crawl the parish's webpage and add to my calendar the appropriate Mass on holy days of obligation that fall midweek. It could give me an early nudge about traffic on the Merritt Parkway, which is infallibly worse on weekdays.
But would it?
It's hard to imagine Apple neglecting to send reminders for reputable lifestyle choices like a weekly workout, but what about disreputable ones? What if an iPhone user were regularly going to an extremist political meeting? Could Apple's "helpful" urgings and instructions make the company liable for incitement, if that user goes on to commit a crime?
That's an extreme case, but the point is that one can easily imagine engineers forbidding their products from "helping" their customers turn against social causes dear to the heart of Apple's executives. Say goodbye to automatic reminders about the "wrong" sort of churchgoing. Or the wrong politicking.
The very seamlessness of digital technologies also grants them an authority that invites abuse. Google's search algorithm contains and dispenses a tremendous amount of cultural cachet. But, rightly, Google search's auto-complete suggestions do not include obscenities or references to pornography.
More worryingly, Google sometimes apparently makes individual interventions in the algorithm for political and business reasons. Google has been criticized for occasionally complying with authoritarian states in screening out certain Web content from their searches. I can even remember, as John Carney once noted, when three-time presidential candidate Patrick J. Buchanan's name was mysteriously absent from Google's auto-complete suggestions, almost certainly the work of a single line of code, likely written by a single Buchanan-hating coder. That last example shows that some coders could easily act out passive-aggressive vendettas against entire classes of people.
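A "single line of code" intervention like the one described is easy to picture. This is not Google's actual pipeline, just a hypothetical sketch of an auto-complete filter with one quietly inserted blocklist line:

```python
def suggest(prefix, candidates):
    """Return candidate completions matching the typed prefix."""
    results = [c for c in candidates if c.lower().startswith(prefix.lower())]
    # The one line a vendetta needs: silently drop one person's name.
    results = [c for c in results if "buchanan" not in c.lower()]
    return results

print(suggest("pat", ["Patrick J. Buchanan", "Pat Sajak"]))  # ["Pat Sajak"]
```

Nothing in the output signals that anything was removed, which is exactly why such interventions are hard to perceive.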
Could Facebook, with the proper social prompts, make middle-class Mormon Latinos who like It's Always Sunny in Philadelphia statistically more likely to drink alcohol? Probably. Facebook's own experiment testing whether a few tweaks to the news feed could make its users feel more negative and angry was instructive.
People often complain of the liberal bias of news coverage. But Silicon Valley has a culture and biases, too.
Imagine the more libertarian, atheistic, pro-business bias of Silicon Valley penetrating into the emotional and intellectual life of your social networks, into the push reminders on your phone, into what shows up on your calendar and search results. Algorithms and programs have human authors just as news stories do. And these authors, being human, are just as likely to confuse their own convictions with objective reality and their own selfish biases with beneficence. Perhaps more so because they are richer and held in higher esteem than reporters.
But their influence is harder to perceive. The mathematical formulas and data sets upon which they work are taken to be authorities themselves, however biased their construction.
Google still occasionally gets the blame for the embarrassing search habits of humanity, and is under ever more pressure to push and pull its results according to special interests or ideology. It seems that the major technology companies will play as large a role in the culture wars as the government currently does, or an even larger one. Nancy Pelosi and Rand Paul cannot yet add events to my calendar, or control the books I peruse on digital shelves. The federal government controls where tax dollars flow, but a few companies and social networks have the ability to sluice the far more powerful currents of social status.