There were many genuinely inspiring moments in Google's keynote presentation this week, but it was a small detail that caught my eye. In the midst of its annual I/O conference, a new feature in the Android photos app was shown off: Users who have old black and white photos will now be able to colorize them. What once took hours or even days of painstaking work will soon happen instantly on your phone.
It was indicative of the tone of Google's presentation, probably the most impressive tech keynote that has come along in some time. From digital assistants that can make phone calls to cameras that can recognize street signs, Google was on a mission to wow its audience. It generally succeeded. Yet, with all new shiny tech comes a host of ethical concerns. And simmering underneath the showy presentation was a sense that Google's mission wasn't just to impress, but that the company was finally tackling its biggest enemy: itself.
Few things highlighted this more strongly than the new Google News app. The app is superficially a predictable update: It uses Google's new focus on AI to tailor stories to a user's interests, and it has a headlines section that people can quickly scan to see what's going on in the world. But the app also has a new "Full Coverage" feature, which lets readers see a variety of sources on a specific news story, giving them a bird's eye view of how a story developed and its background.
The feature is clearly a reaction to the problem of "fake news." Google and Facebook have both come under scrutiny for how they have presented news over the past couple of years, whether for amplifying dubious sources or becoming paths to conspiracy theories. Full Coverage is Google's attempt to try and present news "objectively" by drawing on a variety of outlets.
Whether it will have an effect remains to be seen, but it's part of a broader effort by the company to combat problems of its own making. After all, the rise of fake news, information overload, and the dilution of trust in media has been enabled in part by Google itself — by its search algorithm, its autocomplete controversies, conspiracy theories on YouTube, and more. There is also the more straightforward fact that Google has thus far behaved much like Facebook: it tries its best to remain neutral, and in doing so has allowed a series of negative effects to flourish, letting popularity or virality rather than quality determine how often something is seen.
Indeed, that's the reason behind another announcement: a $300 million initiative to fight fake news and help sustain quality journalism, and even combat informational tampering in elections. Google is also tweaking its own algorithms to help surface authoritative sources in moments when news is breaking to prevent the spread of conspiracy theories — though it should be noted that without tackling YouTube, where many conspiracy theories spread, the initiative can only be a half-measure.
But it isn't just news in which Google is recognizing its responsibility. The newest version of Android will come with a set of features to help limit smartphone use. For example, an app called Dashboard will let users see how much they've used their phone, how many times they've picked it up, and how much they've used specific apps. In addition, you can set limits on how long you can use an app, after which the app simply won't open or will appear in grayscale on your home screen.
It's an admission by the makers of the world's most popular operating system that there are problems not only in how we use our phones, but the way in which apps and tech in general are designed to induce compulsive behavior. Many people struggle to put down their phones, even when they know it may be harming them. It's not that what happens on screens is inherently flippant — it can at times be vital or life-affirming — but it can be addictive and potentially destructive.
That Google is recognizing its responsibility is promising. Silicon Valley has for far too long taken a libertarian approach to the effects of its own creations — it lets them loose in the world and the effects are simply what they are. But the recent moves from tech companies — not just Google, but also Facebook, Microsoft, and more — suggest that the Valley is realizing it has a role to play in not just making things, but determining how they are used and to what ends. Tech is realizing that it's often its own worst enemy, and is now on the road to trying to fix it.
All the same, it's clear that this new path to responsibility is going to be a rocky one. One of the most jaw-dropping moments from the keynote this week was Google's demonstration of a technology that lets its Assistant make appointments over the phone, mimicking real human speech to do so. It was genuinely impressive tech, as the AI chatbot responded to a real human in ways that seemed almost too fluid to be true — and indeed, Google later clarified that for now the Assistant only works in highly specialized encounters.
But the problem with the demo was clear: The Assistant tried to mimic human speech and thus represented an attempt to fool the person on the other end of the phone. Google even added "uhhs" and "umms" to its speech patterns to make its voice assistant seem more believable. Yet conflating human and digital speech without clearly demarcating the two raises a host of ethical questions — not least the effect on real people whose job it will now be to talk to digital bots all day. This is to say nothing of the moral effects of a world in which humans and digital assistants are indistinguishable.
It was a sign that despite Google's attempts to rectify its own mistakes, it continued to make new ones, not predicting problems that may arise. It was reason for pause in an otherwise promising presentation — a sign that though Google can do amazing things with just the press of a button, it still has a ways to go when it comes to being the kind of responsible company it is claiming to be.