Artificial intelligence: The ethics of algorithms
Lawmakers are gearing up for some of the first efforts to regulate artificial intelligence, said Karen Hao in the MIT Technology Review—and “there are likely to be more to come.” AI algorithms can now determine what content you see online, whether you get a loan, and even whether a convict is granted parole. Handing over decisions to AI can result in unexpected and troubling discrimination. In one test of job ads on Facebook, “postings for preschool teachers and secretaries, for example, were shown to a higher fraction of women, while postings for janitors and taxi drivers were shown to a higher proportion of minorities”—even though the researchers hadn’t asked Facebook to discriminate. The Algorithmic Accountability Act, introduced last week by Democrats in the House and Senate, would require big companies to “audit their machine-learning systems for bias and discrimination.” But these are not easy problems to fix, because they raise fundamental questions of fairness. For instance, in evaluating parole, does fairness mean “the same proportion of black and white individuals should get high-risk assessment scores? Or that the same level of risk should result in the same score regardless of race?”
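The tension between those two definitions of fairness can be made concrete. The following sketch (all data and names invented for illustration, not taken from any real risk-assessment system) contrasts "demographic parity," where the same fraction of each group is flagged high-risk, with "equal treatment at equal risk," where the same estimated risk level always yields the same score regardless of group:

```python
# Toy illustration of two competing fairness definitions in risk scoring.
# All data below is invented; no real parole or credit system is modeled.

def flag_by_risk(records, threshold=0.5):
    """Score purely on the risk estimate, ignoring group membership."""
    for r in records:
        r["high_risk"] = r["risk"] >= threshold
    return records

def high_risk_rate(records, group):
    """Fraction of `group` members flagged high-risk."""
    members = [r for r in records if r["group"] == group]
    return sum(r["high_risk"] for r in members) / len(members)

# Hypothetical data in which group A happens to skew toward higher
# estimated risk than group B.
records = flag_by_risk([
    {"group": "A", "risk": 0.8},
    {"group": "A", "risk": 0.6},
    {"group": "A", "risk": 0.3},
    {"group": "B", "risk": 0.7},
    {"group": "B", "risk": 0.2},
    {"group": "B", "risk": 0.1},
])

# Equal treatment at equal risk holds by construction: everyone faces the
# same threshold. Yet demographic parity fails, because the groups end up
# flagged at different rates.
print(high_risk_rate(records, "A"))  # 2/3 of group A flagged
print(high_risk_rate(records, "B"))  # 1/3 of group B flagged
```

Satisfying both criteria at once is generally impossible when the groups' underlying risk distributions differ, which is why the article calls these "fundamental questions of fairness" rather than engineering bugs.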
Letting big tech companies answer these questions themselves will never work, said Turing Award winner Yoshua Bengio in Nature.com. “AI can amplify discrimination and biases, such as gender or racial discrimination, because those are present in the data the technology is trained on, reflecting people’s behavior.” The companies building AI systems don’t have the incentive to fix this, so “the dangers of abuse are very real.” In fact, companies that put limitations on AI to “follow ethical guidelines would be disadvantaged with respect to the companies that do not.” So counting on the tech industry to regulate itself is like relying on “voluntary taxation.” Even if the industry is willing to take on the issues, said Parmy Olson in The Wall Street Journal, self-governance poses serious challenges. Google has disbanded not one but two “high-profile, global, independent ethics councils” over protests from outside and disagreements with its own AI research unit.
Don’t conclude that AI experts are any less concerned about fairness than legislators and regulators, said Ariel Procaccia in Bloomberg.com. “Practical ideas for ensuring that artificial intelligence is ethical and fair are gushing from inside the tech profession.” AI specialists are considering dozens of proposals to “spell out what fairness means in mathematical terms.” These days, key papers in machine learning routinely grapple with ethical questions. “The perception that AI researchers and developers care more about algorithms and robots than about people is misguided.”