The new civil rights frontier: artificial intelligence
Experts worry that AI could further inequality and discrimination
Artificial intelligence is continuing to grow in many industries. It is expected to displace 85 million jobs globally by 2025 while potentially generating 97 million new roles, according to the Future of Jobs Report 2020 from the World Economic Forum. However, the growth of artificial intelligence is shedding light on another problem: a lack of diversity in the data it is built on. AI is trained with existing data, but much of that data excludes women and people of color, raising questions as to whether the technology can be properly applied across the board.
How can AI be biased?
AI has to build up its knowledge base through machine learning, essentially training the technology by feeding it data. The problem is that much of our pre-existing data excludes a vast number of people, namely women and minorities. The most striking example is in health data, where "80% or more of clinical trials have historically relied on the western population when it comes to patient recruitment," Harsha Rajasimha, founder and executive chairman of the Indo-US Organization for Rare Diseases, a nonprofit that studies rare diseases, told MedTech Intelligence.
Many worry that biases will become ingrained in AI systems. "If you mess this up, you can really, really harm people by entrenching systemic racism further into the health system," Mark Sendak, a lead data scientist at the Duke Institute for Health Innovation, told NPR. This problem may already be playing out, as there have been instances where facial recognition software was unable to identify Black faces. "The impact on minority communities — especially the Black community — is not considered until something goes wrong," California Rep. Barbara Lee (D) said during a panel at the annual Congressional Black Caucus legislative conference. For example, AI technology could inadvertently discriminate between white and Black job applicants based on previous hiring data.
How can it be fixed?
The biases within artificial intelligence often reflect the biases of humanity as a whole. "Our propensity to think fast and fill in the blanks of information by generalizing and jumping to conclusions explains the ubiquity of biases in any area of social life," wrote Fast Company. This translates into much of the data we have on Earth — and therefore what gets imprinted onto AI. "AI can only be unbiased if it learns from unbiased data, which is notoriously hard to come by," Fast Company added. Even if an AI algorithm is created to be unbiased, "it doesn’t mean that the AI won’t find other ways to introduce biases into its decision-making process," Vox wrote.
The good news is that experts believe this is a problem that can be solved. "Even though the early systems before people figured out these techniques certainly reinforced bias, I think we can now explain that we want a model to be unbiased, and it's pretty good at that," Sam Altman, a co-founder of OpenAI, told Rest of World. "I'm optimistic that we will get to a world where these models can be a force to reduce bias in society, not reinforce it."
Some programs are trying to get ahead of the curve. A new AI model called Latimer "deeply incorporates cultural and historical perspectives of Black and Brown communities," Forbes reported. "We are establishing the building blocks of what the future of AI needs to include, and in doing so, we are working to create an equitable and necessary layer of technology that can be utilized by all demographics," Latimer founder and CEO John Pasmore told Forbes. Most experts agree that AI has great potential to do good in a number of industries, as long as action is taken to address the pitfalls. "AI is not bad for diversity — if diversity is part of the design itself," Fast Company concluded.
Devika Rao has worked as a staff writer at The Week since 2022, covering science, the environment, climate and business. She previously worked as a policy associate for a nonprofit organization advocating for environmental action from a business perspective.