The new civil rights frontier: artificial intelligence
Experts worry that AI could further inequality and discrimination


Artificial intelligence continues to grow across many industries. It is expected to displace 85 million jobs globally by 2025 while potentially generating 97 million new roles, according to the World Economic Forum's Future of Jobs Report 2020. But the growth of artificial intelligence is shedding light on another problem: a lack of diversity in its training data. AI is trained on existing data, much of which excludes women and people of color, raising questions about whether the technology can be applied fairly across the board.
How can AI be biased?
AI builds up its knowledge base through machine learning, which essentially trains the technology by feeding it data. The problem is that much of our pre-existing data excludes a vast number of people, namely women and minorities. The most pointed example is in health data, where "80% or more of clinical trials have historically relied on the western population when it comes to patient recruitment," Harsha Rajasimha, founder and executive chairman of the Indo-US Organization for Rare Diseases, a nonprofit that studies rare diseases, told MedTech Intelligence.
Many worry that these biases will become ingrained in AI systems. "If you mess this up, you can really, really harm people by entrenching systemic racism further into the health system," Mark Sendak, a lead data scientist at the Duke Institute for Health Innovation, told NPR. The problem may already be materializing: there have been instances where facial recognition software was unable to identify Black faces. "The impact on minority communities — especially the Black community — is not considered until something goes wrong," California Rep. Barbara Lee (D) said during a panel at the annual Congressional Black Caucus legislative conference. For example, AI technology could inadvertently discriminate between white and Black job applicants based on previous hiring data.
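The hiring example above can be illustrated with a minimal sketch. The data below is entirely hypothetical: a naive frequency model is "trained" on past hiring records in which equally qualified candidates from group B were hired less often, and the model then reproduces that skew in its predictions.

```python
# A minimal sketch (hypothetical data) of how a model trained on skewed
# historical hiring records can reproduce that skew in its predictions.

# Hypothetical training records: (group, qualified, hired)
history = [
    ("A", True, True), ("A", True, True), ("A", True, True),
    ("B", True, False), ("B", True, False), ("B", True, True),
]

# "Train" a naive frequency model: estimate P(hired | group) from past outcomes.
hire_rate = {}
for group in {g for g, _, _ in history}:
    outcomes = [hired for g, _, hired in history if g == group]
    hire_rate[group] = sum(outcomes) / len(outcomes)

def predict(group, threshold=0.5):
    # The model never looks at qualifications; it has only learned
    # the historical correlation between group membership and hiring.
    return hire_rate[group] >= threshold

print(predict("A"))  # True  — echoes the historical skew
print(predict("B"))  # False — equally qualified, yet rejected
```

Real hiring models are far more complex, but the failure mode is the same: when a protected attribute (or a proxy for it) correlates with past outcomes, a model optimizing to match those outcomes will encode the correlation.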
How can it be fixed?
The biases within artificial intelligence often reflect the biases of humanity as a whole. "Our propensity to think fast and fill in the blanks of information by generalizing and jumping to conclusions explains the ubiquity of biases in any area of social life," wrote Fast Company. This translates into much of the data we have on Earth — and therefore what gets imprinted onto AI. "AI can only be unbiased if it learns from unbiased data, which is notoriously hard to come by," Fast Company added. Even if an AI algorithm is created to be unbiased, "it doesn’t mean that the AI won’t find other ways to introduce biases into its decision-making process," Vox wrote.
The good news is that experts believe this problem can be solved. "Even though the early systems before people figured out these techniques certainly reinforced bias, I think we can now explain that we want a model to be unbiased, and it’s pretty good at that," Sam Altman, OpenAI's CEO and co-founder, told Rest of World. "I’m optimistic that we will get to a world where these models can be a force to reduce bias in society, not reinforce it."
Some programs are trying to get ahead of the curve. A new AI model called Latimer "deeply incorporates cultural and historical perspectives of Black and Brown communities," Forbes reported. "We are establishing the building blocks of what the future of AI needs to include, and in doing so, we are working to create an equitable and necessary layer of technology that can be utilized by all demographics," Latimer founder and CEO John Pasmore told Forbes. Most experts agree that AI has great potential to do good across a number of industries, as long as its pitfalls are taken into account. "AI is not bad for diversity — if diversity is part of the design itself," Fast Company concluded.
Devika Rao has worked as a staff writer at The Week since 2022, covering science, the environment, climate and business. She previously worked as a policy associate for a nonprofit organization advocating for environmental action from a business perspective.