The new civil rights frontier: artificial intelligence
Experts worry that AI could further inequality and discrimination
Artificial intelligence is continuing to grow across many industries. It is expected to replace 85 million jobs globally by 2025 while potentially generating 97 million new roles, according to the Future of Jobs Report 2020 from the World Economic Forum. However, the technology's growth is shedding light on another problem: a lack of diversity in the data used to build it. AI is trained on existing data, but much of that data excludes women and people of color, raising questions as to whether the technology can be properly applied across the board.
How can AI be biased?
AI builds up its knowledge base through machine learning, essentially training the technology by feeding it data. The problem is that much of our pre-existing data excludes a vast number of people, namely women and minorities. The most striking example is in health data, where "80% or more of clinical trials have historically relied on the western population when it comes to patient recruitment," Harsha Rajasimha, founder and executive chairman of the Indo-US Organization for Rare Diseases, a nonprofit that studies rare diseases, told MedTech Intelligence.
Many worry that biases will become inherently ingrained in AI systems. "If you mess this up, you can really, really harm people by entrenching systemic racism further into the health system," Mark Sendak, a lead data scientist at the Duke Institute for Health Innovation, told NPR. The problem may already be playing out: there have been instances where facial recognition software was unable to identify Black faces. "The impact on minority communities — especially the Black community — is not considered until something goes wrong," California Rep. Barbara Lee (D) said during a panel at the annual Congressional Black Caucus legislative conference. AI technology could also inadvertently discriminate between white and Black job applicants based on previous hiring data.
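To see the mechanism concretely, here is a hypothetical sketch (all names and numbers are invented for illustration, not drawn from any real hiring system): a model that simply learns hiring rates from historical records will faithfully reproduce whatever disparities those records contain, with no malicious intent anywhere in the code.

```python
# A toy "hiring model": it learns per-group hire rates from
# historical (group, hired) records. If the history is biased,
# the learned scores are biased too.
from collections import defaultdict

def train_hire_rates(records):
    """Learn each group's historical hire rate from (group, hired) pairs."""
    counts = defaultdict(lambda: [0, 0])  # group -> [hires, total]
    for group, hired in records:
        counts[group][0] += int(hired)
        counts[group][1] += 1
    return {g: hires / total for g, (hires, total) in counts.items()}

# Invented historical data in which group B was hired far less often.
history = ([("A", True)] * 80 + [("A", False)] * 20
           + [("B", True)] * 20 + [("B", False)] * 80)

rates = train_hire_rates(history)
print(rates)  # the "model" now scores group A at 0.8 and group B at 0.2
```

The point of the sketch is that the disparity comes entirely from the training data: nothing in the code mentions race or gender, yet the output mirrors the historical pattern, which is exactly how bias enters real systems trained on unrepresentative records.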
How can it be fixed?
The biases within artificial intelligence often reflect the biases of humanity as a whole. "Our propensity to think fast and fill in the blanks of information by generalizing and jumping to conclusions explains the ubiquity of biases in any area of social life," Fast Company wrote. Those biases pervade much of the data we collect, and therefore what gets imprinted onto AI. "AI can only be unbiased if it learns from unbiased data, which is notoriously hard to come by," Fast Company added. Even if an AI algorithm is created to be unbiased, "it doesn’t mean that the AI won’t find other ways to introduce biases into its decision-making process," Vox wrote.
The good news is that experts believe this is a problem that can be solved. "Even though the early systems before people figured out these techniques certainly reinforced bias, I think we can now explain that we want a model to be unbiased, and it’s pretty good at that," Sam Altman, the co-founder and CEO of OpenAI, told Rest of World. "I’m optimistic that we will get to a world where these models can be a force to reduce bias in society, not reinforce it."
Some programs are trying to get ahead of the curve. A new AI model called Latimer "deeply incorporates cultural and historical perspectives of Black and Brown communities," Forbes reported. "We are establishing the building blocks of what the future of AI needs to include, and in doing so, we are working to create an equitable and necessary layer of technology that can be utilized by all demographics," Latimer founder and CEO John Pasmore told Forbes. Most experts agree that AI has great potential to do good across a number of industries, as long as steps are taken to address its pitfalls. "AI is not bad for diversity — if diversity is part of the design itself," Fast Company concluded.
Devika Rao has worked as a staff writer at The Week since 2022, covering science, the environment, climate and business. She previously worked as a policy associate for a nonprofit organization advocating for environmental action from a business perspective.