The new civil rights frontier: artificial intelligence
Experts worry that AI could further inequality and discrimination
Artificial intelligence continues to spread across industries. It is expected to displace 85 million jobs globally by 2025 while potentially generating 97 million new roles, according to the Future of Jobs Report 2020 from the World Economic Forum. The technology's growth, however, is shedding light on another problem: a lack of diversity in the data it is built on. AI is trained on existing data, but much of that data excludes women and people of color, raising questions as to whether the technology can be fairly applied across the board.
How can AI be biased?
AI has to build up its knowledge base through machine learning, essentially training the technology by feeding it data. The problem is that much of our pre-existing data excludes a vast number of people, namely women and minorities. The starkest example is in health data, where "80% or more of clinical trials have historically relied on the western population when it comes to patient recruitment," Harsha Rajasimha, founder and executive chairman of the Indo-US Organization for Rare Diseases, a nonprofit that studies rare diseases, told MedTech Intelligence.
Many worry that these biases will become ingrained in AI systems. "If you mess this up, you can really, really harm people by entrenching systemic racism further into the health system," Mark Sendak, a lead data scientist at the Duke Institute for Health Innovation, told NPR. The problem may already be materializing: there have been instances in which facial recognition software failed to identify Black faces. "The impact on minority communities — especially the Black community — is not considered until something goes wrong," California Rep. Barbara Lee (D) said during a panel at the annual Congressional Black Caucus legislative conference. AI could, for example, inadvertently discriminate between white and Black job applicants based on patterns in past hiring data.
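To make the hiring example concrete, here is a minimal, hypothetical sketch of how a model trained on skewed historical hiring decisions simply reproduces that skew. It is not drawn from any system mentioned in this article; the data, feature names and numbers are entirely synthetic and invented purely for illustration.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# Two groups (0 and 1) with identical underlying qualification distributions.
group = rng.integers(0, 2, size=n)
skill = rng.normal(0, 1, size=n)

# Synthetic "historical" hiring decisions that penalize group 1 regardless of skill.
hired = skill - 1.0 * group + rng.normal(0, 0.5, size=n) > 0

# Train a simple classifier on the biased history, with group membership as a feature.
X = np.column_stack([skill, group])
model = LogisticRegression().fit(X, hired)

# Two equally skilled applicants (skill = 0) get very different predicted chances,
# because the model has learned the historical pattern.
print("Group 0 applicant:", model.predict_proba([[0.0, 0.0]])[0, 1])
print("Group 1 applicant:", model.predict_proba([[0.0, 1.0]])[0, 1])
```

The model is not malicious; it is doing exactly what it was asked to do, which is to imitate the decisions in its training data.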
How can it be fixed?
The biases within artificial intelligence often reflect the biases of humanity as a whole. "Our propensity to think fast and fill in the blanks of information by generalizing and jumping to conclusions explains the ubiquity of biases in any area of social life," wrote Fast Company. Those habits shape much of the data humanity produces, and therefore what gets imprinted onto AI. "AI can only be unbiased if it learns from unbiased data, which is notoriously hard to come by," Fast Company added. Even if an AI algorithm is created to be unbiased, "it doesn’t mean that the AI won’t find other ways to introduce biases into its decision-making process," Vox wrote.
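One common version of the "other ways" Vox describes is proxy variables. Below is a minimal, hypothetical sketch, again with entirely synthetic data, of how dropping the protected attribute from a model's inputs does not help when another feature (here an invented "neighborhood" code) is strongly correlated with it.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 10_000

group = rng.integers(0, 2, size=n)
skill = rng.normal(0, 1, size=n)

# A proxy feature strongly correlated with group membership
# (think of segregated zip codes standing in for race).
neighborhood = ((group + rng.normal(0, 0.3, size=n)) > 0.5).astype(float)

# The same kind of biased historical outcomes as before.
hired = skill - 1.0 * group + rng.normal(0, 0.5, size=n) > 0

# "Fairness through unawareness": the group column is deliberately left out...
X = np.column_stack([skill, neighborhood])
model = LogisticRegression().fit(X, hired)

# ...yet predictions for equally skilled applicants still diverge by neighborhood,
# because the proxy lets the model reconstruct the group.
print("Neighborhood 0 applicant:", model.predict_proba([[0.0, 0.0]])[0, 1])
print("Neighborhood 1 applicant:", model.predict_proba([[0.0, 1.0]])[0, 1])
```

This is why researchers argue that removing sensitive attributes from the data is not, by itself, a fix.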
The good news is that experts believe this is a problem that can be solved. "Even though the early systems before people figured out these techniques certainly reinforced bias, I think we can now explain that we want a model to be unbiased, and it’s pretty good at that," Sam Altman, the co-founder and CEO of OpenAI, told Rest of World. "I’m optimistic that we will get to a world where these models can be a force to reduce bias in society, not reinforce it."
Some programs are trying to get ahead of the curve. A new AI model called Latimer "deeply incorporates cultural and historical perspectives of Black and Brown communities," Forbes reported. "We are establishing the building blocks of what the future of AI needs to include, and in doing so, we are working to create an equitable and necessary layer of technology that can be utilized by all demographics," Latimer founder and CEO John Pasmore told Forbes. Most experts agree that AI has great potential to do good across industries, as long as its pitfalls are taken into account. "AI is not bad for diversity — if diversity is part of the design itself," Fast Company concluded.
Devika Rao has worked as a staff writer at The Week since 2022, covering science, the environment, climate and business. She previously worked as a policy associate for a nonprofit organization advocating for environmental action from a business perspective.