The threat of bias in the latest wave of generative artificial intelligence may be in the spotlight lately, but social media algorithms already have discrimination problems. Some creators from marginalized communities have expressed frustration with algorithms that appear to be biased against them, robbing them of critical engagement.
How do social media algorithms discriminate against some creators?
While content that doesn't violate any explicit terms can't be outright banned, social media companies still have ways of suppressing the work of some creators. Shadow-bans are "a form of online censorship where you're still allowed to speak, but hardly anyone gets to hear you," The Washington Post explained. A creator's content might not be removed, but some creators notice that engagement with their posts plummets outside of their immediate friends. "Even more maddening, no one tells you it's happening," the Post added.
Content creators have long decried the lack of transparency with shadow-bans. Late last year, the practice made headlines when Twitter owner Elon Musk released the Twitter Files, internal company documents intended to show how "shadow-banning was being used to suppress conservative views," the Post said.
Shadow-banning is a form of algorithmic bias that disproportionately affects specific demographics because the "unconscious biases of the developers are embedded in the systems they create," Annie Brown wrote for Forbes. Additionally, "algorithms are trained by data gathered from human history, a history replete with violence, inequity, bias and cruelty," Brown posited. Shadow-bans are "just one symptom of the inherent bias, racism and marginalization algorithms have detected and AI has co-opted," Brown opined. "Seen this way, AI, under the guise of observation and platform moderation, has embedded our cultural biases and threatens to perpetuate discriminatory human behavior."
Who has accused social media companies of algorithmic bias?
Black creators have been speaking out about the suppression of their content since TikTok was accused of suppressing Black creators' posts during the George Floyd protests in 2020. The company later released a statement apologizing for a "technical glitch" that made it temporarily appear that "posts uploaded using #BlackLivesMatter and #GeorgeFloyd would receive 0 views." Some creators alleged that their engagement still went down after posting content with those hashtags.
The following year, creators pointed out that terms like "Black Lives Matter" and "Black people" were flagged as inappropriate by the automated moderation system. In contrast, words like "white supremacy" or "white success" did not trigger a warning. Black dancers and choreographers also alleged that TikTok's recommendation algorithm prioritized white creators who copied their dances without giving them credit. This eventually led them to stage a content strike on the platform that year.
LGBTQ+ content creators have also raised concerns about their posts being taken down with little to no explanation, a practice labeled as "the digital closet" by researcher and author Alexander Monea in his book of the same name. For his book about the overpolicing of LGBTQ-centered online spaces, Monea spent two years looking through data and collecting anecdotes from LGBTQ+ social media users who reported "being censored, silenced or demonetized," ABC News explained.
"Once the internet is largely controlled by a very few companies that all use an advertising model to drive their revenue, what you get is an overpoliced sort of internet space," Monea told ABC's "Perspective" podcast.
When Tumblr adopted an adult content ban in 2018, reports that the ban disproportionately affected LGBTQ+ users led to an investigation by New York City's Commission on Human Rights. In an interview, Monea said the "automated content moderation algorithms that Tumblr implemented to help institute its new ban" were "comically inept but with tragic consequences." Many LGBTQ+ users lost all of their content "with no redress and no way to recover their lost content or user base," Monea added.
In 2022, Tumblr reached a settlement with the commission over the allegations of discrimination against LGBTQ+ users. The settlement required the platform to "revise its user appeals process and train its human moderators on diversity and inclusion issues, as well as review thousands of old cases and hire an expert to look for potential bias in its moderation algorithms," The Verge summarized.
How do creators cope with algorithmic bias?
To avoid the looming threat of shadow-banning, some content creators have taken to using workarounds "such as not using certain images, keywords or hashtags or by using a coded language known as algospeak," the Post explained.
"There's a line we have to toe; it's an unending battle of saying something and trying to get the message across without directly saying it," TikTok creator Sean Szolek-VanValkenburgh told the Post. "It disproportionately affects the LGBTQIA community and the BIPOC community because we're the people creating that verbiage and coming up with the colloquiums."
Some creators have attempted to fight back against social media companies accused of discriminatory moderation with lawsuits. However, "bias allegations against social media platforms have rarely succeeded in court," The Verge noted. YouTube prevailed in two lawsuits brought by LGBTQ+ and Black video creators who alleged algorithmic discrimination.