Ever find yourself saying you want to "delve into" an issue or "navigate the landscape" of it? If so, your use of language might have been influenced by artificial intelligence.
Where once chatbots "learned from human writing", now the "influence may run in the other direction", said The Atlantic. In this way at least, AI may have "already won its campaign for global dominance".
'Robotic undertone'

ChatGPT and other large language model tools are "designed to make writing easier by offering suggestions based on patterns in the texts they were trained on", said Ritesh Chugh, an associate professor of information and communications, on The Conversation. And because they are "trained on vast amounts of text from various sources", they "tend to favour" the most commonly used words and phrases in their "outputs".
This is having a clear effect on human language: in the 18 months after ChatGPT was launched, the use of words such as "meticulous", "delve", "realm" and "adept" increased by between 35% and 51%, according to a study by researchers at the Max Planck Institute for Human Development.
Certain words and phrases are "popping up" everywhere, said Chugh on The Conversation. They may "sound fancy" but their "overuse can make a text sound monotonous and repetitive". It will become trickier to "distinguish between individual voices and perspectives as everything takes on a robotic undertone".
'Losing verbal stumbles'

The danger is that "AI is quietly establishing who gets to sound 'legitimate'", said The Verge. Our verbal "imperfections" that "build trust" are at stake. We don't want to "lose the verbal stumbles, regional idioms and off-kilter phrases that display vulnerability, authenticity and personhood".
Another risk is that we may "lose agency over our thinking", said Los Angeles Magazine, and, instead of expressing our own thoughts, end up saying whatever AI helps us to articulate.
"In the end, writing should be about expressing your ideas in your own way," said Chugh. "While ChatGPT can help, it's up to each of us to make sure we're saying what we really want to – and not what an AI tool tells us to."