Instagram hopes that blurring nudity in messages will make teens safer
The option will be turned on by default for users under 18


Parents have long raised concerns over the types of images their children can be exposed to on Instagram, and now the social media brand is taking a new step to try to fight sexual exploitation on the app. In a major change to its interface, Instagram will begin automatically blurring nude images in direct messages.
The change, announced by Instagram's parent company Meta on April 11, is part of a series of new tools designed to minimize child sexual abuse and exploitation across social media brands. The tools will "help protect young people from sextortion and intimate image abuse" and also "make it more difficult for potential scammers and criminals to find and interact with teens," Meta said in a press release. The company is also testing new ways to "help people spot potential sextortion scams."
Instagram has been criticized in the past for not being strict enough about stopping children from receiving sexualized content. Is the blurring of direct messages the first step toward a new beginning for the brand?
How will Instagram's nudity blurring feature work?
When a user receives an "image containing nudity, it will be automatically blurred under a warning screen, meaning the recipient isn't confronted with a nude image and they can choose whether or not to view it," Meta said. Instagram will additionally show the user a message "encouraging them not to feel pressure to respond, with an option to block the sender and report the chat." This feature will be automatically enabled for users under 18, while adults will receive messages encouraging them to activate it.
On the other end, people who try to send a nude image will "see a message reminding them to be cautious when sending sensitive photos, and that they can unsend these photos if they've changed their mind," said Meta. Additionally, anyone trying to forward a nude image will receive a message urging them to reconsider.
Some people have expressed skepticism that Instagram will be able to tell whether an image is explicit, but Meta said that its new tool "uses on-device machine learning to analyze whether an image sent in a DM on Instagram contains nudity." This also means that "nudity protection will work in end-to-end encrypted chats, where Meta won't have access to these images."
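To make that description concrete, here is a minimal, purely illustrative sketch in Python of how a client-side check like this could be wired up. None of it is Meta's code: the class names, the confidence threshold and the returned actions are all hypothetical. The only load-bearing ideas, taken from Meta's description, are that the classifier runs on the device and that the blur-and-warn decision happens before the image is shown, which is why the plaintext image never needs to leave an end-to-end encrypted chat.

```python
# Purely illustrative sketch; Meta has not published its implementation.
# Every name, signature and threshold here is a hypothetical stand-in.
from dataclasses import dataclass

NUDITY_THRESHOLD = 0.8  # assumed confidence cutoff, not a published value


@dataclass
class IncomingImage:
    pixels: bytes   # decrypted image bytes, available only on the device
    sender_id: str


class OnDeviceNudityModel:
    """Stand-in for the on-device machine learning model."""

    def score(self, pixels: bytes) -> float:
        # A real model would run inference locally, so the plaintext image
        # never leaves the device -- which is why the feature can work
        # inside end-to-end encrypted chats.
        return 0.0  # placeholder score for this sketch


def present_incoming_image(img: IncomingImage,
                           model: OnDeviceNudityModel,
                           user_is_minor: bool,
                           protection_opted_in: bool) -> dict:
    """Decide how the recipient's client should display an incoming DM image."""
    # On by default for under-18 users; adults are prompted to opt in.
    protection_active = user_is_minor or protection_opted_in
    if protection_active and model.score(img.pixels) >= NUDITY_THRESHOLD:
        return {
            "display": "blurred_behind_warning",
            "actions": ["view_anyway", "block_sender", "report_chat"],
            "note": "You don't have to respond.",
        }
    return {"display": "normal", "actions": []}
```

The key design point in Meta's description is that the decision is made on the recipient's device, so no server ever needs to inspect the unencrypted image.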
The blurring feature will be tested in the coming weeks with a global rollout in the next few months, said NBC News.
How will this change children's interactions on Instagram?
Some say it won't. While users can block a sender and report their chat if they receive a nude, the situation on Instagram won't change "until there is a way for a teen to say they've received an unwanted advance, and there is transparency about it," Arturo Béjar, a former engineering director at Meta, told The Associated Press.
Béjar, who has testified to Congress about his experience working at Meta, said the "tools announced can protect senders, and that is welcome. But what about recipients?" Béjar told the AP he had gathered evidence that 1 in 8 teens receive an unwanted advance on Instagram every week, asking, "what can they do if they get an unwanted nude?"
There is still ambiguity over how well these tools will work, especially because this is not the first time Meta has tried to curb online exploitation. The company "has had long-standing policies that ban people from sending unwanted nudes or seeking to coerce others into sharing intimate images," TechCrunch said. But having those policies in place "doesn't stop these problems from occurring and causing misery for scores of teens and young people."
There is also the fact that even with these tools at users' disposal, predatory accounts remain online. Meta is trying to change that, saying in its press release that it is using technology to "help identify where accounts may potentially be engaging in sextortion scams, based on a range of signals that could indicate sextortion behavior." While this sounds like a net positive, it is "not clear what technology Meta is using to do this analysis, nor which signals might denote a potential sextortionist," said TechCrunch. The company is also taking steps such as hiding the "message" button on profiles deemed potentially extortionist, though how effective these measures will be remains to be seen.
Justin Klawans has worked as a staff writer at The Week since 2022. He began his career covering local news before joining Newsweek as a breaking news reporter, where he wrote about politics, national and global affairs, business, crime, sports, film, television and other news. Justin has also freelanced for outlets including Collider and United Press International.