Twitter is promising to help users "limit unwelcome interactions" with a new feature.
The social media platform, which has long faced calls to take further steps to crack down on harassment and abuse, on Wednesday announced it's testing "Safety Mode." This feature, when turned on, will automatically block accounts for seven days if they use "potentially harmful language" or send "repetitive and uninvited replies or mentions," Twitter said.
A video announcement shared by Twitter said the feature would help users "autoblock spammy or abusive replies," and Twitter Senior Product Manager Jarrod Doherty described it as part of an effort to "better protect the individual on the receiving end of Tweets by reducing the prevalence and visibility of harmful remarks." According to Twitter, the feature will recognize "existing relationships" so as not to block accounts that users follow or frequently interact with.
The feature comes after Twitter said in July that it was testing what was effectively a dislike button for replies; the company had also previously said it would look at new features to fight abuse and let users "control unwanted attention." Twitter's "Safety Mode" is currently available only to a "small feedback group," and it wasn't clear when it might roll out to more users.