Twitter is testing a new option that would help users avoid tweet pile-ons by automatically blocking accounts engaging in such behavior, via a new ‘Safety Mode’ setting.
It’s essentially auto-block at scale, based on Twitter’s system assessment. When activated, the new Safety Mode would temporarily block accounts that use potentially harmful language, and/or send repetitive and uninvited replies or @mentions your way, in order to help you avoid any negative impacts.
So if you’re getting a flood of replies to a controversial tweet (intended or not), you’ll soon be able to switch on Safety Mode, and Twitter’s systems will then shield you from those mentions. And with the Twitter rage cycle tending to last only hours at a time, things will likely blow over within a day or so if you do slip up, with the option providing a means to alleviate some of the psychological stress that can come with on-platform abuse.
As explained by Twitter:
“When the feature is turned on in your Settings, our systems will assess the likelihood of a negative engagement by considering both the Tweet’s content and the relationship between the Tweet author and replier. Our technology takes existing relationships into account, so accounts you follow or frequently interact with will not be autoblocked.”
Safety Mode auto-blocks will remain in place for seven days, though users will be able to review any blocked accounts and tweets through their settings at any time.
Twitter’s been developing the option for some time, with the platform sharing an initial preview of the feature back in February, as part of its Analyst Day presentation.
In that initial demo, Twitter highlighted how the process would also alert users when their tweets were attracting negative attention, enabling them to take action to avoid the responses if they so chose. That additional measure isn’t part of today’s announcement, but it’s another option Twitter’s exploring as it seeks more ways to protect users from on-platform abuse.
The question then is whether this will be a tool used for safety, or for avoidance.
As many Twitter users have pointed out, the feature could be used by those sharing controversial opinions to avoid accountability – though the line between vigilante pile-ons and legitimate pushback is very unclear in this respect.
Really, users should have the option to back out of any discussion they no longer feel comfortable being part of, and they’ll be judged for that choice in itself, whether they block or simply stop engaging.
But it could also provide a means for provocateurs to stir up trouble, then quickly retreat, avoiding any responsibility for their actions.
Of course, there’s no way of knowing how the option will actually be used until it’s in action, and as noted, there may well be significant psychological benefit in being able to shut down a discussion if it gets to be too much at any given time.
I know I’ve unwittingly waded into controversy and been subject to a tweet pile-on before, and it can be stressful when your intentions are misconstrued and you’re subjected to random attacks for the wrong reasons.
But then again, a pile-on can also be a learning opportunity – and avoiding it could lessen that value as well, which is another consideration in the process.
Either way, Twitter has developed the option on advice from mental health and safety experts, and it may well prove to be a valuable and effective addition, if it does get a full launch in future.
Twitter is now testing Safety Mode with a small group of people on iOS, Android, and Twitter.com, with expanded beta testing planned in the next few months.