Product

Introducing Safety Mode

Wednesday, 1 September 2021

Feeling safe on Twitter looks different for everyone. We’ve rolled out features and settings that may help you feel more comfortable and in control of your experience, and we want to do more to reduce the burden on people dealing with unwelcome interactions.

Unwelcome Tweets and noise can get in the way of conversations on Twitter, so we’re introducing Safety Mode, a new feature that aims to reduce disruptive interactions. Starting today, we’re rolling out this safety feature to a small feedback group on iOS, Android, and Twitter.com, beginning with accounts that have English-language settings enabled.

Here’s how it works

Safety Mode is a feature that temporarily blocks accounts for seven days for using potentially harmful language — such as insults or hateful remarks — or sending repetitive and uninvited replies or mentions. When the feature is turned on in your Settings, our systems will assess the likelihood of a negative engagement by considering both the Tweet’s content and the relationship between the Tweet author and replier. Our technology takes existing relationships into account, so accounts you follow or frequently interact with will not be autoblocked.
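The decision logic described above can be sketched in code. This is a minimal illustration, not Twitter’s actual implementation: the class, field names, and threshold are all hypothetical, standing in for the content signal and relationship signals the post mentions.

```python
from dataclasses import dataclass

AUTOBLOCK_DAYS = 7  # Safety Mode autoblocks last seven days


@dataclass
class ReplyContext:
    """Hypothetical signals for one reply or mention."""
    harm_score: float           # estimated likelihood of harmful language (0..1)
    is_followed: bool           # the recipient follows the reply's author
    frequent_interaction: bool  # the recipient often engages with this author


def should_autoblock(ctx: ReplyContext, threshold: float = 0.8) -> bool:
    # Existing relationships are exempt: accounts you follow or
    # frequently interact with are never autoblocked.
    if ctx.is_followed or ctx.frequent_interaction:
        return False
    # Otherwise, autoblock when the content signal is high enough.
    return ctx.harm_score >= threshold
```

In this sketch, a high-scoring reply from a stranger triggers a temporary block, while the same content from a followed account does not, mirroring how the post says relationships override the content signal.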


Authors of Tweets found by our technology to be harmful or uninvited will be autoblocked, meaning they’ll temporarily be unable to follow your account, see your Tweets, or send you Direct Messages.

You can find information about the Tweets flagged through Safety Mode and view the details of temporarily blocked accounts at any time. Before each Safety Mode period ends, you’ll receive a notification recapping this information. We won’t always get this right, so Safety Mode autoblocks can be reviewed and undone at any time in your Settings. We’ll also regularly monitor the accuracy of our Safety Mode systems to improve our detection capabilities.

How we got here

We want you to enjoy healthy conversations, so this test is one way we're limiting overwhelming and unwelcome interactions that can interrupt those conversations. Our goal is to better protect the individual on the receiving end of Tweets by reducing the prevalence and visibility of harmful remarks.

Throughout the product development process, we conducted several listening and feedback sessions for trusted partners with expertise in online safety, mental health, and human rights, including members of our Trust and Safety Council. Their feedback influenced adjustments to make Safety Mode easier to use and helped us think through ways to address the potential manipulation of our technology. These trusted partners also played an important role in nominating Twitter account owners to join the feedback group, prioritizing people from marginalized communities and female journalists.


“As members of the Trust & Safety Council, we provided feedback on Safety Mode to ensure it entails mitigations that protect counter-speech while also addressing online harassment towards women and journalists. Safety Mode is another step in the right direction towards making Twitter a safe place to participate in the public conversation without fear of abuse.”

ARTICLE 19

a human rights organization that champions digital rights and equality

@article19org

We also committed to the World Wide Web Foundation’s framework to end online gender-based violence and participated in a series of discussions to explore new ways for women to customize their experience with safety online using features like Safety Mode.

What’s next 

We’ll observe how Safety Mode is working and incorporate improvements and adjustments before bringing it to everyone on Twitter. Stay tuned for more updates as we continue to build on our work to empower people with the tools they need to feel more comfortable participating in the public conversation.
