
Tweeting with consideration

Wednesday, 5 May 2021

People come to Twitter to talk about what's happening, and conversations about the things we care about can get intense. Sometimes people say things in the moment that they might regret later. That's why in 2020, we tested prompts that encouraged people to pause and reconsider a potentially harmful or offensive reply before they hit send.

Based on feedback and learnings from those tests, we've made improvements to the systems that decide when and how these reminders are sent. Today, we're rolling the improved prompts out across iOS and Android, beginning with accounts that have enabled English-language settings.


How we got here

We began testing prompts last year that encouraged people to pause and reconsider a potentially harmful or offensive reply — such as insults, strong language, or hateful remarks — before Tweeting it. Once prompted, people had an opportunity to take a moment and edit the reply, delete it, or send it as is.

In early tests, people were sometimes prompted unnecessarily because the algorithms powering the prompts struggled to capture the nuance in many conversations and often didn't differentiate between potentially offensive language, sarcasm, and friendly banter. Throughout the experiment process, we analyzed results, collected feedback from the public, and worked to address our errors, including detection inconsistencies.

These tests ultimately resulted in people sending fewer potentially offensive replies across the service, and improved behavior on Twitter. We learned that:

  • If prompted, 34% of people revised their initial reply or decided to not send their reply at all.
  • After being prompted once, people composed, on average, 11% fewer offensive replies in the future.
  • If prompted, people were less likely to receive offensive and harmful replies back.

Since the early tests, here’s what we’ve incorporated into the systems that decide when and how to send these reminders:

  • Consideration of the nature of the relationship between the author and replier, including how often they interact. For example, if two accounts follow and reply to each other often, there’s a higher likelihood that they have a better understanding of preferred tone of communication.
  • Adjustments to our technology to better account for situations in which language may be reclaimed by underrepresented communities and used in non-harmful ways.
  • Improvements to our technology to more accurately detect strong language, including profanity.
  • An easier way for people to let us know whether they found the prompt helpful or relevant.

What’s next

We’ll continue to explore how prompts — such as reply prompts and article prompts — and other forms of intervention can encourage healthier conversations on Twitter. Our teams will also collect feedback from people on Twitter who have received reply prompts as we expand this feature to other languages. Stay tuned for more updates as we continue to learn and make new improvements to encourage more meaningful conversations on Twitter. 
