We create our rules to keep people safe on Twitter, and they continuously evolve to reflect the realities of the world we operate within. Our primary focus is on addressing the risks of offline harm, and research* shows that dehumanising language increases that risk. As a result, after months of conversations and feedback from the public, external experts and our own teams, we’re expanding our rules against hateful conduct to include language that dehumanises others on the basis of religion.
If reported, Tweets sent before today that break this rule will need to be deleted, but they will not directly result in any account suspensions, because they were sent before the rule was set.
Why start with religious groups?
Last year, we asked for feedback to ensure we considered a wide range of perspectives and to hear directly from the different communities and cultures who use Twitter around the globe. In two weeks, we received more than 8,000 responses from people located in more than 30 countries.
Some of the most consistent feedback we received included:
Through this feedback, and our discussions with outside experts, we also confirmed that there are additional factors we need to better understand and address before we expand this rule to cover language directed at other protected groups, including:
We’ll continue to build Twitter for the global community it serves and ensure your voices help shape our rules, product, and how we work. As we look to expand the scope of this change, we’ll update you on what we learn and how we address it within our rules. We’ll also continue to provide regular updates on all of the other work we’re doing to make Twitter a safer place for everyone @TwitterSafety.
*Examples of research on the link between dehumanising language and offline harm: