The Twitter Rules apply to everyone who uses Twitter. In the past, we’ve created our rules with a rigorous policy development process; it involves in-depth research and partnership with the members of our Trust and Safety Council and other experts to ensure these policies best serve every person on the service. Now, we’re trying something new by asking everyone for feedback on a policy before it’s part of the Twitter Rules.
For the last three months, we have been developing a new policy to address dehumanizing language on Twitter. Language that makes someone less than human can have repercussions off the service, including normalizing serious violence. Some of this content falls within our hateful conduct policy (which prohibits the promotion of violence against or direct attacks or threats against other people on the basis of race, ethnicity, national origin, sexual orientation, gender, gender identity, religious affiliation, age, disability, or serious disease), but there are still Tweets many people consider to be abusive, even when they do not break our rules. Better addressing this gap is part of our work to serve a healthy public conversation.
With this change, we want to expand our hateful conduct policy to include content that dehumanizes others based on their membership in an identifiable group, even when the material does not include a direct target. Many scholars have examined the relationship between dehumanization and violence. For example, Susan Benesch has described dehumanizing language as a hallmark of dangerous speech, because it can make violence seem acceptable, and Herbert Kelman has posited that dehumanization can reduce the strength of restraining forces against violence.
We want your feedback to ensure we consider global perspectives and how this policy may impact different communities and cultures. For languages not represented here, our policy team is working closely with local non-governmental organizations and policy makers to ensure their perspectives are captured.
Below you’ll find a quick survey, which will be available until Tuesday, October 9, at 6:00am PST. Once the feedback form has closed, we will continue with our regular process, which passes through a cross-functional working group, including members of our policy development, user research, engineering, and enforcement teams. We will share some of what we learn when we update the Twitter Rules later this year.
This is part of our singular effort to increase the health of the public conversation on our service, and we hope this gives you a better understanding of how new rules are created. We want you to be part of this process, so let us know what you think in the form below.
Twitter’s Dehumanization Policy
You may not dehumanize anyone based on membership in an identifiable group, as this speech can lead to offline harm.
Dehumanization: Language that treats others as less than human. Dehumanization can occur when others are denied human qualities (animalistic dehumanization) or when others are denied their human nature (mechanistic dehumanization). Examples can include comparing groups to animals and viruses (animalistic), or reducing groups to a tool for some other purpose (mechanistic).
Identifiable group: Any group of people that can be distinguished by their shared characteristics such as their race, ethnicity, national origin, sexual orientation, gender, gender identity, religious affiliation, age, disability, serious disease, occupation, political beliefs, location, or social practices.
Examples of dehumanization: