Yesterday, we published a calendar of our safety work, and many people had questions about how we determine timelines and when we plan to introduce changes. We write one set of rules for the hundreds of millions of people who use Twitter and the hundreds of millions of Tweets sent every day. Given that scale, we’ve built a process that gives us time to understand what these changes will mean for everyone.
Making a policy change requires in-depth research into trends in online behavior, developing language that sets clear expectations about what’s allowed, and writing reviewer guidelines that can be enforced across millions of Tweets. Once a policy is drafted, we gather feedback from our internal teams and our Trust & Safety Council. We seek input from around the world so we can consider diverse, global perspectives on the changing nature of online speech, including how our rules are applied and interpreted in different cultural and social contexts. We then test the proposed rule against samples of potentially abusive Tweets to measure its effectiveness, and once we determine it meets our expectations, we build and operationalize product changes to support the update. Finally, we train our global review teams, update the Twitter Rules, and begin enforcing the new policy.
We hope this helps explain our process and why it’s important to be thoughtful and deliberate about changes to our safety policies.