We’re always exploring and testing new ways to address potentially misleading information on Twitter. As we scale our work in this space, we’re committed to drawing on feedback from the Twitter community to help us further understand the conversation and challenges around misinformation.
In August 2021, we began testing a new reporting feature for potentially misleading information in the US, South Korea, and Australia. We launched the experiment to examine whether the reporting feature is an effective tool for the Twitter community to report misinformation in real time. Today, we’re expanding the test of this reporting feature to Brazil, the Philippines, and Spain.
We selected these countries because we want to learn from a small, geographically diverse set of regions — including those where English is not the primary language — before scaling globally. Additionally, alongside our long-standing policies and reporting options during civic events, the upcoming elections in Brazil and the Philippines, and the midterm elections in the US, will help us further evaluate how this reporting feature is used during civic events.
The vast majority of the content we take action on under our COVID-19 misinformation, civic integrity, and synthetic and manipulated media policies is identified proactively. Over 50% of violative content is surfaced by our automated systems, and the majority of the remaining content is surfaced through regular monitoring by our internal teams and our work with trusted partners. We want to understand if and how public reporting options can improve the speed and breadth of our efforts to identify potentially harmful misinformation. Since launching this test, we’ve received 3.73M reports of 1.95M distinct Tweets authored by 64K distinct accounts. We’ve used these reports in two ways:
As we continue to expand the experiment, we may not be able to take action on, or respond to, every report.
What we’ve learned
We’ve found that reports represent a useful, but noisy, source of information about potential violations of our rules. Of the sample of Tweets reviewed by our teams, less than 10% were violative. This compares with an average violation rate of 20% to 30% for safety and abuse cases. A key driver of this low violation rate is a high volume of “off-topic” reports.
Reports have additional benefits beyond surfacing violative content. These reporting options helped people feel more empowered. Our research also showed that people prefer using the reporting flow over interacting with a misleading Tweet through a Quote Tweet or a reply.
These findings lead us to two conclusions:
We hope this reporting feature will help our teams better understand emerging narratives and misinformation trends at scale, ultimately advancing our ability to detect misleading content on Twitter in real time. We’ll continue to use the data from this test to inform how we use misinformation reports and roll out this feature globally throughout 2022.