Transparency is a key guiding principle in our mission to serve the public conversation. For the past seven years, our biannual Twitter Transparency Report has highlighted trends in requests made to Twitter from around the globe. Over time, we have significantly expanded the information we disclose, adding metrics on platform manipulation, Twitter Rules enforcement, and our proactive efforts to eradicate terrorist content, violent extremism, and child sexual exploitation from our service.
We strongly believe in providing data and information that is straightforward and offers insight into the types of requests we receive from governments and others around the world. This is why we also upload legal requests directly into the Lumen database, through a long-term partnership with the Berkman Klein Center at Harvard. We encourage the public to explore this site to get a sense of the day-to-day queries we receive from governments and other entities around the world.
Freedom of expression is the cornerstone of why we exist, and we hope that the larger and more granular datasets published in our Transparency Report provide you with information that can help you understand the political and social contours of how nation-states and their institutions interact with our company. We believe it is vital that the public see the demands we receive, and how we work to strike a balance between respecting local law, letting the Tweets flow, and protecting people from harm. We will continue to update our report with new data and evolve our commitment in this space, particularly around the Twitter Rules and our own enforcement.
The data points below highlight some of the most important and interesting trends we observed during this reporting period (July-December 2018).
Global information requests (legal requests for account information)
Global removal requests (legal requests for content removal)
Removal of terrorist content
During this reporting period, a total of 166,513 accounts were suspended for violations related to the promotion of terrorism, a 19% reduction from the volume shared in the previous reporting period. Of those suspensions, 91% consisted of accounts flagged by internal, purpose-built technological tools. Year over year, we are observing a steady decrease in terrorist organizations attempting to use our service. This is due to our zero-tolerance policy enforcement, which has allowed us to take swift action on ban evaders and other identified behaviors used by terrorist entities and their affiliates. In the majority of cases, we take action at the account setup stage, before the account even Tweets. We are encouraged by these metrics but will remain vigilant. Our goal is to stay one step ahead of emergent behaviors and new attempts to circumvent our robust approach.
Removal of child sexual exploitation
During this reporting period, we suspended a total of 456,989 unique accounts for violations related to child sexual exploitation, down 6% from the volume disclosed in the previous reporting period. Of those unique accounts suspended, 96% were surfaced by a combination of technology solutions, including PhotoDNA and internal proprietary tools. As is standard and required by law, we continue to report to the National Center for Missing and Exploited Children (NCMEC). Alongside our other safety partners worldwide, NCMEC continues to play a critical role, and we deeply value and appreciate the partnership.
Proactive challenges of accounts for spammy behavior and platform manipulation decreased by 17% in the second half of 2018 versus the first half, totaling 194 million challenges. Approximately 75% of challenged accounts were subsequently removed automatically after failing our account challenge process. This decline is due to a range of factors, including our increased emphasis on detecting malicious activity at signup (stopping bad actors before they ever reach the stage of Tweeting) and positive external trends affecting the volume of this activity targeting Twitter. Aggregate reports of these types of behavior also decreased in the second half of 2018, suggesting that people continue to experience fewer spammy interactions on Twitter.
National security requests
As in past reports, Twitter is able to publish only very limited information about national security requests due to legal prohibitions that we continue to challenge in court (see here for a full update on Twitter v. Barr, our ongoing transparency litigation). At this time, we are able to share the number of National Security Letters (NSLs) we have received that are no longer subject to non-disclosure orders. We believe publishing these actual numbers is far more meaningful than reporting within the bands authorized by the USA Freedom Act.
During this reporting period, we notified the users affected by two additional NSLs after the associated gag orders were lifted. As reflected in the report, non-disclosure orders for 14 NSLs in total have been lifted to date. Twitter is committed to continuing to use the legal mechanisms available to us to request judicial review of these gag orders. More broadly, we are also committed to arguing that indefinite non-disclosure orders are unconstitutional in both the criminal and national security contexts. We view each request for judicial review as an opportunity to strengthen the legal precedent protecting our First Amendment rights.
Twitter Rules enforcement
Across the six Twitter Rules policy categories included in this report, 6,388 accounts were reported by known government entities, compared to 5,461 reported during the last reporting period, an increase of 17%. We have a global team that manages enforcement of our Rules with 24/7 coverage, in every supported language on the service. It is worth noting that the raw number of reported accounts is not a consistent indicator of the validity of the reports we receive. During our review process, we may consider whether reported content violates aspects of the Twitter Rules beyond what was initially reported; for example, content reported as a violation of our private information policy may also violate our hateful conduct policy. If the content is determined to violate any Twitter Rule, it is actioned accordingly. Not all reported accounts are found to violate the Twitter Rules, and some are found to violate a different rule than the one initially reported. These volumes often fluctuate significantly based on world events, including elections, national and international media stories, and large conversational moments in social and political culture.
We produce the Twitter Transparency Report to inform the public about the actions we take and the requests we receive from governments around the world. As the public discussion about regulation increases, we believe transparency is an essential part of ensuring you — the public — are able to see how these laws operate. Transparency is not just the responsibility of tech companies. Governments and regulators should be transparent about their own actions, enabling people to know if content has been removed because of a decision Twitter made, or because of a government request. This transparency is essential if we are to foster an informed debate and mitigate the risk of inappropriate use of state power.
*A note about our previous report: We regularly review, refine, and iterate upon the data we collect and evaluate for the Twitter Transparency Report. Since we published our last report, we have moved to a different data source for tracking the number of spam and platform manipulation reports we receive from people who use Twitter. In our last report (covering January-June 2018), we shared that we received 4,020,893 spam reports. Based on our updated data source, we now believe that 3,606,533 reports is the accurate number. Today we reflect this correction as a revision to our previous report's data.