We’re continually striving to be more proactive and open in the work we do to serve the public conversation on Twitter. Part of that effort is our biannual Twitter Transparency Report, which we’ve produced since July 2012 to share global trends across a number of areas of our enforcement on Twitter, including the Twitter Rules and legal requests we receive.
The report is ever-evolving. For the first time, we’re incorporating data and insights regarding impersonation policy enforcement, as well as state-backed information operations datasets that were previously released to the public to empower research and awareness of these campaigns.
Since the last Twitter Transparency Report, we’ve continued to invest in proactive technology that positively and directly impacts conversations on the service.
Here are key highlights from that work, which relate to the latest reporting period (January 1 to June 30, 2019)*:
*All figures compared to the last reporting period.
Twitter Rules enforcement
Our continued investment in proprietary technology is steadily reducing the burden on people to report to us. For example, more than 50% of Tweets we take action on for abuse are now being surfaced using technology. This compares to just 20% a year ago.
Additionally, due to a combination of our increased focus on proactively surfacing potentially violative content for human review and the inclusion of impersonation data for the first time, we saw a 105% increase in accounts locked or suspended for violating the Twitter Rules.
Specific policy content areas:
We also continue to focus on deterring potentially spammy accounts at the time of account creation, often before their first Tweet. However, our enforcement actions tend to fluctuate for a variety of reasons, often due to the type of spam we’re combating.
This reporting period, our anti-spam challenges — where we ask people to provide a phone number or email address, or to solve a reCAPTCHA, to verify there is a human behind an account — fell by nearly 50%.
Removal of terrorist content
A total of 115,861 accounts were suspended for violations related to the promotion of terrorism this reporting period, down 30% from the previous reporting period. This continues a year-on-year decrease in the number of accounts promoting terrorist content on Twitter as we take more comprehensive enforcement action using our technology and strengthen partnerships with our peers. Of those suspensions, 87% consisted of accounts we proactively flagged using internal, proprietary removal tools.
Removal of child sexual exploitation content
During this reporting period, we suspended a total of 244,188 accounts for violations related to child sexual exploitation. Of the unique accounts suspended, 91% were surfaced by a combination of technology solutions (including PhotoDNA and internal, proprietary tools).
In addition to enforcing the Twitter Rules, we also may take action in response to legal requests.
DMCA takedown notices increased 101% since our last report; however, many were incomplete or not actionable. We continue to see a high volume of fraudulent DMCA reports from Turkey and Japan, while fraudulent reports from Brazil also continue to increase.
We saw a 39% increase in the total number of trademark notices received since our last report. The increase is likely due to an influx of reports that failed to provide sufficient information for us to take action.
Information requests from the United States continue to make up the highest percentage of legal requests for account information. During this reporting period, 29% of all global requests for account information originated within the United States.
At this time, we are only able to share the number of National Security Letters (NSLs) we have received that are no longer subject to non-disclosure orders. We believe publishing these actual numbers is far more meaningful than reporting within the bands authorized under the USA Freedom Act. Our litigation in the case Twitter v. Barr continues.
During this reporting period, we notified people affected by three additional NSLs after the gag orders were lifted. As reflected in the report, non-disclosure orders for 17 total NSLs have been lifted to date. Twitter is committed to continuing to use the legal mechanisms available to us to request judicial review of these gag orders.
Compared to the previous reporting period, we received roughly 67% more legal requests to remove content, originating from 49 different countries. Of the requests received, 80% of the volume originated from Japan, Russia, and Turkey. We withheld content in a country 2,457 times, at either an account or Tweet level.
*We continue to publish these legal requests when we take action directly to the Lumen Database, a partnership with Harvard’s Berkman Klein Center for Internet & Society.
We remain deeply committed to transparency at Twitter — it continues to be one of our key guiding principles. This commitment is reflected in the evolution and expansion of the report in recent years: It now includes dedicated sections on platform manipulation and spam, our Twitter Rules enforcement, and state-backed information operations we’ve previously removed from the service.
This report reflects not only the evolution of the public conversation on our service but the work we do every day to protect and support the people who use Twitter. Follow @Policy and @TwitterSafety for relevant updates, initiatives or announcements regarding our efforts.