Company

Introducing the new Twitter Transparency Center

Wednesday, 19 August 2020

Transparency is core to the work we do at Twitter.

The open nature of our service has led to unprecedented challenges around protecting freedom of expression and privacy rights as governments around the world increasingly attempt to intervene in this open exchange of information. We believe that transparency is a key principle in our mission to protect the Open Internet and advance it as a global force for good.


The fundamental belief in the power of open, public conversation inspired Twitter to launch one of the industry's first transparency reports back in 2012. At that time, our goal was to provide the public with regular insights into government pressure that impacted the public, whether through overt attempts at political censorship or by way of soliciting account data through information requests.

The world has changed significantly since 2012. In 2020, it is more important than ever that we shine a light on our own practices, including how we enforce the Twitter Rules, our ongoing work to disrupt global state-backed information operations, and the increased attempts by governments to request information about account holders.


Our new Twitter Transparency Center

We have reimagined and rebuilt our biannual Twitter Transparency Report site to become a comprehensive Twitter Transparency Center. Our goal with this evolution is to make our transparency reporting more easily understood and accessible to the general public.


What’s new? 

  • Brand new website that includes all our disclosed data in one place 
  • Data visualizations making it easier to compare trends over time 
  • Country comparison module
  • Tooltips to help explain key terms and provide more insights on the terms we use
  • History of transparency milestones and updates
  • New metrics and methodology on the enforcement of the Twitter Rules (from July 2018 through December 2019)
  • New policy categories to better align with the Twitter Rules

Reports will also soon be published in Arabic, Turkish, Spanish, German, French, Japanese, and Portuguese, and we are continuing to iterate on the process to further contextualize the data.

Our work to increase transparency across the company is tireless and constant. We will continue to build awareness and understanding of how our policies work and of our practices around content moderation, data disclosures, and other critical areas. In addition, we will take every opportunity to highlight the actions of law enforcement, governments, and other organizations that impact Twitter and the people who use our service around the world.


The data

The latest data reflects the period July 1 to December 31, 2019. We endeavor to release this material as soon as possible every six months, but due to a number of factors, including the COVID-19 pandemic and getting the new Twitter Transparency Center up and running, we have faced delays. The next update to the data will cover the period of January - June 2020.

Our work on information operations:

Our archive of state-backed information operations is updated on a rolling basis after we identify and remove them from Twitter. We have also increased our cadence of disclosures, recently sharing our largest disclosure to date with 32,242 accounts added to the archive.

This archive, used by researchers, journalists and experts around the world, now spans more than 9 terabytes of media, includes over 83,000 accounts and over 200 million Tweets, and is an industry-first resource. We’ve now released datasets of information operations originating in more than 15 countries, offering researchers unique insight into how information operations unfold on the service.

We’re also expanding how we work with partners in the research community to improve understanding of information operations and disinformation. Earlier this year we strengthened our partnership with two leading research institutions — the Australian Strategic Policy Institute (ASPI) and the Stanford Internet Observatory — to enable their analysis and review of data related to our disclosures. 

We also hosted our first ever #InfoOps2020 conference in partnership with Carnegie's Partnership for Countering Influence Operations. The event brought together academic experts, industry, and government to discuss opportunities for collaboration and research on IO and support an open exchange of ideas between Twitter and the research community.


Platform Manipulation:

Our blog post from earlier this year gives a thorough explanation of our proactive work to counter platform manipulation across the service and of the common misconceptions around so-called ‘bots’ on Twitter. Our policies in this area focus on behavior, not content, and are written to target the spammy tactics different people or groups could use to try to manipulate the public conversation on Twitter.

Continuing a year-on-year trend, our proactive detection of this behavior has resulted in an almost 10% reduction in anti-spam challenges, such as when we ask people to provide a phone number or email address, or to complete a reCAPTCHA, to verify there is a human behind an account.

Terrorism & violent extremism:

The Twitter Rules prohibit the promotion of terrorism and violent extremism. Action was taken on 86,799 unique accounts under this policy during this reporting period. 74% of the unique accounts were proactively suspended using our internal, proprietary tools. We continue our close partnership with our peers as part of the Christchurch Call to Action and are committed to eradicating the presence of violent extremist content across our respective services. 

Child sexual exploitation:

We do not tolerate child sexual exploitation on Twitter. Child sexual exploitation (CSE) material, including links to images of or content promoting child exploitation, is removed from the site without further notice and reported to the National Center for Missing & Exploited Children (NCMEC). People can report content that appears to violate the Twitter Rules regarding child sexual exploitation via our web form, and we also investigate reports submitted through various in-app reporting flows. There were 257,768 unique accounts suspended during this reporting period for violating Twitter policies prohibiting child sexual exploitation; 84% of those unique accounts were proactively suspended using a combination of technologies (including PhotoDNA and internal, proprietary tools).

Twitter Rules enforcement:

For the first time, we are expanding the scope of this section to better align with the Twitter Rules, and sharing more granular data on violated policies. This is in line with best practices under the Santa Clara Principles on Transparency and Accountability in Content Moderation. 

Due to our increased focus on proactively surfacing violative content for human review, more granular policies, better reporting tools, and also the introduction of more data across twelve distinct policy areas, we have seen a 47% increase in accounts locked or suspended for violating the Twitter Rules. This work will never be stagnant and these stats should fluctuate as we improve and the challenge evolves. The increase is also reflective of a trend we’ve observed across our recent Twitter Transparency Reports, as we step up the level of proactive enforcement across the service and invest in technological solutions to respond to the changing characteristics of bad-faith activity on our service.

  • Abuse/harassment
    As we tightened our rules and increased our use of technology and human review working in concert, there was a 95% increase in the number of accounts actioned for violations of our abuse policy during this reporting period, the largest increase in the number of accounts actioned under these policies to date.
  • Hateful conduct
    Hateful conduct expanded to include a new dehumanization policy on July 9, 2019. There was a 54% increase in the number of accounts actioned for violations of our hateful conduct policy during this reporting period.
  • Sensitive media, including graphic violence and adult content
    There was a 39% increase in the number of accounts actioned for violations of our sensitive media policy during this reporting period.
  • Promoting suicide & self-harm
    We do not permit anyone to promote or encourage suicide or self-harm, or to persuade another individual to engage in such acts. There was a 29% increase in the number of accounts actioned for this type of behavior. This is the first time we have disclosed this data, as we reprioritized how this type of egregious content can be reported to Twitter expeditiously.
  • Illegal or certain regulated goods or services
    A new addition to our data disclosures, there were 60,807 unique accounts actioned for violations of our illegal or certain regulated goods or services policy during this reporting period. 
  • Private information
    Sharing an individual’s private information — or so-called doxxing — without their express consent is a violation of the Twitter Rules. Internal tooling improvements allowed us to increase enforcement of this policy, and there was a 41% increase in the number of accounts actioned for violations of our private information policy during this reporting period.
  • Non-consensual nudity
    Due to internal improvements and extensive retraining specific to this enforcement area, this reporting period saw a 109% increase in the number of accounts actioned for violations of our non-consensual nudity policy, the largest increase in the number of accounts actioned under this policy.
  • Violent threats
    During this reporting period, we saw a 5% decrease in the number of accounts actioned for violations of our violence policies (Violent Threats and Glorification of Violence).

Legal requests:

In addition to enforcing the Twitter Rules, we also may take action in response to legal requests.

Information requests (legal requests for account information): 

  • Governments and law enforcement agencies around the world submitted approximately 21% more information requests compared to the previous reporting period. Notably, the aggregate number of accounts specified in these requests increased by nearly 63%. The total volume of requests and of specified accounts are, respectively, the largest we’ve seen since our transparency reporting began in 2012. We have received government information requests from 91 different countries since 2012.
  • Information requests from the United States continue to make up the highest percentage of legal requests for account information. During this reporting period, 26% of all global requests for account information originated within the United States. The second highest volume of requests originated from Japan, comprising 22% of global information requests.
  • Anonymous and pseudonymous speech is important to Twitter and is central to our commitment to defend and protect the voices of the public. We often receive non-government information requests to disclose account information of anonymous or pseudonymous Twitter accounts (i.e., requests to “unmask” the identity of the individual), which we frequently object to. During this reporting period, Twitter objected on First Amendment grounds to 23 US civil requests for account information that sought to unmask the identities of anonymous speakers. We litigated six of these requests: Twitter prevailed in four cases, lost one, and one is still pending. No information was produced in response to the other 17 requests.

Removal requests (legal requests for content removal)*:

  • In this reporting period, Twitter received 27,538 legal demands to remove content specifying 98,595 accounts. This is the largest number of requests and specified accounts that we’ve received since releasing our first Transparency Report in 2012. 
  • This record number of legal demands originated from 51 different countries. 86% of the total global volume of legal demands originated from only three countries: Japan, Russia, and Turkey. 
  • Legal demands from Japan increased by 143% this reporting period, accounting for 45% of global requests received. The 12,496 requests from Japan are primarily related to laws regarding narcotics and psychotropics, obscenity, or money lending.

Copyright & trademark actions: 

  • Copyright violations: We saw a 13% increase in DMCA takedown notices, affecting 163% more accounts, during this reporting period.
  • Trademark notices: We saw a 7% increase in the total number of trademark notices received since our last report.

This report reflects not only the evolution of the public conversation on our service but the work we do every day to protect and support the people who use Twitter. 

Follow @Policy and @TwitterSafety for continued updates on the changes we make across the company to drive meaningful and intuitive transparency. 

*Unless prohibited from doing so, we continue to publish legal requests when we take action directly to the Lumen Database, a partnership with Harvard’s Berkman Klein Center for Internet & Society.
