
Insights from the 17th Twitter Transparency Report

Monday, 11 January 2021

Meaningful transparency between companies, regulators, civil society, and the general public is fundamental to the work we do at Twitter — it is a key tenet of our efforts to preserve and protect the Open Internet. In line with this philosophy, in August we launched the new Twitter Transparency Center to make the data in our biannual Twitter Transparency Report easier to understand and analyze.

Our latest Twitter Transparency Report includes data from January 1, 2020, through June 30, 2020.

COVID-19

The COVID-19 pandemic severely impacted business operations for all of us around the world. Changes in workflows, coupled with country-specific COVID-19 restrictions, caused significant and unpredictable disruption to our content moderation work and to the way our teams assess content and enforce our policies — a disruption that is reflected in some of the data presented today. We increased our use of machine learning and automation to take a wide range of actions on potentially abusive and misleading content, while continuing to focus human review on the areas where the likelihood of harm was greatest.

In March, we launched a COVID-19 misleading information policy to further protect the health of the public conversation. During this reporting period, our teams took enforcement action against 4,658 accounts for violations of this policy. As we’ve further invested in technology, our automated systems challenged 4.5 million accounts that were targeting discussions around COVID-19 with spammy or manipulative behaviors. 

Our work on information operations

Twitter discloses state-backed actors’ attempts to disrupt the conversation on the service. During this reporting period, we took action on more than 52,000 accounts that we reliably attributed to information operations originating within China, Russia, Turkey, Serbia, Honduras, Egypt, Indonesia, Ghana, and Nigeria, as well as an actor affiliated with the Kingdom of Saudi Arabia (KSA).

Platform manipulation

We continued our zero-tolerance approach to platform manipulation and any other attempts to undermine the integrity of our service. During this latest reporting period, our teams saw a 54% increase in anti-spam challenges — an increase that is due in part to the proactive measures we put in place to protect the conversation around COVID-19. We also saw a 16% increase in the number of spam reports, compared to the last reporting period. 

Terrorism & violent extremism

There was a 5% increase in the number of accounts removed for violations of our terrorism and violent extremism policies during this reporting period — 94% of those accounts were proactively identified. Our current methods of surfacing potentially violating content for human review include leveraging the shared industry hash database supported by the Global Internet Forum to Counter Terrorism (GIFCT).
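For context on how a shared hash database supports proactive detection: known violating media is fingerprinted, and new uploads whose fingerprints appear in the shared list are surfaced for human review. The short Python sketch below illustrates only that matching step. It is a hypothetical simplification, not Twitter's actual pipeline; the function names are invented, the seeded digest is simply the SHA-256 of the bytes "test", and production systems such as GIFCT's rely on perceptual hashes (for example, PDQ) so that re-encoded or slightly altered copies still match.

import hashlib

# Hypothetical stand-in for a locally synced copy of a shared hash list.
# SHA-256 is used only to keep this sketch self-contained; real matching
# uses perceptual hashes so that near-duplicates are caught as well.
SHARED_HASHES = {
    # SHA-256 digest of b"test", seeded purely for demonstration.
    "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
}

def hash_media(media_bytes: bytes) -> str:
    """Compute a digest of the raw media bytes."""
    return hashlib.sha256(media_bytes).hexdigest()

def should_surface_for_review(media_bytes: bytes) -> bool:
    """Flag media whose digest appears in the shared hash list."""
    return hash_media(media_bytes) in SHARED_HASHES

print(should_surface_for_review(b"test"))   # True: digest is in the list
print(should_surface_for_review(b"other"))  # False: unknown media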

Child sexual exploitation

We do not tolerate child sexual exploitation (CSE) on Twitter. CSE material is removed from the service without further notice and reported to the National Center for Missing & Exploited Children (NCMEC). As we have expanded our teams and increased operational capacity in this area, we saw a 68% increase in enforcement under our Child Sexual Exploitation Policy.

Copyright and trademark

Under our Copyright Policy, we received 15% more Digital Millennium Copyright Act (DMCA) takedown notices, affecting 87% more accounts, during this reporting period. Under our Trademark Policy, our compliance with trademark notices decreased by 30% over the same period.

Twitter Rules enforcement

Targeted harassment of someone, or inciting other people to do so, is against the Twitter Rules. There was a 34% decrease in the number of accounts actioned for violations of our abuse policy.

As elections took place around the world during this reporting period, we saw a steady rise in enforcement under our Civic Integrity Policy: the number of accounts actioned for violations of this policy increased by 37%.

Over the six-month reporting period, and amid the COVID-19 disruptions to our workflows, we saw a 35% decrease in the number of accounts actioned under our Hateful Conduct Policy. In March 2020, we expanded our Hateful Conduct Policy to cover new facets of our dehumanization guidance, specifically prohibiting language that dehumanizes people on the basis of age, disability, or disease.

We do not permit anyone to promote or encourage suicide or self-harm, or to persuade another individual to engage in either. There was a 49% decrease in the number of accounts actioned for violations of our suicide or self-harm policy.

We have clear rules around the sharing of private information on our service. During this reporting period, we continued to see an upward trend in our enforcement under this policy — up by 68%. This increase was due to our proactive efforts in this area. 

Enforcement under our non-consensual nudity (NCN) policy decreased by 58%. We'll continue working to improve our processes and models so that we can be as proactive as possible in maintaining a healthy environment for the people on Twitter.

Information requests (legal requests for account information): 

  • Twitter received 12,657 legal requests for account information specifying 25,560 accounts during this period, from 68 different countries.

Removal requests (legal requests for content removal)*:

  • Twitter received 42,220 legal demands to remove content specifying 85,375 accounts during this period, from 53 different countries.
  • 96% of the total global volume of requests originated from five countries: Japan, Russia, South Korea, Turkey, and India. 
  • These requests impacted approximately 13% fewer accounts compared to the previous reporting period. 
  • We received 19% more reports based on local law(s) from trusted reporters and non-governmental organizations, impacting approximately 7% more accounts, compared to the previous reporting period. 

What’s next? 

As noted throughout the report, the COVID-19 pandemic significantly disrupted our content moderation work during this time — a disruption that is reflected in much of the data presented today. Our enforcement teams have adjusted their approach in the context of the pandemic and are continuing to increase their capacity in order to return to the strong levels of enforcement expected of us before COVID-19.

There will always be more work to do in this space, and we’ll continue to provide biannual Twitter Transparency Reports that offer more clarity into our operations and our work to protect the public conversation.

We also recognize the importance of measuring the prevalence of certain content on Twitter, and we have begun a multi-year initiative to enable us to provide more consistent transparency on these issues. We look forward to sharing more details in due course.

Follow @Policy and @TwitterSafety for updates on our policies and our work on transparency throughout the year. 

*Unless prohibited from doing so, we continue to publish these legal requests directly to the Lumen Database, a partnership with Harvard’s Berkman Klein Center for Internet & Society, when we take action on them.
