Transparency drives the work we do at Twitter and underpins our efforts to promote and protect the Open Internet around the world. Over the past year, we’ve experienced and continue to navigate severe global challenges, including the coronavirus pandemic. We’ve also seen concerted attempts by governments to limit access to the Internet generally and to Twitter specifically.
Since 2012, the Twitter Transparency Report has offered a window into our work to enforce the Twitter Rules, protect privacy, navigate increased requests from governments around the world, disrupt state-backed information operations, and more. Now, amid an increasingly complex global landscape, continued transparency around our own efforts to protect the public conversation is paramount.
Our latest Twitter Transparency Center update includes data from July 1 to December 31, 2020. While we continue to share data across consistent, recurring categories, we’re introducing new data that can provide meaningful insights into the impact of our actions. Our new impressions metric captures the number of views violative Tweets received prior to removal. We’re also including more information about the adoption of two-factor authentication, part of our work to keep accounts safe and secure.
As we’ve noted previously, Twitter’s operations were severely impacted by the COVID-19 pandemic during the latter half of 2020, as was the case in the prior reporting period. Varying country-specific COVID-19 restrictions and adjustments within our teams affected the efficiency of our content moderation work and the speed with which we enforced our policies. We increased our use of machine learning and automation to take a wide range of actions on potentially misleading and manipulative content. Like many organizations around the world – both public and private – we felt the disruptions caused by COVID-19, and they are reflected in some of the data shared today.
We’re committed to enabling safe and healthy conversations on the service, and we’re always looking for ways to share more context about our enforcement of the Twitter Rules. The impressions metric is one such step: it captures the number of views a violative Tweet received prior to removal.
In total, impressions on violative Tweets accounted for less than 0.1% of all impressions for all Tweets globally from July 1 through December 31. During this time period, Twitter removed 3.8 million Tweets that violated the Twitter Rules. Of those, 77% received fewer than 100 impressions prior to removal, an additional 17% received between 100 and 1,000 impressions, and only 6% received more than 1,000 impressions.
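As an illustration of how a distribution like this can be computed, the sketch below buckets per-Tweet view counts into the same three ranges reported above (fewer than 100, 100 to 1,000, and more than 1,000 impressions). The function names and sample counts are hypothetical; this is not Twitter’s internal tooling.

```python
from collections import Counter

def impression_bucket(views: int) -> str:
    # Thresholds mirror the ranges reported above: <100, 100-1,000, >1,000.
    if views < 100:
        return "<100"
    if views <= 1000:
        return "100-1,000"
    return ">1,000"

def bucket_shares(view_counts: list[int]) -> dict[str, float]:
    # Share of removed Tweets falling into each impression bucket.
    counts = Counter(impression_bucket(v) for v in view_counts)
    total = len(view_counts)
    return {bucket: n / total for bucket, n in counts.items()}

# Hypothetical view counts for four removed Tweets:
print(bucket_shares([12, 99, 500, 4000]))
# {'<100': 0.5, '100-1,000': 0.25, '>1,000': 0.25}
```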
More broadly, as we work to remove harmful, violative content quickly and at scale – whether amid a global health crisis or during a high-stakes election – these numbers represent both our present efficiency and where improvement is needed. Our goal is to improve these numbers over time, taking enforcement action on violative content before it’s even viewed.
As the COVID-19 pandemic evolves around the world, we continue to take enforcement action on misleading information about COVID-19, particularly that which puts people at risk and could lead to harm.
From July to December 2020, we challenged 10,320,924 accounts. This figure largely reflects a set of proactive anti-spam challenges we rolled out specifically to target platform manipulation in COVID-19 discussions. We also suspended 597 accounts and removed 3,846 pieces of content. From the introduction of our COVID-19 guidance last year to the time of this publication, we have challenged 11.7 million accounts, suspended 1,496 accounts, and removed over 43,010 pieces of content worldwide.
You can learn more about our COVID-19 misinformation policy here.
Global legal requests
Protecting the privacy of people who use Twitter is important to us. Of the government information requests we received during this reporting period, we produced some or all of the requested information in response to 30%; 4,367 requests in total.
Removal Requests (legal demands for content removal)
During this reporting period, Twitter received 38,524 legal demands to remove content specifying 131,933 accounts. We withheld or otherwise removed some or all of the reported content in response to 29% of these global legal demands; 11,091 total.
Our reporting processes are designed to be transparent and to enable real accountability. Where possible, we provide user notice when we receive these requests. Importantly, unless we are prohibited from doing so, when we remove or withhold content in a certain country, Twitter will provide a copy of the request to the publicly available Lumen Database. When content is withheld, it is only withheld in the country making the removal demand and remains visible in all other jurisdictions.
Twitter Rules enforcement
We continue to step up the level of proactive enforcement across the service and invest in technological solutions to respond to ever-evolving malicious online activity. Today, by using technology, 65% of the abusive content we action is surfaced proactively for human review, instead of relying on reports from people using Twitter.
We launched a number of initiatives, like more precise machine learning, to better detect and take action on content that violated this policy. As a result, there was a 142% increase in accounts actioned*, compared to the previous reporting period; 964,459 in total.
We do not tolerate child sexual exploitation (CSE) on Twitter. In the majority of cases, the consequence for violating our CSE policy is immediate and permanent suspension, and violators are prohibited from creating any new accounts in the future. Violative content is removed and reported to the National Center for Missing & Exploited Children (NCMEC).
There was a 6% increase in the number of accounts suspended for violations of our child sexual exploitation policy during this reporting period. In total, 464,804 unique accounts were suspended, 90% of which were proactively identified by employing internal proprietary tools and industry hash sharing initiatives.
We took enforcement action on 27,087 accounts for sharing non-consensual nudity, an increase of 194% from the prior reporting period. From July to December 2020, we saw the largest increase to date in the number of accounts actioned under this policy.
*"Accounts actioned" reflects the number of unique accounts that were suspended or required content removal for violating the Twitter Rules.
There was a 35% decrease in the number of accounts permanently suspended for violations of our terrorism and violent extremism policies – 96% of those accounts were proactively identified using a combination of machine learning and human review. Action was taken on 58,750 unique accounts under this policy during this reporting period.
We continue to utilize and contribute to the shared industry hash database supported by the Global Internet Forum to Counter Terrorism (GIFCT).
Civic integrity policy enforcements increased significantly, by 175%, compared to the previous reporting period, largely driven by Tweets related to the United States 2020 election.
During the United States 2020 election, we enacted a set of policy, enforcement, and product changes to add context, encourage thoughtful consideration, and reduce the potential for misleading information to spread on Twitter. From October 27 to November 11, for example, we labeled approximately 300,000 Tweets that contained disputed and potentially misleading claims.
There was a 77% increase in the number of accounts actioned for violations of our hateful conduct policy during this reporting period – from 635,415 accounts, to 1,126,990 accounts.
In September 2020, we began enforcing our hateful conduct policy against content that incites fear and/or fearful stereotypes about protected categories, as we were seeing increased harassment of some protected categories during the COVID-19 pandemic. In December 2020, we further expanded our hateful conduct policy to include content that dehumanizes on the basis of race, ethnicity, or national origin.
From July to December 2020, we actioned 188,561 accounts compared to 64,610 during the previous reporting period, accounting for a 192% increase in enforcement.
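Percentage changes like these follow directly from the raw account counts. A minimal sketch (the helper name is our own, not from the report) reproduces the figures cited above:

```python
def pct_change(old: int, new: int) -> float:
    # Percentage increase (or decrease, if negative) from old to new.
    return (new - old) / old * 100

# Figures from this reporting period vs. the previous one:
print(round(pct_change(64_610, 188_561)))      # 192
print(round(pct_change(635_415, 1_126_990)))   # 77
```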
Suicide and self-harm are significant social and public health challenges, and we recognize that we have a responsibility to help people access support when they need it. We continue to offer resources to support those experiencing thoughts of self-harm or suicide, like our #ThereIsHelp search prompt. We’ve introduced new initiatives to better detect and take action on content that violates our policy on suicide and self-harm.
Anti-spam challenges increased by approximately 6% compared to the previous reporting period. In addition, we observed an approximate 14% decrease in the number of spam reports from the previous reporting period.
Submissions of Digital Millennium Copyright Act takedown notices decreased by 2%, and we saw a 44% decrease in accounts affected. Tweets withheld and media withheld also dropped by 25% and 43% respectively, as Twitter’s operations were affected by the COVID-19 pandemic and a security incident in July 2020.
We continue to encourage two-factor authentication (2FA) for all accounts, and a number of other best practices to help keep accounts safe and secure. Over time, we hope to see more people on Twitter adopt enhanced security measures for their accounts.
Approximately 2.3% of accounts had at least one 2FA method enabled during the reporting period.
We’re committed to increasing our transparency and improving our accountability to you, the public, and we’ll continue to publish updates to the Twitter Transparency Center on a biannual basis.