This year marks a decade of transparency reporting for Twitter, and today we are publishing our 20th Transparency Report.
Why does this matter? Over the last 10 years, the ways governments attempt to control free expression, remove content, and reveal the identity of account owners on Twitter have evolved significantly. Meaningful transparency helps people understand the rules of online services and hold governments accountable for their actions, and in turn, helps keep us accountable for principled content moderation and responsiveness to government demands. Transparency is a key guiding principle in our mission to serve the public conversation, protect the Open Internet, and advance the internet as a global force for good. We have fought, and will continue to fight, for the people who use Twitter to raise their voices.
Highlights from our latest report
We continue to see a concerning trend of attempts to limit global press freedom: government legal demands targeting journalists have increased, as has the overall number of legal demands on accounts – both represent record highs since our reporting began.
See the full report here.
Governments’ role in transparency reporting
This update comes at a time when government requests for account information and content removal continually hit new records, including demands to reveal the identity of anonymous account owners. This is why we continue to advocate for greater transparency from governments themselves about how these powers are used. People who use our service should know that we take a principled approach to how we handle government requests and legal demands and how we share information about people who use our service.
Globally during the latest reporting period:
From the US during the last reporting period:
Of those 29 requests, we filed lawsuits to fight back in two instances and succeeded in convincing courts to apply First Amendment protections in one case. The other case remains pending.
As in past reports, Twitter is not reporting on any other requests for information deemed to be related to national security processes because of limitations imposed on us by the U.S. government. We have been fighting for more transparency around this process for years, and continue to fight this issue in our court case, Twitter v. Garland, and are currently awaiting a decision on appeal.
We strongly believe in providing data and information that is straightforward and provides insights into the types of requests we receive from governments and others around the world. To this end, we also upload legal requests directly into the Lumen Database, a project by the Berkman Klein Center for Internet & Society at Harvard University. We encourage the public to explore this site to get a sense of the day-to-day queries we receive from governments and other entities around the world.
Investments in technology
In 2012, we were the first social media company to publish a transparency report, and since then it has become an industry standard. Over the past decade, we have made significant investments in our reporting and expanded it to include more data, insights, and metrics. Since we first reported data behind our enforcement of the Twitter Rules in 2018, we’ve significantly evolved how we detect and take down content that violates our rules. The biggest impact we’ve seen in this work comes from using technology to take down content proactively and quickly, often without that content ever needing to be reported by people on Twitter.
We’re iterating on the way we measure our effectiveness and have worked to move beyond the traditional binary “leave up” or “take down” approach to content moderation. For this reporting period, we required the removal of more than four million Tweets that violated the Twitter Rules. We also deploy less aggressive enforcement actions, such as labeling Tweets to add important context when the information shared doesn’t warrant deletion, and have improved how we prevent certain questionable content from accounts you don’t follow from appearing in replies, search, or on your Home Timeline. In future transparency reporting, we aim to share details on these measures in more specific ways, eventually making this data core to our reporting.
One way we measure the efficacy of our investments in technology is by sharing how many impressions violative Tweets received before they were removed. For this reporting period, impressions on these violative Tweets accounted for less than 0.1% of total impressions on all Tweets. Of the Tweets removed, 71% received fewer than 100 impressions prior to removal, and an additional 21% received between 100 and 1,000 impressions. These numbers have remained consistent since we first began reporting this data in 2020, even as the volume of rule-violating content we remove has generally trended upward, indicating that our proactive detection efforts are keeping pace with changing behavior. We continue to invest heavily in improving both the speed and comprehensiveness of our detections.
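The bucket percentages above can be combined into a single headline figure. A minimal sketch, using only the shares stated in the report (the variable names are our own, for illustration):

```python
# Shares of removed Tweets by impression count before removal,
# as stated in the report.
under_100 = 0.71          # fewer than 100 impressions
between_100_1000 = 0.21   # 100 to 1,000 impressions

# Combined: share of removed Tweets seen fewer than 1,000 times.
under_1000 = under_100 + between_100_1000
print(f"{under_1000:.0%} of removed Tweets had under 1,000 impressions")
# → 92% of removed Tweets had under 1,000 impressions
```

In other words, roughly nine in ten rule-violating Tweets were removed before reaching even a thousand views.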
Getting better at tackling platform manipulation and spam
As a result of continued investment in our approach and ongoing efforts to disrupt spam attacks on our service, our teams issued 2% more global anti-spam challenges this reporting period than in the last. When we detect suspicious levels of activity, accounts may be locked and prompted to provide additional information (e.g., a phone number) or to solve a reCAPTCHA. Further, during the second half of 2021, we received 6% more spam reports from people using Twitter compared to the previous reporting period.
During the past 10 years, we’ve made significant investments — and seen significant progress — in how we detect and take action against spam and platform manipulation, and in how we give people on Twitter more context in their experience. One such example is automated accounts. Automated accounts can be a source of useful, entertaining, and relevant information on Twitter. We launched Automated Account Labels in September 2021 to make it easier for people to identify these “good bots”. As of February 2022, all automated accounts globally have the option to self-identify.
To learn more about this work, see the Tweet thread from Twitter CEO Parag Agrawal.
Transparency with our data
From Twitter’s beginnings in 2006, our uniquely open and public APIs (application programming interfaces) – which, at a high level, are the way computer programs “talk” to each other to request and deliver information – have given researchers and developers the opportunity to tap into what is happening in the world. Twitter is the only major service to make public conversation data available via an API for research purposes.
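To make the idea concrete, here is a minimal sketch of how a researcher might assemble a request to the Twitter API v2 recent-search endpoint. The bearer token is a placeholder, the query string is invented for illustration, and no network call is made here; an HTTP GET to the resulting URL with the given headers would return matching Tweets as JSON.

```python
from urllib.parse import urlencode

# Placeholder credential: a real bearer token comes from a Twitter
# developer account.
BEARER_TOKEN = "YOUR_BEARER_TOKEN"
BASE_URL = "https://api.twitter.com/2/tweets/search/recent"

# Illustrative query: English Tweets with a hashtag, excluding retweets.
params = {
    "query": "#OpenInternet -is:retweet lang:en",
    "max_results": 100,
    "tweet.fields": "created_at,public_metrics",
}
headers = {"Authorization": f"Bearer {BEARER_TOKEN}"}

# Build the final request URL; sending the GET is left to the reader.
request_url = f"{BASE_URL}?{urlencode(params)}"
print(request_url)
```

The same endpoint family underpins the academic research product mentioned below, which extends access beyond the recent (roughly seven-day) window.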
Open access to public data is critical for advancing research objectives on a wide range of topics in a safe way that ensures public privacy is protected. It raises general awareness and understanding of the scale and nature of the challenges impacting public conversation online, and it also helps to keep services like Twitter accountable for our own response to these challenges.
Additionally, throughout the pandemic, we launched a COVID-19 endpoint to empower public health research, and a new academic platform to encourage cutting-edge research using Twitter data. This is one of the reasons you hear more about reports featuring Twitter as core to the research methodology — we intentionally empower this.
Transparency is key to building and sustaining trust, improving accountability, and preserving a free and secure Open Internet. People should understand the rules of online services and the way that governmental legal powers are used. Without transparency, there can be no accountability.
For our part, Twitter aims to continually evaluate and improve the way we share information with the public. This year, we are launching the Twitter Moderation Research Consortium (TMRC). Through the Consortium, Twitter shares large-scale datasets on platform moderation issues with a global group of public interest researchers from across academia, civil society, NGOs, and journalism who study platform governance. The program will initially focus on sharing data about accounts and networks Twitter has removed in connection with platform manipulation and state-backed information operations, enabling credible, reputable academics and researchers to find insights in, and contextualize, the data.
While the way that we have reported this information has continually developed and improved over the past 10 years, our commitment to protecting the people who use Twitter remains unchanged. That includes protecting activists and journalists, accounts that wish to remain anonymous, and those who speak up against their own governments.
To view the full report, click here.