
Information operations on Twitter: principles, process, and disclosure

Thursday, 13 June 2019

In October 2018, we published the first comprehensive archive of Tweets and media associated with known state-backed information operations on Twitter. Since its launch, thousands of researchers across the globe have downloaded the datasets, which contain more than 30 million Tweets and over one terabyte of media, and used them to conduct their own investigations and to share their insights and independent analysis with the world. Today, we're adding six additional datasets to the archive, covering coordinated, state-backed activities originating from four jurisdictions. All associated accounts have been removed from Twitter.

We believe that people and organizations with the advantages of institutional power who consciously abuse our service are not advancing healthy discourse but are actively working to undermine it. By making this data open and accessible, we seek to empower researchers, journalists, governments, and members of the public to deepen their understanding of critical issues affecting the integrity of the public conversation online, particularly around elections. This transparency is core to our mission.

Today’s disclosures

Iran (4,779 accounts)
The account sets below all originated in Iran, and we believe all are associated with, or directly backed by, the Iranian government. However, each set exhibited distinct signals and behaviors, so we have broken them down accordingly:

  • Set one (1,666 accounts): We removed more than 1,600 accounts originating in Iran. Cumulatively, these accounts Tweeted nearly 2 million times, sharing global news content, often with an angle that benefited the diplomatic and geostrategic views of the Iranian state. Platform manipulation is a violation of the Twitter Rules.
  • Set two (248 accounts): In addition to the 1,600 accounts listed above, we took action on a second set of more than 200 accounts originating in Iran that engaged more directly in discussions related to Israel.
  • Set three (2,865 accounts): Recently, we took action to remove more than 2,800 accounts originating in Iran. These accounts employed a range of false personas to target conversations about political and social issues in Iran and globally.

Russia (4 accounts)
As part of our ongoing investigations into activity connected with the Russian Internet Research Agency (IRA), we removed four accounts which we believe are associated with the IRA. These removals are the result of increased information sharing between industry peers and law enforcement. For more on our removal of IRA-specific accounts around the 2016 US Presidential Election, see here.

Spain (130 accounts)
Earlier this year, we suspended 130 fake accounts originating in Spain. These accounts were directly associated with the Catalan independence movement, specifically Esquerra Republicana de Catalunya, and were primarily engaged in spreading content about the Catalan Referendum. The network included fake accounts that appear to have been created with the intent to inorganically influence the conversation in politically advantageous ways. Setting up fake accounts is a violation of the Twitter Rules, full stop.

Venezuela (33 accounts)
This is the second time we have identified accounts originating in Venezuela engaging in platform manipulation targeted outside the country, following a previously disclosed, domestically focused set. Today's disclosure comprises 33 additional accounts directly connected with the group of 764 accounts published to the archive in January. While there were initial indications that these accounts were associated with the Russian Internet Research Agency, our further analysis suggests that they were operated by a commercial entity originating in Venezuela. We are sharing further data on this group to update the public on our attribution efforts.

We also want to take this opportunity to comprehensively share our principles and process for future disclosures. Below are answers to some of the most common questions we receive regarding our handling of information operations on Twitter.

What are our guiding principles in this work?
We believe Twitter has a responsibility to protect the integrity of the public conversation — including through the timely disclosure of information about attempts to manipulate Twitter to influence elections and other civic conversations by foreign or domestic state-backed entities. We believe the public and research community are better informed by transparency.

How do we do it?
Our Site Integrity team is dedicated to identifying and investigating suspected platform manipulation on Twitter, including potential state-backed activity. In partnership with teams across the company, we employ a range of open-source and proprietary signals and tools to identify when attempted coordinated manipulation may be taking place, as well as the actors responsible for it. We also partner closely with governments, law enforcement, and our peer companies to improve our understanding of the actors involved in information operations and develop a holistic strategy for addressing them.

How do I access the full archive of content and Tweets?
The complete public archive of content and Tweets is available on our dedicated Election Integrity Hub.
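
For researchers planning to work with the data, each release is distributed as downloadable files of account and Tweet information. Below is a minimal sketch of how one might begin exploring a dataset in Python with pandas; the filename and column names are illustrative assumptions, not the actual schema, which is documented alongside each dataset.

```python
# Minimal sketch for exploring one of the downloaded datasets with pandas.
# The filename and column names below are illustrative assumptions; consult
# the documentation published alongside each dataset for the actual schema.
import pandas as pd

tweets = pd.read_csv("example_operation_tweets.csv", low_memory=False)
print(f"{len(tweets):,} rows; columns: {list(tweets.columns)}")

# Example aggregation: monthly Tweet volume, assuming a 'tweet_time'
# column that can be parsed as a timestamp.
tweets["tweet_time"] = pd.to_datetime(tweets["tweet_time"], errors="coerce")
monthly = (
    tweets.dropna(subset=["tweet_time"])
          .set_index("tweet_time")
          .resample("MS")
          .size()
)
print(monthly.tail())
```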

What is the makeup of the team at Twitter working on these issues?
Teams across the company contribute to our research, analysis, and investigation efforts related to information operations. These teams include data scientists, linguists, policy analysts, political scientists, and technical experts in abuse and anti-spam issues.

What policies do you enforce and how do you mitigate the risk of suppressing legitimate speech from political parties?
Our policies are focused on misleading, deceptive, and spammy behavior, and are specifically intended to differentiate between coordinated manipulative behavior and legitimate speech from individuals and political parties. The policies we enforce most frequently in this context include:

  • Platform manipulation and spam
  • Coordinated activity
  • Fake accounts
  • Attributed activity
  • Distribution of hacked materials
  • Ban evasion

We enforce these policies without regard for the specific entities involved. However, as we discuss below, our decision to disclose datasets related to these activities is shaped by our ability to attribute the activity definitively.

What are your standards for disclosure?
First, as noted above, we only disclose datasets associated with coordinated malicious activity that we are able to reliably associate with state-affiliated actors. For privacy and safety reasons, we do not disclose information about individuals or accounts not affiliated with a state actor.

Second, given the challenges of attribution, we require clear, verifiable associations between accounts we identify and state-affiliated actors. While we enforce our rules proactively and at scale, disclosure of datasets requires additional evidence of coordinated, state-backed activity.

Finally, we rigorously quality-check the resulting datasets in an attempt to eliminate potential false positives caused by account compromises or analytical error. Mistakes can happen, and we do our best to avoid them at every stage of our investigations. This process takes time.

How does Twitter decide when to disclose datasets?
Timing varies. If and when we identify malicious activity on Twitter, our first priority is to enforce our rules and remove accounts engaged in attempts to manipulate the public conversation. Following these enforcements, we carry out thorough investigations of the accounts and individuals involved. This analysis can take anywhere from several days to many months — and in some instances, subsequent enforcement actions may allow us to retrospectively attribute activity we enforced against in the past. We only disclose datasets once we have determined attribution, and once all applicable investigations have concluded. We also proactively notify law enforcement, our peers, and other relevant state agencies.

Why the focus only on state-affiliated actors?
When we have significant evidence to indicate that state-affiliated entities are knowingly trying to manipulate and distort the public conversation, we believe it should be disclosed as a matter of public interest. People and organizations with the advantages of institutional power who consciously abuse our service are not advancing healthy discourse; they are actively working to undermine it. This is a violation of our company principles, policies, and overarching mission to serve the public discourse.

You’ve said that you challenge millions of accounts per week for engaging in platform manipulation. Why not disclose information about those?
We do. Twice a year, we share information about our actions to detect and prevent platform manipulation and spam in the Twitter Transparency Report. Our goal in disclosing additional datasets related to malicious state-backed activity on Twitter specifically is to enable research that improves the public understanding of information operations. While other forms of platform manipulation and spam may at times be involved in these operations, we do not disclose specific, Tweet-level data about these activities unless they are directly associated by clear, technical indicators to a specific, attributed campaign by a state-actor. We do, however, have a public API that allows researchers to investigate a small subset of Tweets to further public awareness of the conversation on Twitter.
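
As an illustration of how researchers typically sample that subset, here is a minimal sketch using the third-party tweepy library (3.x) against the standard search API. The credentials and query are placeholders, and this is one common approach rather than an endorsed method.

```python
# Minimal sketch of sampling public Tweets through the standard search API,
# using the third-party tweepy library (3.x). Credentials and the query are
# placeholders. The standard search index only reaches back about 7 days,
# so this is a narrow window onto the conversation, not a complete record.
import tweepy

auth = tweepy.OAuthHandler("CONSUMER_KEY", "CONSUMER_SECRET")
auth.set_access_token("ACCESS_TOKEN", "ACCESS_TOKEN_SECRET")
api = tweepy.API(auth, wait_on_rate_limit=True)

for status in tweepy.Cursor(api.search, q="example query", lang="en").items(100):
    print(status.created_at, status.user.screen_name, status.text[:80])
```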

What about research I see on Twitter and ‘bots’?
Research that has not been peer reviewed and relies on our public API can be deeply flawed. We see a great deal of commercially driven, non-peer-reviewed research that makes sweeping assessments of account behavior using only public signals, such as location (if cited), account content, how often an account Tweets, and the accounts it follows. To be clear: none of these indicators is sufficient to determine attribution to a state entity definitively. Looking for accounts that resemble those disclosed in our archives is an equally flawed approach, given that many bad actors mimic legitimate accounts to appear credible. This approach also often wrongly captures legitimate voices who simply share a political viewpoint one disagrees with.

We work with thousands of signals and behaviors to inform our analysis and investigations. Furthermore, none of our preemptive work to challenge accounts for platform manipulation (up to 8-10 million accounts per week) is visible in the small sample available through our public API. Researchers should consider these limitations and the relevant ethical norms before engaging in this type of research and making such claims. To do otherwise does not further public knowledge; rather, it risks deeply undermining trust in public debate and conversation.
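
To make the limitation concrete, the sketch below implements a deliberately naive "bot score" of the kind such research often relies on. Every signal and threshold here is invented for illustration; the point is that a prolific, entirely legitimate human account can trip every rule.

```python
# A deliberately naive "bot score" of the kind non-peer-reviewed research
# often relies on. All signals and thresholds here are invented for
# illustration; none of them is evidence of automation or state backing.
from dataclasses import dataclass

@dataclass
class PublicProfile:
    tweets_per_day: float
    followers: int
    following: int
    has_default_avatar: bool

def naive_bot_score(profile: PublicProfile) -> int:
    """Count crude public signals. High scores do NOT imply automation,
    let alone state backing: journalists, brands, and enthusiastic new
    users trip these rules, while sophisticated operators avoid them."""
    score = 0
    if profile.tweets_per_day > 50:  # heavy posting: also true of news desks
        score += 1
    if profile.following > 5 * max(profile.followers, 1):  # ratio: also true of new accounts
        score += 1
    if profile.has_default_avatar:  # cosmetic signal: trivially changed either way
        score += 1
    return score

# A prolific, entirely legitimate human account can max out the score.
print(naive_bot_score(PublicProfile(80, 300, 2000, True)))  # prints 3
```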

For more on our archive of information operations, please visit our dedicated site and follow the conversation at @Policy and @TwitterSafety.
