Manipulation of the public conversation is a shared threat we all face, and must all address together.
It is a human activity that, as we all know, has been around for quite some time, and it isn't showing any signs of going away. It pre-dates the Internet and modern communications technology, of course, but it has adapted and changed as online spaces have emerged to become contested territories of geo-political competition.
It would be naive to think that events and conversations in the future will not be targeted by bad actors - in addition to elections, conversations around Covid-19, climate action, identity and civil rights, to name a few, are all vulnerable to similar techniques. We have to respond. The first step towards doing so is to understand the problem in detail — a goal that Twitter and the think tank Demos believe can best be advanced through transparency and open access to data.
Platforms like Twitter have taken a number of important steps to confront this problem - for example, establishing a dedicated site integrity team and investing continuously in technology to detect, understand and neutralise these campaigns as quickly and robustly as possible - but technology companies can't do it alone. Academics, journalists and civil society groups must all be strong - and genuinely independent - voices in the debates about information operations and how to respond to them. And to achieve this, one thing - above all - is needed: data.
Data is the bedrock of a strong civic societal response to information operations. It is vital in allowing researchers to detect information operations, understand what they are targeting, what effects they are having, and which interests and organisations are conducting them, and to measure the effectiveness of responses to them. It is the way of making the information operations themselves, and the responses to them, transparent and inclusive.
Transparency is indeed foundational to the kind of Internet that we all want to see - empowering consumers, building trust and strengthening democracies. And it is worth noting here that Twitter is the only major service to make public conversation data available via an API for the purposes of study. Making this type of data available to researchers has resulted in a number of important benefits.
First, publicly available data can advance research objectives on a wide range of topics in a way that is safe and consistent with the public's basic expectation of privacy. This has been the case particularly during Covid-19, where research teams have used public Twitter data to map and examine aggregate increases in reported symptoms or anxiety levels.
Second, it raises general awareness and increases wider understanding of the scale and nature of the challenges affecting the integrity of public conversation online. This is why, in 2018, Twitter committed to publicly disclose any state-backed information operations reliably identified on the service, and to make the full datasets of those operations available for investigation and analysis. Since that first release over two years ago, Twitter has disclosed over 35 separate state-backed information operations designed to nefariously shape and manipulate public opinion online. Independent analysis of this activity by researchers is a key step toward promoting shared understanding of these threats and helping to develop a holistic strategy for addressing them.
And third, making this data available keeps platforms like Twitter accountable for their own response to these challenges. The nature of conversations taking place on Twitter is well-documented and, critically, members of the public, governments and researchers can bring their expertise to bear to develop solutions to a range of online harms. However, as Twitter's CEO, Jack Dorsey, has said, there is much more to do when it comes to transparency; the team within Twitter that works with researchers is part of that, constantly looking for opportunities to provide new data while balancing privacy considerations.
The reality, however, is that there are very different standards for transparency across the industry. One challenge is that so much research into online harms is built on Twitter data, because Twitter is one of the few companies to offer it. Another is the amplification of poor-quality, non-peer-reviewed and misleading "research" by some pockets of the media and, on occasion, elected officials to suit a predetermined narrative. More broadly, we continue to encourage the peer review of research and data before publication. Failing to engage in these practices, more often than not, results in public scare-mongering.
Independent analysis of these activities by researchers is a key step toward promoting shared understanding of these threats, and this level of transparency can enhance the health of public conversation online and protect the Open Internet for all. It is our shared responsibility - in academia, industry, policymaking and research - to consider how greater data transparency and support for civil society can sit at the centre of our response to online harms. That need, much like information operations themselves, isn't going anywhere either.
On Thursday 26th November, Twitter and Demos hosted a panel discussion on Data, Research and Information Operations. Moderated by Katy Minshall, Head of UK Public Policy for Twitter, the session heard from an expert panel of Carl Miller from Demos, Nahema Marchal from the Oxford Internet Institute and Alex Martin from Sky News.