Company

Addressing the abuse of tech to spread terrorist and extremist content

By Twitter Public Policy
Wednesday, 15 May 2019

In addition to our commitment to the Christchurch Call, Twitter, Facebook, Microsoft, Google, and Amazon commit to the following:

Five Individual Actions:

1. Terms of Use. We commit to updating our terms of use, community standards, codes of conduct, and acceptable use policies to expressly prohibit the distribution of terrorist and violent extremist content. We believe this is important to establish baseline expectations for users and to articulate a clear basis for removal of this content from our platforms and services and suspension or closure of accounts distributing such content. 

2. User Reporting of Terrorist and Violent Extremist Content. We commit to establishing one or more methods within our online platforms and services for users to report or flag inappropriate content, including terrorist and violent extremist content. We will ensure that the reporting mechanisms are clear, conspicuous, and easy to use, and provide enough categorical granularity to allow the company to prioritize and act promptly upon notification of terrorist or violent extremist content.

3. Enhancing Technology. We commit to continuing to invest in technology that improves our capability to detect and remove terrorist and violent extremist content online, including the extension or development of digital fingerprinting and AI based technology solutions.

4. Livestreaming. We commit to identifying appropriate checks on livestreaming, aimed at reducing the risk of disseminating terrorist and violent extremist content online. These may include enhanced vetting measures (such as streamer ratings or scores, account activity, or validation processes) and moderation of certain livestreaming events where appropriate. Checks on livestreaming necessarily will be tailored to the context of specific livestreaming services, including the type of audience, the nature or character of the livestreaming service, and the likelihood of exploitation. 

5. Transparency Reports. We commit to publishing on a regular basis transparency reports regarding detection and removal of terrorist or violent extremist content on our online platforms and services and ensuring that the data is supported by a reasonable and explainable methodology.

Four Collaborative Actions:

1. Share Technology Development. We commit to working collaboratively across industry, governments, educational institutions, and NGOs to develop a shared understanding of the contexts in which terrorist and violent extremist content is published and to improve technology to detect and remove terrorist and violent extremist content more effectively and efficiently. This will include:

  • Work to create robust shared data sets to accelerate machine learning and AI and sharing insights and learnings from the data.
  • Development of open source or other shared tools to detect and remove terrorist or violent extremist content.
  • Enablement of all companies, large and small, to contribute to the collective effort and to better address detection and removal of this content on their platforms and services.

2. Crisis Protocols. We commit to working collaboratively across industry, governments, and NGOs to create a protocol for responding to emerging or active events, on an urgent basis, so relevant information can be quickly and efficiently shared, processed, and acted upon by all stakeholders with minimal delay. This includes the establishment of incident management teams that coordinate actions and broadly distribute information that is in the public interest.

3. Education. We commit to working collaboratively across industry, governments, educational institutions, and NGOs to help understand and educate the public about terrorist and violent extremist content online. This includes educating and reminding users about how to report this content, and how to avoid contributing to its spread online.

4. Combatting Hate and Bigotry. We commit to working collaboratively across industry to attack the root causes of extremism and hate online. This includes providing greater support for relevant research — with an emphasis on the impact of online hate on offline discrimination and violence — and supporting capacity and capability of NGOs working to challenge hate and promote pluralism and respect online. 
