Around the world, people use Twitter to find reliable information in real time. During periods of crisis – such as situations of armed conflict, public health emergencies, and large-scale natural disasters – access to credible, authoritative information and resources is all the more critical.
Today, we’re introducing our crisis misinformation policy – a global policy that will guide our efforts to elevate credible, authoritative information, and will help ensure viral misinformation isn’t amplified or recommended by us during crises. In times of crisis, misleading information can undermine public trust and cause further harm to already vulnerable communities. Alongside our existing work to make reliable information more accessible during crisis events, this new approach will help us slow the spread of the most visible, misleading content, particularly content that could lead to severe harms.
Developing the policy
Teams at Twitter have worked to develop a crisis misinformation framework since last year, drawing on key input from global experts and human rights organizations. For the purposes of this policy, we define crises as situations in which there is a widespread threat to life, physical safety, health, or basic subsistence. This definition is consistent with the United Nations’ definition of a humanitarian crisis and other humanitarian assessments.
Down the line, as we expand our approach, we will enforce the policy during other emergent global crises, informed by the United Nations Inter-Agency Standing Committee (IASC)’s emergency response framework and other global humanitarian frameworks.
Addressing the most severe harms
During moments of crisis, establishing whether something is true or false can be exceptionally challenging. To determine whether claims are misleading, we require verification from multiple credible, publicly available sources, including evidence from conflict monitoring groups, humanitarian organizations, open-source investigators, journalists, and more.
Conversation moves quickly during periods of crisis, and content from accounts with wide reach is most likely to rack up views and engagement. To reduce potential harm, as soon as we have evidence that a claim may be misleading, we won’t amplify or recommend content that is covered by this policy across Twitter – including in the Home timeline, Search, and Explore. In addition, we will prioritize adding warning notices to highly visible Tweets and Tweets from high-profile accounts, such as state-affiliated media accounts and verified, official government accounts.
Some examples of Tweets that we may add a warning notice to include:
Strong commentary, efforts to debunk or fact-check, and personal anecdotes or first-person accounts do not fall within the scope of the policy.
What you’ll see on Twitter
Tweets with content that violates the crisis misinformation policy will be placed behind a warning notice that looks like this:
People on Twitter will be required to click through the warning notice to view the Tweet, and the content won’t be amplified or recommended across the service. In addition, Likes, Retweets, and Shares will be disabled, and the notice will link to more information about our approach to crisis misinformation.
Content moderation is more than just leaving up or taking down content, and we’ve expanded the range of actions we may take to ensure they’re proportionate to the severity of the potential harm. We’ve found that not amplifying or recommending certain content, adding context through labels, and, in severe cases, disabling engagement with the Tweets are effective ways to mitigate harm, while still preserving speech and records of critical global events.
While this first iteration is focused on international armed conflict, starting with the war in Ukraine, we plan to update and expand the policy to include additional forms of crisis. The policy will supplement our existing work deployed during other global crises, such as in Afghanistan, Ethiopia, and India.
You can read more about our crisis misinformation policy in the Help Center.