
Our approach to bots and misinformation

Wednesday, 14 June 2017

Twitter’s job is to keep people informed about what’s happening in the world. As such, we care deeply about the issue of misinformation and its potentially harmful effect on the civic and political discourse that is core to our mission.

Fake news has become a catchphrase used to describe everything from manufactured news stories, incorrect information, and state-supported propaganda to news some people don't like or points of view with which they disagree.

Twitter’s open and real-time nature is a powerful antidote to the spread of all types of false information. This is important because we cannot determine whether every single Tweet from every person is truthful. We, as a company, should not be the arbiter of truth. Journalists, experts, and engaged citizens Tweet side by side, correcting and challenging public discourse in seconds. These vital interactions happen on Twitter every day, and we’re working to ensure we are surfacing the highest-quality and most relevant content and context first.

Bots

While bots can be a positive and vital tool, from customer support to public safety, we strictly prohibit the use of bots and other networks of manipulation to undermine the core functionality of our service. We’ve been doubling down on our efforts here, expanding our team and resources, and building new tools and processes. We’ll continue to iterate, learn, and make improvements on a rolling basis to ensure our tech is effective in the face of new challenges.

We’re working hard to detect spammy behaviors at their source, such as the mass distribution of Tweets or attempts to manipulate trending topics. We also reduce the visibility of potentially spammy Tweets or accounts while we investigate whether a policy violation has occurred. When we do detect duplicative or suspicious activity, we suspend accounts. We also frequently take action against applications that abuse the public API to automate activity on Twitter, stopping potentially manipulative bots at the source.
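To make the idea of mass-distributed, duplicative Tweets concrete, here is a minimal illustrative sketch of one way such activity could be flagged. This is not Twitter’s actual detection system; the time window, thresholds, and function names are invented for the example.

```python
# Illustrative sketch only -- NOT Twitter's internal detection system.
# A naive heuristic: flag groups of accounts posting near-identical text
# within a short time window, a crude proxy for "mass distribution of
# Tweets". WINDOW and MIN_ACCOUNTS are assumed values for the example.
from collections import defaultdict
from datetime import timedelta

WINDOW = timedelta(minutes=10)   # assumed time window
MIN_ACCOUNTS = 20                # assumed minimum distinct accounts to flag

def normalise(text: str) -> str:
    """Collapse case and whitespace so trivially edited copies match."""
    return " ".join(text.lower().split())

def flag_duplicate_campaigns(tweets):
    """tweets: iterable of (account_id, text, posted_at) tuples.

    Returns normalised texts posted by at least MIN_ACCOUNTS distinct
    accounts within WINDOW of each other.
    """
    by_text = defaultdict(list)  # normalised text -> [(posted_at, account_id)]
    for account_id, text, posted_at in tweets:
        by_text[normalise(text)].append((posted_at, account_id))

    flagged = []
    for text, posts in by_text.items():
        posts.sort()
        start = 0
        # Slide a window over the posts and count distinct accounts inside it.
        for end in range(len(posts)):
            while posts[end][0] - posts[start][0] > WINDOW:
                start += 1
            accounts = {acc for _, acc in posts[start:end + 1]}
            if len(accounts) >= MIN_ACCOUNTS:
                flagged.append(text)
                break
    return flagged
```

A real system would combine many behavioral signals, such as posting cadence, account age, client application, and network structure, rather than relying on text matching alone; this sketch only illustrates the general shape of the problem.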

It’s worth noting that in order to respond to this challenge efficiently and to ensure people cannot circumvent these safeguards, we’re unable to share the details of these internal signals in our public API. While this means research conducted by third parties about the impact of bots on Twitter is often inaccurate and methodologically flawed, we must protect the future effectiveness of our work.

We’re committed to doing our part and we’ll continue to share updates along the way.
