Every day, people come to Twitter to see what’s happening. One of the most important parts of our focus on improving the health of conversations on Twitter is ensuring people have access to credible, relevant, and high-quality information. To help move toward this goal, we’ve introduced new measures to fight abuse and trolls and new policies on hateful conduct and violent extremism, and we’re bringing in new technology and staff to fight spam and abuse.
But we know there’s still a lot of work to be done. Inauthentic accounts, spam, and malicious automation disrupt everyone’s experience on Twitter, and we will never be done with our efforts to identify and prevent attempts to manipulate conversations on our platform.
We’re excited to share some recent progress and new measures for how we handle spam, malicious automation, and platform manipulation.
New processes for fighting malicious automation and spam
Twitter fights spam and malicious automation strategically and at scale. Our focus is increasingly on proactively identifying problematic accounts and behavior rather than waiting until we receive a report. We focus on developing machine learning tools that identify and take action on networks of spammy or automated accounts automatically. This lets us tackle attempts to manipulate conversations on Twitter at scale, across languages and time zones, without relying on reactive reports.
Our investments in this space are having a positive impact: our tools are working, and we’re preventing or catching more of this activity ourselves before you ever see it on Twitter.
Platform manipulation and spam are challenges we continue to face and which continue to evolve, and we’re striving to be more transparent with you about our work. Today, we’re sharing four new steps we’re taking to address these issues:
1) Reducing the visibility of suspicious accounts in Tweet and account metrics
A common form of spammy and automated behavior is following accounts in a coordinated, bulk fashion. Accounts engaged in these activities are often caught by our automated detection tools (and removed from our active user metrics) shortly after the behavior begins. But we haven’t done enough in the past to make the impact of our detections and actions clear. That’s why we’ve started updating account metrics in near-real time: for example, the number of followers an account has, or the number of likes or Retweets a Tweet receives, will be correctly updated when we take action on an account.
So, if we put an account into a read-only state (where the account can’t engage with others or Tweet) because our systems have detected it behaving suspiciously, we now remove it from follower figures and engagement counts until it passes a challenge, like confirming a phone number. We also display a warning on read-only accounts and prevent new accounts from following them to help prevent inadvertent exposure to potentially malicious content. If the account passes the challenge, its footprint will be restored (though it may take a few hours). We are working to make these protections more transparent to anyone who may try to interact with an account in this read-only state.
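The visibility logic described above can be sketched roughly as follows. This is a hypothetical illustration only; the account states, function names, and data shapes are assumptions for the sketch, not Twitter's actual implementation.

```python
# Hypothetical sketch: follower counts exclude accounts currently in a
# challenged, read-only state. Their follows are hidden, not deleted,
# so the count recovers once the challenge is passed.

CHALLENGED = "challenged"  # read-only until e.g. a phone number is confirmed
ACTIVE = "active"

def visible_follower_count(followers, account_state):
    """Count only followers that are not currently challenged."""
    return sum(
        1 for f in followers
        if account_state.get(f, ACTIVE) != CHALLENGED
    )

followers = ["alice", "bob", "carol"]
state = {"bob": CHALLENGED}
print(visible_follower_count(followers, state))  # 2

# If "bob" confirms a phone number and passes the challenge,
# the hidden follow is restored to the count:
state["bob"] = ACTIVE
print(visible_follower_count(followers, state))  # 3
```

Because the hidden follows are retained rather than removed, restoring an account's footprint after a passed challenge is just a state change, which is consistent with the post noting that restoration "may take a few hours" as counts propagate.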
As a result of these improvements, some people may notice their own account metrics change more regularly. But we think this is an important shift in how we display Tweet and account information to ensure that malicious actors aren’t able to artificially boost an account’s credibility permanently by inflating metrics like the number of followers.
Stay tuned: In the coming weeks, we will have more to share about additional steps we’re taking to reduce the impact of this sort of activity on Twitter.
2) Improving our sign-up process
To make it harder to register spam accounts, we’re also going to require new accounts to confirm either an email address or phone number when they sign up for Twitter. This is an important change to defend against people who try to take advantage of our openness. We will be working closely with our Trust and Safety Council and other expert NGOs to ensure this change does not hurt people in high-risk environments where anonymity is important. Look for this to roll out later this year.
3) Auditing existing accounts for signs of automated sign-up
We’re also conducting an audit to secure a number of legacy systems used to create accounts. Our goal is to ensure that every account created on Twitter has passed some simple, automatic security checks designed to prevent automated signups. The new protections we’ve developed as a result of this audit have already helped us prevent more than 50,000 spammy sign-ups per day.
As part of this audit, we will soon take action to challenge a large number of suspected spam accounts that we caught as part of an investigation into misuse of an old part of the sign-up flow. These accounts are primarily follow spammers, which in many cases appear to have automatically or bulk-followed verified or other high-profile accounts suggested to new accounts during our sign-up flow. As a result of this action, some people may see their follower counts drop: when we challenge an account, follows originating from that account are hidden until the account owner passes the challenge. Accounts that appear to lose followers did not do anything wrong; they were the targets of the spam we are now cleaning up. We’ve recently been taking more steps to clean up spam and automated activity, close the loopholes spammers have exploited, and be more transparent about these kinds of actions.
4) Expansion of our malicious behavior detection systems
We’re also now automating some processes where we see suspicious account activity, like exceptionally high-volume Tweeting with the same hashtag, or repeatedly mentioning the same @username without a reply from the account being mentioned. These tests vary in intensity: at a simple level, the account owner may be asked to complete a reCAPTCHA or a password reset, while more complex cases are automatically passed to our team for review.
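The two signals mentioned above could be approximated with a simple heuristic like the sketch below. The thresholds, signal names, and input format are illustrative assumptions, not Twitter's real detection logic, which operates at far larger scale.

```python
from collections import Counter

# Illustrative thresholds (assumptions, not Twitter's actual values):
HASHTAG_LIMIT = 100  # same hashtag used in this many tweets in a window
MENTION_LIMIT = 50   # same @username mentioned this often with no reply

def flag_suspicious(tweets):
    """Scan a window of one account's tweets for the two signals.

    Each tweet is given as (hashtags, mentions, got_reply), where
    got_reply indicates the mentioned account replied. Returns a list
    of (signal_name, offending_value) pairs.
    """
    hashtag_counts = Counter()
    unanswered_mentions = Counter()
    for hashtags, mentions, got_reply in tweets:
        hashtag_counts.update(hashtags)
        if not got_reply:
            unanswered_mentions.update(mentions)

    signals = []
    for tag, n in hashtag_counts.items():
        if n >= HASHTAG_LIMIT:
            signals.append(("high_volume_hashtag", tag))
    for user, n in unanswered_mentions.items():
        if n >= MENTION_LIMIT:
            signals.append(("repeated_unanswered_mention", user))
    return signals
```

In a pipeline like the one the post describes, an account tripping one of these signals would be routed to a proportionate test, such as a reCAPTCHA or password reset, with harder cases escalated to human review.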
What you can do
There are important steps you can take to protect your security on Twitter, such as enabling login verification (two-factor authentication) and using a strong, unique password.
Additionally, if you believe you may have been incorrectly actioned by one of our automated spam detection systems, you can use our appeals process to request a review of your case.
Going forward, Twitter is continuing to invest across the board in our approach to these issues, including leveraging machine learning technology and partnerships with third parties. We also look forward to soon announcing the results of our request for proposals for public health metrics research.
These issues are felt around the world, from elections to emergency events and high-profile public conversations. As we have stated in recent announcements, the health of the public conversation on Twitter is a critical metric by which we will measure our success in these areas.