Four truths about bots

Tuesday, 21 September 2021

How do you tell if someone on Twitter is a person or a robot?

A recent internal survey of people who use Twitter daily, conducted across four countries, revealed that their greatest concern when using the platform is that “there are too many bots or fake accounts.”

So Common Thread sat down with Twitter’s resident bot expert to understand the problem and clear up misconceptions about bots on Twitter. Yoel Roth is the Head of Site Integrity for Twitter, overseeing the multiple teams that write and enforce the rules around spam, bots, misinformation, disinformation, and more. One of the teams, the platform integrity team, is tasked with sorting humans from robots (although they can’t help you if you’re a robot and you don’t know it). 

Not all bots are bad. Some, like @tinycarebot or @queerlitbot, are delightful. Twitter just launched a label for "good bots" as part of an effort to provide more context for accounts you may interact with. (Twitter also rolled out labels for political candidates in late 2019.) But the bots that the platform integrity team deals with are generally fake accounts deliberately created to distort information or manipulate people on Twitter. The first way the integrity team sniffs out bots is through machine learning. They train algorithms to recognize common patterns of malicious activity; these automatically challenge between 5 and 10 million accounts a week.
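
Twitter hasn’t said which signals its models actually weigh, but to make “recognizing common patterns” concrete, here is a minimal, hypothetical scoring sketch in Python. Every feature, threshold, and weight below is invented for illustration; none of it is Twitter’s real system.

    # Toy sketch of pattern-based account challenging (illustrative only).
    # The features and thresholds are hypothetical; Twitter has not
    # published the signals its production models use.
    from dataclasses import dataclass

    @dataclass
    class AccountActivity:
        tweets_per_hour: float        # posting rate over the last 24 hours
        identical_tweet_ratio: float  # share of recent Tweets that are duplicates
        follow_churn: int             # follows plus unfollows in the last hour
        account_age_days: int

    def spam_score(a: AccountActivity) -> float:
        """Combine a few invented signals into a 0..1 score."""
        score = 0.0
        if a.tweets_per_hour > 60:         # sustained machine-like posting rate
            score += 0.4
        if a.identical_tweet_ratio > 0.8:  # bulk duplicate content
            score += 0.3
        if a.follow_churn > 100:           # aggressive follow/unfollow cycling
            score += 0.2
        if a.account_age_days < 1:         # brand-new account
            score += 0.1
        return min(score, 1.0)

    def should_challenge(a: AccountActivity, threshold: float = 0.7) -> bool:
        # Challenge (for example, with a CAPTCHA) rather than suspend outright,
        # so a real person who happens to trip the pattern can pass and move on.
        return spam_score(a) >= threshold

    burst = AccountActivity(tweets_per_hour=200, identical_tweet_ratio=0.95,
                            follow_churn=300, account_age_days=0)
    print(should_challenge(burst))  # True

The reason a system like this challenges rather than bans is the theme of this whole article: patterns that look automated sometimes belong to real people.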

The second, and perhaps more important, line of defense is a forensic team of investigators. These are real people whose job it is to figure out the more complicated questions of whether an account is automated, a real person being paid to behave like a bot, or a real person who just resembles a bot. Sometimes the team works with outside experts like investigative journalists to crack sophisticated campaigns. Even after identifying bots, the task of taking down fake accounts can resemble a game of whack-a-mole.

But Roth — who has worked on the safety team since 2014, forever in Twitter years — keeps the big picture in mind. He says what matters isn’t the number of bots (around 5%, a figure Twitter reports quarterly) but the impact they have on the conversation. Roth’s team measures influence through impressions, or the number of people who see a Tweet. More on how they do that below, plus four truths you should know about bots.
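
A worked example makes the distinction between account counts and influence concrete. The figures below are invented for illustration, not Twitter’s:

    # Invented numbers: even a sizable population of flagged accounts can
    # produce a small slice of what people actually see, which is why the
    # team measures impact in impressions rather than in account counts.
    total_impressions = 1_000_000_000  # all Tweet views in some window
    bot_impressions = 2_500_000        # views of Tweets from flagged accounts

    print(f"Share of impressions: {bot_impressions / total_impressions:.2%}")  # 0.25%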

First truth: Don’t assume an account with a peculiar name must be a bot.

One of the first ways we all learned to identify bots is by the telltale string of jumbled letters and numbers in an account’s handle. This is a commonly accepted shorthand for spotting a fake account, presumably because automatically generating multiple accounts is easier with auto-assigned handles.

“I’ll raise my hand on behalf of Twitter and say, this one’s on us. We're assigning people handles that people believe are bots, but those are actually real people on the other end of the interactions,” said Roth. 

Most of the time, Twitter’s name-generating algorithm creates handles with first names and a string of numbers. Roth says that having auto-handles is a frictionless way to get people onto Twitter right away, without asking them to come up with a catchy name on their own. But that means that some new people on Twitter can end up looking like bots. It gets even more complicated when you account for the fact that Twitter is a global platform, but the current handle-generating algorithm can only create names with Roman characters and numbers. A person signing up for Twitter for the first time may try to give themselves a unique name in Mandarin, Arabic, or Hebrew but have that name automatically translated into alphanumeric nonsense. 

“At that point, people look at them and say, ‘right, this must be a bot.’ It's why so many times it's accounts from China and accounts from the Middle East that people assume are bots,” he said. “And so the distilled version of this is, it's really hard to judge a book by its cover when it comes to Twitter accounts.”
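
Twitter hasn’t released its handle-generating code, but the failure mode Roth describes is easy to sketch. Assume a hypothetical scheme that keeps the ASCII letters of a display name and appends random digits; a name with no Roman characters leaves nothing to keep:

    # Hypothetical handle generator, assuming a "first name plus digits"
    # scheme limited to ASCII letters and numbers. Not Twitter's code.
    import random
    import re

    def auto_handle(display_name: str, max_len: int = 15) -> str:
        base = re.sub(r"[^A-Za-z]", "", display_name)  # drops non-Roman characters
        suffix = str(random.randint(10_000_000, 99_999_999))
        if not base:
            # A name written entirely in Arabic or Mandarin leaves no usable
            # letters, so the handle collapses into generic filler plus digits:
            # the "alphanumeric nonsense" that reads as a bot.
            base = "user"
        return base[: max_len - len(suffix)] + suffix

    print(auto_handle("Maria Lopez"))  # e.g. MariaLo48213975
    print(auto_handle("李小龙"))        # e.g. user48213975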

Second truth: Some people just Tweet a whole lot (like hundreds of Tweets a day), but it doesn’t mean they are bots.

Another telltale sign of bot behavior is prolific Tweeting and Retweeting, in the tens or hundreds of Tweets per hour. 

“A lot of times when people are talking about bots, they’re imagining bulk, unsolicited, spammy activity. Like, I set up something and it’s going to automatically Retweet every time Ariana Grande Tweets,” Roth said. But prolific activity is also, it turns out, a sign of human behavior. “When we’ve done a lot of research with people who are very high-volume engagers, what we find is some of them are spam and fake accounts. But, actually a lot of times, we run into people who are just using Twitter in unexpected ways.”

Roth’s team has looked into accounts that like hundreds of Tweets every day, only to find real people who explain they read a lot of Tweets and want to show their appreciation by liking each one. Then there are the people who have a high volume of activity because they’re homebound, their job focuses on social media, or they’re just really passionate about their interests.

There’s no normal way to use Twitter, Roth says, particularly because the cultural norms for how to use a platform like Twitter shift depending on the individual, their community, and their nationality.

“A lot of times when we see people accuse each other of being bots, what we're actually seeing are real people who are just using the platform in different ways,” Roth said.

Third truth: Real people have opinions that you will disagree with — it doesn’t mean they are part of a grand manipulation scheme.

After the 2016 election, a combination of increased news coverage and general consumer savviness led to a growing awareness that social media could be weaponized to divide people. As literacy around social media manipulation grew, Roth noticed another trend: An uptick in the number of people who accused each other of being bots when they disagreed with their message. 

“On one hand, I'm really glad that people are now talking about social media manipulation in a way where they understand what it is. They understand that it's taking place, and they understand what they need to do to defend themselves against it,” he said. “The problem is manipulation becomes shorthand for any situation in which you don't want to engage with somebody that you disagree with.” 

As an experiment, Roth, who has a Ph.D. in communications from the University of Pennsylvania, decided to run an internal data analysis that would send him a daily sample of Tweets in which one person replied to another with a variation on the phrase, “Whatever, bot.” After weeks of running the experiment, Roth counted the number of bots he found.

“I'd go one-by-one through those accounts and say, how many of these are actually automated, fake, inauthentic, or part of a Russian manipulation campaign? The number was zero,” he said. 

In this context, the “bot” problem is part of the larger issue of toxic and hostile interactions on Twitter. Roth thinks it’s more dangerous to believe that someone online who expresses a different opinion is automatically part of a misinformation campaign because it dehumanizes real people. 

“You're going to encounter things that you don't like or don't agree with. And there should be ways for you to control that and be safe online,” Roth said. “But some of that is needing to recognize that people you disagree with are still human, and you can't just dismiss their humanity by calling them a bot.”

Fourth truth: Seeing doesn’t always lead to believing.

So there are people who may look like bots on Twitter, and then there are bots on Twitter. The real question is, how much do those bots influence or manipulate our conversations? 

“A core truth of all the discourse on platform manipulation and bots is, the fact they exist does not necessarily mean that they are influencing conversations,” he said. 

It’s difficult to measure how something online impacts people’s behavior in real life. If a Tweet is abusive or violates the rules, the team tries to remove it as quickly as possible so that it receives fewer impressions. Sometimes, they suspend whole accounts. 

But more often, given the scale of spam, they may reduce a Tweet’s visibility instead: preventing it from being amplified to people who don’t follow the account, or keeping it out of top results for searches, trends, and conversations.
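
The mechanics of that visibility filtering aren’t public, but as described it might look something like this sketch; the surface names, fields, and threshold are all hypothetical:

    # Hypothetical visibility filtering: instead of deleting a borderline
    # Tweet, restrict the surfaces where it can be amplified.
    from dataclasses import dataclass

    @dataclass
    class Tweet:
        author_id: str
        spam_score: float  # e.g. from an upstream model, 0..1

    def eligible_surfaces(tweet: Tweet, viewer_follows_author: bool) -> set:
        surfaces = {"home_timeline", "search_top", "trends", "conversations"}
        if tweet.spam_score > 0.5:
            # Never surfaced in top search results or trends...
            surfaces -= {"search_top", "trends"}
            if not viewer_follows_author:
                # ...and not amplified to people who don't follow the author.
                surfaces -= {"home_timeline", "conversations"}
        return surfaces

    spammy = Tweet(author_id="123", spam_score=0.8)
    print(eligible_surfaces(spammy, viewer_follows_author=True))   # followers still see it
    print(eligible_surfaces(spammy, viewer_follows_author=False))  # set()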

“So one of the ways that we think about impact is just how many people saw the thing. If you're studying this closely, you'll say, right, people saw it, but did it matter? Did it influence them or get them to change their minds?” Roth said.

There’s been a lot of academic research in this area. A professor at the University of Pennsylvania, Kathleen Hall Jamieson, researched whether Russian interference on social media influenced voting behavior in swing states during the 2016 election. There were over 130,000 Tweets posted by Russian actors on Twitter during that time, including bots pretending to be the Tennessee GOP, a Black woman from New York named Crystal Johnson, and a woman from the South named Pamela Moore. Jamieson found that while there wasn’t evidence that the Russian disinformation campaign directly influenced how people voted, it did influence who voted. 

In short, it’s incredibly challenging to draw a straight line from what people read on the internet to what they do in real life. Roth says the idea that the media we consume is manipulating our beliefs goes back to the advent of radio. 

“In a lot of ways, when we talk about Russian bots and these types of propaganda efforts, we're saying the same thing about social media — seeing something means that you now believe it. And the answer is it's a lot more complicated than that,” Roth said.
