
The secret world of good bots

Friday, 29 October 2021

What comes to mind when you think of a bot on Twitter? You probably picture a spammy account sliding into your DMs or a Russian troll farm pumping out fake news and conspiracy theories. Thanks in large part to misinformation campaigns waged on social media during the 2016 US election, a lot of people associate bots with this type of nefarious activity. They have good reason to: Twitter detects roughly 25M accounts per month suspected of being automated or spam accounts. In fact, in the second half of 2020, it deployed 143M anti-spam challenges to accounts, which helped bring spam reports — those coming from people who flag Tweets as spam — down by about 18% from the first half of the year.

Twitter has an entire enforcement team dedicated to tracking down these accounts and banning them. But it’s not as simple as blanket-banning all automated accounts.

Bots actually come in all shapes and sizes, and chances are, you’re already following one that you like. Maybe it’s a COVID-19 bot that alerts you to vaccine availability in your area, an earthquake bot that flags tremors in your region, or an art bot that delivers a colorful dose of delight to your timeline. How these bots are represented on Twitter is almost as important as what they do for their followers.

That’s where Oliver Stewart comes in. As the lead researcher on Twitter’s Identity and Profiles team, he wanted to understand how interpersonal trust is developed on Twitter and how automated accounts could affect that trust. 

“There are many bots on Twitter that do good things and that are helpful to people,” said Stewart. “We wanted to understand more about what those look like so we could help people identify them and feel more comfortable in their understanding of the space they’re in.”  

Stewart’s team found that people consider content more trustworthy when they know more about who’s sharing it, starting with whether that account is human or automated. To help address the issue of bots, Twitter recently rolled out new labels that identify bots with an “automated” designation in their profile, an icon of a robot, and a link to the Twitter handle of the person who created the bot. “Not only are we just labeling these bots, we're also saying: this is the owner, and this is why they're here,” said Stewart. “Based on the preliminary research that we have, we hypothesize that that's going to create an environment where you can trust those bots a lot more.”

So why go to the trouble of labeling bots, instead of banning them all from Twitter?

“It's not inherently wrong to have an automated account on Twitter; obviously automated accounts don't have to be terrible. There was a vaccine bot that was really popular in New York,” said Dante Clemons, the senior product manager tasked with creating and testing these labels. She was referring to the Turbovax bot that Tweeted vaccine appointments to its 160,000 followers. “I focused on those accounts because these are the ones that can help us all reframe how we think about bots.” 


The labels themselves don’t call bots good or bad; they just signal to people that an account is automated. “If it's compliant with Twitter's rules, we're OK with it being on the platform. For the ones that are noncompliant, we're already actively doing the work to remove those off Twitter,” she said.

The secret life of a bot developer

Clemons decided to start the experiment small, working with 10 developers with whom Twitter had established relationships and who volunteered to label their 532 bot accounts. Most of those bots—around 500 of them—were created by artist and bot developer Andrei Taraschuk.

A software engineer by day in Boulder, Colorado, Taraschuk is a traditionally trained painter from a family of artists who began creating art bots to share his love of fine art. His art bots now have over 4 million followers, including Twitter CEO Jack Dorsey. Taraschuk and his developer partner Cody Braun have created bots for museums like the Metropolitan Museum of Art, the Art Institute of Chicago, and the Guggenheim, to help them share their collections—all on their own time and with their own funding.

“We have this expectation that humans are more authentic, that interacting with a human is better. But the other side of that is that when it comes to art, humans introduce their personal biases,” said Taraschuk. “Bots are actually better in so many use cases than humans. They never forget, they never tire of sharing. They remember exactly what they shared and what they didn't.”

In a given month, the art bots Tweet nearly 250,000 works. In September, they received around 3 million likes and 10,000 comments. No person could respond to that many comments, but a bot can. 

Taraschuk says having labels helps manage people’s expectations when interacting with an automated account, particularly one that takes on the identity of a person. “Like when you follow Monet, I didn't want people to think that, you know, they're following Monet.”

Taraschuk first came into contact with the developer outreach team at Twitter when all his art bots were deleted overnight, around the time of the 2016 election, a period when automated accounts were largely associated with election interference and disinformation. He went on to become one of the earliest advocates for bots, speaking to teams across Twitter to educate them about bots’ potential.

“This started a long journey of me talking to Twitter and pushing this idea like, ‘Hey, you know, even though I'm Russian, and Russians have been known for making like these nefarious bots, these bots are actually bringing culture to Twitter,’” he said.


Other developers created bots with the purpose of making Twitter a more accessible platform. Hannah Kolbeck, creator of the Alt Text Reminder and Alt Text Crew bots, is also among the first round of accounts to receive bot labels. A software engineer in Portland, Oregon, Kolbeck got the idea for her bot from her local activist community on Twitter. Alt Text Reminder DMs its followers when they Tweet an image without a description, reminding them to add alt text to make the Tweet accessible. (Alt text is a written description of an image that a screen reader can read aloud, making images accessible to people who are blind or visually impaired.)
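Curious how a bot like this might work? Kolbeck’s code isn’t published in this story, but here is a minimal sketch of the core check, assuming the Twitter API v2 (which exposes an alt_text field on media objects) accessed through the tweepy Python library. The bearer token and follower ID below are placeholders, and the real bot’s logic is certainly more involved.

```python
# A minimal sketch (not Kolbeck's actual code) of how a bot like
# Alt Text Reminder could spot Tweets whose images lack alt text,
# using the Twitter API v2 via the tweepy library.
import tweepy

client = tweepy.Client(bearer_token="YOUR_BEARER_TOKEN")  # placeholder token
FOLLOWER_ID = 123456789  # hypothetical ID of a follower who opted in

# Fetch recent Tweets, expanding attached media and requesting
# each media object's alt_text field.
response = client.get_users_tweets(
    id=FOLLOWER_ID,
    max_results=10,
    expansions=["attachments.media_keys"],
    media_fields=["alt_text", "type"],
)

# Index the returned media objects by media_key for easy lookup.
media_by_key = {m.media_key: m for m in (response.includes or {}).get("media", [])}

for tweet in response.data or []:
    keys = (tweet.attachments or {}).get("media_keys", [])
    # Flag photo attachments that have no alt text.
    missing = [k for k in keys
               if media_by_key[k].type == "photo" and not media_by_key[k].alt_text]
    if missing:
        # The real bot sends the author a DM; printing stands in here.
        print(f"Tweet {tweet.id} has {len(missing)} image(s) without alt text.")
```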

Kolbeck also programmed the bot to publicly Tweet at The Oregonian, Oregon’s Pulitzer Prize-winning newspaper, to remind them to put alt text on their images. She says that The Oregonian now adds alt text to at least 75% of its Tweets.

“If you're a business and you're not accessible, that's a problem,” said Kolbeck. “Folks who couldn't see the images were being excluded from a major part of public life in Oregon.”

Kolbeck, like Taraschuk, welcomes the bot labels because people have confused her bot with a real person. “People would respond to it with, ‘How are you responding to every post? Don't you have a life?’ And it's like, no. Literally no.”

Balancing trust and safety

On the plus side, Kolbeck thinks that an explicit marker on bots will help people trust her bots a little more. But she also sees a potential downside. She points to Editing the Gray Lady, a bot that Tweets every time The New York Times edits a headline or part of an article. While she doesn’t know who created the bot, she worries that publicly tying a high-profile account like that to its developer could expose them to harassment.

That’s one of the balancing acts Twitter teams have to perform when negotiating the tension between verification and safety. Stewart, who started his career at Twitter researching whether to require customers to use their real identity, believes that allowing anonymity is one of the platform’s strengths. 

“We don't have an identity system that's based on knowing that I'm a straight, white guy called Oliver Stewart and I live in Colorado. For many people, particularly people from vulnerable or persecuted communities, minority groups, or activists and journalists, sharing that kind of information means they can't safely express themselves,” he said. “So instead, we allow people to present a truly authentic version of themselves that doesn't rely on real world identifiers like race, name, location, etc.”

Stewart says labeling bots ties into the larger goal of supporting and making space for a spectrum of voices on the platform. “So how does verification impact public conversation? What are the voices that people want to hear from, and how can we make sure that those forces are balanced and equitable?” he said.

But Stewart quickly clarifies that bot labels are not the same thing as the blue verified check marks, nor are they endorsements. “We're not trying to say ‘this bot is good’ in a quality sense, because that is really subjective. We’re just trying to say that this is an automated account that we don't think is doing any harm, and that the owner wants to be honest with you—let you know that it's automated,” he said. “No one should be going around telling you who to trust and who not to trust. Our goal is to give people the tools to make those decisions for themselves.”

The response from developers in this initial phase has been quite positive overall, said Clemons, who noted that more Automated Account labels will roll out early next year. Going forward, any developer who wants to create a bot will self-identify it as an automated account and link to their own Twitter handle in its profile. “Ultimately you get at the bad bots by solving for the good ones,” she said. “And so that's really the long-tail strategy here.”
