Twitter’s new reporting process centers on a human-first design

Tuesday, 7 December 2021

If you’ve ever reported a Tweet, you may have found the process frustrating, and it’s typical of reporting flows across the web. First you report an item, then a list of violations pops up. Which one is it? Is it suspicious? Abusive? Both? You tick one of the boxes.

Twitter receives millions of reports covering everything from misinformation and spam to harassment and hate speech. Each one is a person’s way of telling Twitter, hey, this isn’t right, or I don’t feel safe. Based on user feedback, research, and an understanding that the existing reporting process wasn’t making enough people feel safe or heard, the company decided to do something about it.

That’s why Twitter is testing an overhauled reporting process that will make it easier for people to alert the company to harmful behavior.

The new approach, which is currently being tested with a small group in the US, simplifies the reporting process. Rather than putting the burden on the individual to interpret which rule has been violated, it simply asks them what happened.

This method is called symptoms-first: Twitter starts by asking the person what’s going on. Here’s the analogy the team uses: say you’re in the midst of a medical emergency. If you break your leg, the doctor doesn’t ask, is your leg broken? They ask, where does it hurt? The idea is to find out what’s happening first, rather than asking you to diagnose the issue yourself.
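
To make the contrast concrete, here is a minimal sketch of what a symptoms-first intake might look like. All of the names (SymptomReport, suggestCategories, and so on) are hypothetical, invented for illustration; none of this reflects Twitter’s actual code.

```typescript
// Hypothetical sketch of a symptoms-first intake; invented names,
// not Twitter's implementation.

// Category-first: the reporter must diagnose the violation up front.
type ViolationCategory = "spam" | "harassment" | "hate_speech" | "misinformation";

// Symptoms-first: the reporter describes what happened in their own words,
// and classification happens later in the pipeline.
interface SymptomReport {
  tweetId: string;
  description: string;                 // the reporter's own words: "where does it hurt?"
  reporterRole: "target" | "witness";  // experienced it, or saw it happen
  submittedAt: Date;
}

// A downstream triage step (human review, models, or both) maps the
// description to candidate violations instead of asking the reporter to.
function suggestCategories(report: SymptomReport): ViolationCategory[] {
  // Classification logic would live here; an empty list means
  // "nothing matched cleanly," which is itself useful signal.
  return [];
}
```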

“In moments of urgency, people need to be heard and feel supported. Asking them to open the medical dictionary and saying, ‘point to the one thing that's your problem’ is something people aren't going to do,” said Brian Waismeyer, a data scientist on the health user experience team that spearheaded this new process. “If they're walking in to get help, what they're going to do well is describe what is happening to them in the moment.”

When people are motivated to report something, chances are they’ve just experienced or witnessed something unsettling, which makes it a difficult moment to ask them to figure out exactly which policy might have been violated. In some cases the reported Tweet doesn’t exactly break a rule, but bends it.

“What can be frustrating and complex about reporting is that we enforce based on terms of service violations as defined by the Twitter Rules,” said Renna Al-Yassini, Senior UX Manager on the team. “The vast majority of what people are reporting on fall within a much larger gray spectrum that don't meet the specific criteria of Twitter violations, but they're still reporting what they are experiencing as deeply problematic and highly upsetting.”

By refocusing on the experience of the person reporting the Tweet, Twitter hopes to improve the quality of the reports they get. The more first-hand information they can gather about how people are experiencing certain content, the more precise Twitter can be when it comes to addressing it or ultimately removing it. This rich pool of information, even if the Tweets in question don't technically violate any rules, still gives Twitter valuable input that they can use to improve people’s experience on the platform. 

What makes this approach so unique

Encouraging people to share what might be upsetting them is a skill that Lena Emara, a product designer on the team, has been practicing throughout her career; she worked as a clinical therapist before joining Twitter. During the design phase of this project, she collaborated with her design teammates, researchers, and writers, whose thoughtful, detailed language helps acknowledge the symptoms of the problem people are witnessing or may be experiencing without retraumatizing them, two concepts championed by colleagues Ruben Gomez, lead UX Researcher, and Danielle Small, lead Content Designer on the team.

“As a clinical therapist, I had to think, if I'm speaking to a patient who has experienced psychological trauma, how am I going to converse with them? How am I going to ask them questions that will enable them to make sense of and share their experience?” said Emara.

A typical writing directive for an online reporting feature would be: how do you get the customer from point A to point B? So this shift to a more conversational approach is a huge departure, said Small. “It's very much a conversational, personal feel. We want it to feel like we’re intentional about using very digestible language,” they said. “It’s really about meeting people where they are.”

At every stage of Twitter's research and design process, the team intentionally included people from marginalized communities — women, people of color, and people from the LGBTQ+ community, including those who identify as trans or nonbinary. The theory is, if you design for the outliers, you actually solve for the majority. In this case, Twitter designed with them in mind because they also happen to be some of the platform's most engaged users.    

"We really got to empathize and learn what their experiences were when filling out a report," said Gomez. "Those insights were important for our cross-functional teams to understand what they're experiencing."

The cross-functional team itself is also made up almost entirely of people from these marginalized communities, who reflect some of the experiences of the people they're tasked with serving.

Twitter will be able to use the feedback it gains from this new process to improve it and help more people. The feedback will help the company better understand what’s happening on the platform, and even what’s happening in the wider world, and connect those learnings to strengthen its policies.

“This is why it's so revolutionary — you have all these opportunities to connect with people individually,” said Emara. “This is how we're going to understand how you define your experiences on Twitter.”

What the new process looks like in action

Once the person reporting a violation describes what happened, Twitter presents them with the Terms of Service violation it thinks might have occurred and asks: Is that right? If not, the person can say so, which helps signal to Twitter that there are still gaps in the reporting system.

All the while, Twitter is gathering feedback and compiling learnings from this chain of events that will help it fine-tune the process and connect symptoms to actual policies. Ultimately, this helps Twitter take appropriate action.
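
As a rough illustration of that confirm-and-learn loop, here is a hedged sketch in the same spirit as the earlier one. Again, the names (TriageResult, confirmWithReporter, and so on) are invented for illustration and don’t describe Twitter’s real systems.

```typescript
// Hypothetical sketch of the "Is that right?" step; invented names,
// not Twitter's real API.

type ViolationCategory = "spam" | "harassment" | "hate_speech" | "misinformation";

interface TriageResult {
  suggestedViolation: ViolationCategory | null; // null: no rule matched cleanly
}

interface ConfirmationOutcome {
  confirmedByReporter: boolean; // the reporter's answer to "Is that right?"
  flaggedAsGap: boolean;        // disagreement signals a gap in the flow
}

function confirmWithReporter(
  result: TriageResult,
  reporterAgrees: boolean
): ConfirmationOutcome {
  // If the reporter rejects the suggestion, or no rule matched at all,
  // record it as feedback that connects symptoms to (possibly missing) policies.
  const flaggedAsGap = !reporterAgrees || result.suggestedViolation === null;
  return { confirmedByReporter: reporterAgrees, flaggedAsGap };
}
```

The point of the sketch is simply that a “no, that’s not it” answer is treated as data rather than a dead end.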

“This report essentially triggers a review of the content. If Twitter determines that the content is violating and our rules dictate that the content be removed, that will happen,” said Fay Johnson, Director of Product Management on the Health team. “We'll do some additional investigation to see if there are other things that we need to take down based on what was reported, whether it be the content itself or an account.”  

Come next year, as the new process rolls out to a wider audience, Twitter will be working to improve its communication process, ensuring that it closes the loop with those who take the time to report. It’s a worthwhile investment: by asking people to describe what’s happening to them, rather than just ticking a box, Twitter gains rich feedback that helps it identify concerns that might not otherwise have been on its radar.

“It helps us address unknown unknowns,” said Johnson. She and fellow product managers Anastasia Konecky and Jarrod Doherty are leading the rollout for the new process. 

“Obviously we want to have rules that help keep everyone safe while balancing freedom of speech and promoting the public conversation. We also want to make sure that if there are new issues that are emerging — ones that we may not have rules for yet — there is a method for us to learn about them,” said Johnson. “The intention of these reporting flows is to empower the customer, give Twitter actionable information that we can use to improve the product and our experiences, and also improve our trust and safety process overall.”
