Interviews

Twitter's role in supporting the next generation of social citizens

Tuesday, 21 September 2021

When college senior Zoe Kurtz first joined Twitter in 2013, she used the app to share pictures of her adventures as a North Carolina teen, send birthday wishes to her friends, and generally express herself as the cheerful, friendly young person she still is today. 

But these days Zoe, who is currently finishing her final year at Elon University as a strategic communications major, is largely silent on Twitter. 

“That shifted once I got to college, I think due to the increase of cancel culture and fear of saying the wrong thing,” she told us, even though she still checks Twitter every day for the latest news and trending topics. 

As an intern with The Social Institute, an education company that teaches young people how to navigate a social media world, she has seen first-hand how impactful these lessons can be in helping kids exercise their voice in authentic, safe, and productive ways.

Given Zoe's interest in social media's role in young people's lives, we decided to pair her with a product manager who is developing the tools and products that are helping to create more positive experiences for everyone on the platform. We introduced her to Jarrod Doherty, the product lead for the health user experience at Twitter. He works with the team that is building the infrastructure to develop better health experiences. 

One of the big projects his team is working on is called Safety Mode, an opt-in experience that lifts the burden off of people when dealing with abuse. Below, he explains more about how Safety Mode will work to protect customers from abusive or spammy replies and large-scale attacks, known as dog piling. Twitter rolled out its first test at the end of August with select partners who have experienced this type of harassment, including female journalists and those from marginalized communities.

Without further ado, Zoe and Jarrod, take it away. (This Q&A has been edited for length and clarity.)

Zoe: Thank you so much again for giving me the opportunity to interview you. I'm really excited. Can you tell me a little bit about your day-to-day life as a Twitter health product manager?

Jarrod: Every day can be pretty different depending on the state of the projects we're working on and what phase we're in: whether we're in a discovery and iteration phase, or we're actually building a specific feature solution. 

One of the big things that I do on a weekly basis is build relationships across the company with a bunch of other teams, because health is a space that really has to touch all areas of the product. I'm constantly talking with other teams to share what we're working on, the capabilities that we're building, and how they can be leveraged to drive positive outcomes across the entire product.

Zoe: Could you tell me a bit about Safety Mode? Who is it designed for and have there been any early successes that you could tell me about?

Jarrod: Safety Mode is a project that started a while back based on feedback from Twitter customers. They appreciated having safety controls, like blocking, to help manage their experience. But what we were hearing is that when the abuse or spam really scaled up or they were given unexpected attention — maybe from a user with a bigger following or some sort of coordinated campaign to focus on them — it was really hard to scale mute and block. 

So to complement our current safety tools, we wanted to design a more proactive approach that could help filter out some of the worst and shield you from that abuse, so you could continue to use the platform even when receiving unwanted attention. That's a behavior that some people call dog piling. Safety Mode is a way that we can address that directly. 

Zoe: I'm sure that will be useful, especially since last year, when more people have been on their phones. That brings me to my next question: has reporting content gone up or down over time? And what does that process look like when communicating with violators?

Jarrod: Reporting definitely ebbs and flows a bit with what's going on in the world, what we call exogenous factors. We have a transparency report that we update twice a year that provides the number of reports we receive and the actions we take against accounts that violate our Terms of Service. What we've seen over the last two years is that reporting has definitely increased. But now we're seeing it plateau and start to go back down due to the natural ebb and flow of what's going on in the world. 

When we confirm that something's going on, our approach to communicating with violators is to first educate them about the rules and norms of the platform. We don't want to assume that people are purposefully breaking the rules; a lot of user research shows that a majority of violations come from first-time offenders, who may not have even known that Twitter has rules or didn't understand where the line is. So we inform them of the norms and the rules first, and then keep an eye on them and escalate if they violate again.


Zoe: I used to be really active on Twitter — I got my account in 2013, and I used to post all the time, especially in middle school and high school. Now I use it mainly for reading the news and staying up to date on social trends. How does Twitter envision engagement with young people? 

Jarrod: Twitter's mission has always revolved around what's happening right now. But we definitely want to focus on ways to bring people together to discuss common shared topics and in a pro-social way. Spaces has been a good example of how we can bring people together to discuss these topics in a very here-and-now way. 

We want to continue to look into different ways to encourage the formation of those types of communities around a shared topic. I don't work directly on those features, but the health team touches all aspects of the product, and we definitely help review some of those features and ensure that they're being designed with safety in mind.

Zoe: What do you think are some ways younger generations who are growing up in a very digital world can become good digital citizens on Twitter?

Jarrod: One of the most important things is to remember to have empathy. Realize that there's always someone else on the other side of the screen before you hit send. Then make sure you're supporting those who appear to need help or who may have gone quiet. And don't be afraid to use safety controls like block and mute, as well as the upcoming Safety Mode, to customize your experience on the platform when you're not having the experience you want. 

I also love that Twitter supports programs like Digital4Good through I Can Help. It's a program that encourages and celebrates young people who've leveraged social media or technology to have a pro-social impact. It's something that I participated in as a mentor a couple of times, and it's always amazing to see some of the new and innovative ways young people are leveraging social media to have really good community impacts.

Zoe: That's awesome. For me, I love the community aspect of Twitter. That brings me to the next question: cancel culture is at an all-time high, and students are constantly seeing celebrities and others online being ostracized for past remarks or saying the wrong thing. What is Twitter doing to help students feel empowered to use their voice on Twitter without feeling the pressure of cancel culture and fear of judgment from others?

Jarrod: That's a really tough question. Twitter is meant to be a place for public conversation. And so, we want to ensure that our Twitter customers are enabled to share valid criticisms or demand accountability when there is some harm that has occurred. But we also want to do our best to protect people when that crosses the line into harassment or dogpiling. 

From a platform perspective, we're looking at giving people the proper tools: much more control over your old Tweets, potentially archiving them; better controls over who can see or interact with what you post, via audience controls or reply settings; more control over how you're mentioned by other users, potentially even being able to unmention yourself from conversations you don't want to be roped into; and, of course, many more ways for individuals to customize their experience on the platform through tools like Safety Mode.

Zoe: That's super interesting. I like the ability to be unmentioned. So my last question is, in a world where there is so much fake news and opinion online, what is Twitter doing to make users feel as though they're getting accurate information, while also not limiting free speech?

Jarrod: Twitter has recently rolled out a new way that we label and add context to posts that we feel may be sharing misinformation. We're also working on a more community-driven feature called Birdwatch, which allows people to identify potential misinformation at a community level and write notes that provide more context, or links to studies or other information that may show the post is misleading or incorrect. 

We believe that approach has the potential to enable a faster response because it's community-driven. You have knowledgeable people in these areas who can weigh in, build their reputation as trusted members of the community, and help address misinformation directly, and perhaps build trust with audiences who don't trust any one organization or company to make determinations on what is true or false. It's a way to close that trust gap and increase the speed and scale of contextualization.

Zoe: I actually have another question. Are there any specific Safety Mode features that may be more relevant to those of younger ages on Twitter?

Jarrod: One of the interesting bits of feedback we've heard from younger people on Twitter (those who may still be in high school or college) is some reluctance to use the block mechanism to control their experience, for fear that blocking someone will lead to repercussions or additional harassment from that person's followers. So that's one of the core protections we really wanted to address in Safety Mode. We've even considered the trade-offs between having Safety Mode apply an automatic block or an automatic mute. What is the value of the block being discoverable by the violator? Our current hypothesis is that if we just do auto-muting, there's no feedback that goes back to the person who made the original comment to help them learn and grow, and to understand what the norms of the platform are.

The trade-off, though, is that it becomes obvious when you're in Safety Mode; people can tell. We're mitigating that risk with a new profile experience: when you view the account of someone whose past interaction with you led to an auto-block, it's very clear that Twitter, not that person, blocked you, so you can understand it wasn't a personal decision. It was Twitter's perspective on what was happening, not that person's. So if you're going to be mad at anyone, be mad at Twitter. 

Zoe: Yeah. I like that.

 

 
