
Partnering with researchers at UC Berkeley to improve the use of ML

Tuesday, 29 January 2019

At Twitter, our purpose is to serve the public conversation around the world. With world leaders, journalists, and celebrities on the platform, we serve conversations that are influential and leave a lasting impact on society. It is important to us to help increase the collective health, openness, and civility of public conversation.

Machine learning plays a key role in powering Twitter. From onboarding new users to preparing their timelines, and everything in between, a multitude of ML models help power the experience. Making Twitter healthier therefore requires making the way we practice ML more fair, accountable, and transparent.

Studying the societal impact of machine learning is a growing area of research in which Twitter has been participating. We are a proud sponsor of the ACM FAT* 2019 conference. But this is just a start, and there is a lot more work ahead of us from both a research and a practical standpoint. We owe it to our users and to society at large to improve in this area.

Today we are proud to share a significant step in this direction: we are partnering with researchers at UC Berkeley to establish a new research initiative focused on studying and improving the performance of ML in social systems such as Twitter. The initiative will be led by Professor Moritz Hardt and Professor Ben Recht.

The team at UC Berkeley will collaborate closely with a corresponding team inside Twitter. As a company, Twitter brings data and real-world insights to the table; by partnering with UC Berkeley, we can create a research program with the right mix of fundamental and applied components to make a real practical impact across the industry.

Today, the consequences of exposing algorithmic decisions and machine learning models to hundreds of millions of people are poorly understood. Even less is known about how these algorithms interact with social dynamics: people may change their behavior in response to what the algorithms recommend to them, and as a result of this shift in behavior the algorithm itself may change, creating a potentially self-reinforcing feedback loop. We also know that individuals or groups will seek to game or exploit our algorithms, and safeguarding against this is essential. A toy illustration of such a loop is sketched below.
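To make the feedback-loop dynamic concrete, here is a minimal toy simulation in Python. It is a sketch under assumed dynamics, not a description of any real Twitter system: the "recommend the currently most popular topic" rule, the drift rate, and all of the counts below are hypothetical choices made purely for illustration.

```python
# Hypothetical toy model of a recommender feedback loop. All parameters
# are illustrative assumptions, not values from any real system.

NUM_TOPICS = 5
NUM_USERS = 1000
ROUNDS = 15
DRIFT_RATE = 0.1  # fraction of each topic's audience that drifts per round

# Engagement starts spread almost evenly across topics; topic 0 gets a
# one-user head start so the recommender has something to latch onto.
engagement = [NUM_USERS / NUM_TOPICS] * NUM_TOPICS
engagement[0] += 1

for step in range(ROUNDS):
    # The recommender surfaces whatever is currently most popular...
    recommended = max(range(NUM_TOPICS), key=lambda t: engagement[t])
    # ...and each round some users shift toward the recommended topic,
    # making it even more likely to be recommended next round.
    for t in range(NUM_TOPICS):
        if t != recommended:
            drift = engagement[t] * DRIFT_RATE
            engagement[t] -= drift
            engagement[recommended] += drift
    share = engagement[recommended] / sum(engagement)
    print(f"round {step:2d}: topic {recommended} holds {share:.0%} of engagement")
```

Even from a near-symmetric start, the one-user advantage snowballs: the recommended topic's share of engagement climbs every round, which is exactly the self-reinforcing dynamic described above.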

By bringing together the academic expertise of UC Berkeley with our industry perspective, we are looking to do fundamental work in this nascent space and apply it to improve Twitter.

 
