The Twitter Engineering Blog

Information from Twitter's engineering team about our technology, tools and events.

Distributed learning in Torch

We recently released Autograd for Torch, which greatly simplified our workflow for experimenting with complex deep learning architectures. The Twitter Cortex team is continuously investing in better tooling for manipulating our large datasets and for distributing training across the machines in our cluster.

Today we’re open-sourcing four components of our training pipeline so that the Torch and Autograd community can simplify its workflows for parallelizing training and for manipulating large, distributed datasets.
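
The released components are Lua/Torch libraries, but the general pattern they enable is worth sketching. One common approach to parallelizing training is data-parallel SGD: each worker computes a gradient on its own shard of the data, the gradients are averaged across workers (an AllReduce), and every worker applies the same update. Here is a toy, self-contained Python illustration of that idea; it stands in for the released code rather than reproducing it.

```python
# Toy data-parallel SGD: per-worker gradients on shards, averaged
# before each update. Purely illustrative; the released tools are Lua/Torch.
def grad(w, shard):
    # Gradient of mean squared error for the 1-D linear model y = w * x.
    return sum(2 * (w * x - y) * x for x, y in shard) / len(shard)

def data_parallel_sgd(shards, w=0.0, lr=0.05, steps=100):
    for _ in range(steps):
        local_grads = [grad(w, shard) for shard in shards]  # per-worker work
        avg = sum(local_grads) / len(local_grads)           # the AllReduce step
        w -= lr * avg                                       # identical update on every worker
    return w

# Two "workers", each holding a shard of points from y = 3x.
shards = [[(1.0, 3.0), (2.0, 6.0)], [(3.0, 9.0), (4.0, 12.0)]]
print(data_parallel_sgd(shards))  # converges toward 3.0
```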

Implications of use of multiple controls in an A/B test

Using a second control can be a tempting way to validate experiment results. We explore the statistics behind a second control and conclude that this approach is strictly inferior to using a single large control.
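
The heart of the argument is visible in the standard error of a difference in means: for a metric with per-user standard deviation σ, the treatment-minus-control estimate has standard error σ·√(1/nₜ + 1/n_c), so carving control traffic into two smaller buckets inflates the error of every comparison it feeds. A quick numeric sketch (the bucket sizes and σ below are illustrative assumptions, not figures from the post):

```python
import math

sigma = 1.0           # assumed per-user standard deviation of the metric
n_treatment = 10_000  # illustrative bucket sizes
n_control = 10_000

def stderr(n_t, n_c, sigma=sigma):
    """Standard error of the treatment-minus-control mean difference."""
    return sigma * math.sqrt(1.0 / n_t + 1.0 / n_c)

print(stderr(n_treatment, n_control))       # one pooled control
print(stderr(n_treatment, n_control // 2))  # each comparison against a half-size control
```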

Visually explore funnels of user activities

We describe our experimental visual analytics approach to funnel analysis, which helps us explore how users interact with our user interfaces and gain new insights for improving user engagement with Twitter.
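
As a toy illustration of the counting behind any funnel view (not the system described in the post), this short Python sketch tallies how many users reach each step of an ordered sequence of events:

```python
def funnel_counts(steps, user_events):
    """user_events maps user -> time-ordered events; returns users reaching each step."""
    counts = [0] * len(steps)
    for events in user_events.values():
        reached = 0
        for event in events:
            if reached < len(steps) and event == steps[reached]:
                reached += 1
        for i in range(reached):
            counts[i] += 1
    return counts

events = {
    "u1": ["open", "search", "tweet"],
    "u2": ["open", "search"],
    "u3": ["open"],
}
print(funnel_counts(["open", "search", "tweet"], events))  # [3, 2, 1]
```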

Detecting and avoiding bucket imbalance in A/B tests

Some simple techniques to detect potentially biased implementations of A/B tests.
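
One widely used check, which may or may not be among the techniques in the post, is a chi-square goodness-of-fit test on the observed bucket sizes: if the split was designed to be 50/50, a tiny p-value says the bucketing itself deserves scrutiny before any metric is read. A minimal sketch with made-up counts:

```python
from scipy.stats import chisquare

observed = [101_500, 98_500]         # hypothetical counts for an intended 50/50 split
expected = [sum(observed) / 2] * 2   # what the design says the counts should be

stat, p_value = chisquare(observed, f_exp=expected)
print(f"chi-square = {stat:.1f}, p = {p_value:.2g}")
if p_value < 0.001:
    print("bucket sizes deviate from the design: inspect the bucketing code")
```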

Sunsetting SHA-1

Implementing SHA-256 where we can, and addressing older certificates as needed.
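
For readers auditing their own infrastructure, a certificate's signature hash is easy to inspect programmatically. A minimal sketch using the pyca/cryptography package (the file path is illustrative):

```python
from cryptography import x509

with open("cert.pem", "rb") as f:  # illustrative path to a PEM certificate
    cert = x509.load_pem_x509_certificate(f.read())

# "sha1" here flags a certificate still signed with SHA-1; "sha256" is the goal.
print(cert.signature_hash_algorithm.name)
```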

How we break things at Twitter: failure testing

The design, architecture and implementation of Twitter’s failure testing framework.

Finatra 2.0: the fast, testable Scala services framework that powers Twitter

Introducing Finatra 2.0: a high-performance, scalable, testable framework powering production services at Twitter.

Behind the scenes of enhancements to MoPub data

We’re announcing major infrastructure improvements to the MoPub platform.

Evaluating language identification performance

We language-annotated nearly 200k Tweets from 2014, spanning 68 languages, taking care to select them in a way that allows recall and precision to be measured well, so that we can evaluate and improve our language identification performance. You can download all the annotated Tweets.
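
Given such a labeled set, per-language precision and recall reduce to simple counting: precision is the fraction of Tweets predicted as a language that truly are in it, and recall is the fraction of a language's Tweets that the identifier catches. A minimal sketch (the (true, predicted) pair format is an assumption, not the download's actual schema):

```python
from collections import Counter

def per_language_scores(pairs):
    """pairs: iterable of (true_language, predicted_language) label pairs."""
    true_counts, pred_counts, hits = Counter(), Counter(), Counter()
    for true_lang, pred_lang in pairs:
        true_counts[true_lang] += 1
        pred_counts[pred_lang] += 1
        if true_lang == pred_lang:
            hits[true_lang] += 1
    return {
        lang: {
            "precision": hits[lang] / pred_counts[lang] if pred_counts[lang] else 0.0,
            "recall": hits[lang] / true_counts[lang],
        }
        for lang in true_counts
    }

print(per_language_scores([("en", "en"), ("fr", "en"), ("fr", "fr")]))
```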

Autograd for Torch

Simplifying neural nets with autograd for Torch.
