The Twitter Engineering Blog

Information from Twitter's engineering team about our technology, tools and events.

Engineering blog posts from 2010

Hack Week

Here at Twitter, we make things. Over the last five weeks, we’ve launched the new Twitter and made significant changes to the technology behind it, deployed a new backend for search, and refined the algorithm for trending topics to make them more real-time.


Twitter's New Search Architecture

If we have done a good job, most of you shouldn’t have noticed that we launched a new backend for search during the last few weeks! One of our main goals, but also one of our biggest challenges, was a smooth switch from the old architecture to the new one, without any downtime or inconsistencies in search results. Read on to find out what we changed and why.


Tool Legit

Hi, I’m @stirman, and I’m a tool.

Well, I build tools, along with @jacobthornton, @gbuyitjames and @sm, the Internal Tools team here at Twitter.


The Tech Behind the New Twitter

The redesign presented an opportunity to make bold changes to the underlying technology of the website. With this in mind, we began implementing a new architecture almost entirely in JavaScript. We put special emphasis on ease of development, extensibility, and performance. Building the application on the client forced us to come up with unique solutions to bring our product to life, a few of which we’d like to highlight in this overview.
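The excerpt above describes building the application on the client in JavaScript. As a minimal sketch of that general pattern (hypothetical function and field names, not Twitter’s actual code): the server hands the browser raw JSON, and client-side templates turn it into markup.

```javascript
// Sketch of client-side rendering: the API returns plain JSON records,
// and the browser builds the HTML itself instead of receiving it
// pre-rendered from the server. Names here are illustrative only.
function escapeHtml(text) {
  // User-supplied text must be escaped before interpolation into markup.
  return String(text)
    .replace(/&/g, "&amp;")
    .replace(/</g, "&lt;")
    .replace(/>/g, "&gt;");
}

function renderTweet(tweet) {
  return `<li class="tweet"><b>@${tweet.user}</b> ${escapeHtml(tweet.text)}</li>`;
}

function renderTimeline(tweets) {
  // Map each JSON record to a list item and join into one fragment,
  // which client code would then insert into the page.
  return `<ul class="timeline">${tweets.map(renderTweet).join("")}</ul>`;
}
```

Rendering on the client like this moves templating cost off the servers, at the price of having to handle escaping and state management in the browser.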


My Awesome Summer Internship at Twitter

On my second day at Twitter, I was writing documentation for the systems I was going to work on (to understand them better), and I realized that there was a method in the service’s API that should be exposed but wasn’t. I pointed this out to my engineering mentor, Steve Jenson (@stevej). I expected him to ignore me, or promise to fix it later. Instead, he said, “Oh, you’re right. What are you waiting for?


Twitter & Performance: An update

On Monday, a fault in the database that stores Twitter user records caused problems on both the website and our API. The short, non-technical explanation is that a mistake led to some problems that we were able to fix without losing any data.


Room to grow: a Twitter data center

Later this year, Twitter is moving our technical operations infrastructure into a new, custom-built data center in the Salt Lake City area. We’re excited about the move for several reasons.


Murder: Fast datacenter code deploys using BitTorrent

Twitter has thousands of servers. What makes having boatloads of servers particularly annoying though is that we need to quickly get multiple iterations of code and binaries onto all of them on a regular basis. We used to have a git-based deploy system where we’d just instruct our front-ends to download the latest code from our main git machine and serve that. Unfortunately, once we got past a few hundred servers, things got ugly.
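The scaling problem the excerpt describes can be seen with a rough back-of-the-envelope model (assumed numbers and simplified math, not Twitter’s measurements): with a single origin, total bytes served grow linearly with the fleet, while in a BitTorrent-style swarm every host that has the payload re-seeds it, so the number of sources roughly doubles each round.

```javascript
// Naive model: every server pulls the full payload from one origin,
// so deploy time grows linearly with fleet size.
function centralDeploySeconds(servers, payloadMb, originBandwidthMbps) {
  const totalMegabits = servers * payloadMb * 8;
  return totalMegabits / originBandwidthMbps;
}

// Swarm model: sources roughly double each round, so a fleet of n hosts
// is covered in about log2(n) transfer rounds.
function p2pDeploySeconds(servers, payloadMb, perHostBandwidthMbps) {
  const rounds = servers > 1 ? Math.ceil(Math.log2(servers)) : 1;
  return rounds * ((payloadMb * 8) / perHostBandwidthMbps);
}
```

With illustrative numbers (1,000 servers, a 200 MB payload, 1 Gbps links), the central model takes on the order of half an hour while the swarm model finishes in seconds, which is the intuition behind using BitTorrent for deploys.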


Cassandra at Twitter Today

In the past year, we’ve been working with the Apache Cassandra open source distributed database. Much of our work there has been out in the open, since we’re big proponents of open source software. Unfortunately, we’ve lately been less involved in the community because of more pressing concerns, which has created some misunderstandings.


A Perfect Storm.....of Whales

Since Saturday, Twitter has experienced several incidents of poor site performance and a high number of errors due to one of our internal sub-networks being over capacity. We’re working hard to address the core issues causing these problems—more on that below—but in the interest of the open exchange of information, we wanted to pull back the curtain and give you deeper insight into what happened and how we’re working to address this week’s poor site performance.