Friday, October 22, 2010 | By Britt Selvitelle (@bs) [21:48 UTC]
Here at Twitter, we make things. Over the last five weeks, we’ve launched the new Twitter, made significant changes to the technology behind Twitter.com, deployed a new backend for search, and refined the algorithm for trending topics to make them more real-time.
Wednesday, October 6, 2010 | By Michael Busch (@michibusch) [21:24 UTC]
If we have done a good job, then most of you shouldn’t have noticed that we launched a new backend for search on twitter.com during the last few weeks! One of our main goals, and also one of our biggest challenges, was a smooth switch from the old architecture to the new one, without any downtime or inconsistencies in search results. Read on to find out what we changed and why.
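The post goes on to cover the details. As a rough illustration of one common way to cut over to a new search backend without downtime (a shadow-traffic comparison, which is not necessarily the exact mechanism described in the post, and which uses entirely hypothetical function names), the sketch below keeps serving users from the old backend while mirroring every query to the new one, logging any result mismatches, and shifting only a small slice of real traffic at a time.

```python
import logging
import random

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("search-migration")

# Hypothetical stand-ins for the old and new search backends.
def old_backend_search(query):
    return ["result-a", "result-b"]

def new_backend_search(query):
    return ["result-a", "result-b"]

# Fraction of live traffic served by the new backend; ramped up gradually.
NEW_BACKEND_TRAFFIC = 0.05

def search(query):
    """Serve most traffic from the old backend while shadow-testing the new one.

    Every query is also sent to the new backend so that differences in results
    can be logged and investigated before the new backend takes all traffic.
    """
    old_results = old_backend_search(query)
    new_results = new_backend_search(query)

    if old_results != new_results:
        log.warning("result mismatch for query %r", query)

    # Gradually shift real user traffic once mismatch rates look acceptable.
    if random.random() < NEW_BACKEND_TRAFFIC:
        return new_results
    return old_results

if __name__ == "__main__":
    print(search("twitter engineering"))
```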
Monday, September 20, 2010 | By Britt Selvitelle (@bs) [20:30 UTC]
Friday, August 27, 2010 | By Jean-Paul Cozzatti (@jeanpaul) [22:55 UTC]
On my second day at Twitter, I was writing documentation for the systems I was going to work on (to understand them better), and I realized that there was a method in the service’s API that should be exposed but wasn’t. I pointed this out to my engineering mentor, Steve Jenson (@stevej). I expected him to either ignore me or promise to fix it later. Instead, he said, “Oh, you’re right. What are you waiting for?”
Wednesday, July 21, 2010 | By Jean-Paul Cozzatti (@jeanpaul) [23:49 UTC]
On Monday, a fault in the database that stores Twitter user records caused problems on both Twitter.com and our API. The short, non-technical explanation is that a mistake led to some problems that we were able to fix without losing any data.
Thursday, July 15, 2010 | By Larry Gadea (@lg) [18:35 UTC]
Twitter has thousands of servers. What makes having boatloads of servers particularly annoying, though, is that we need to quickly get multiple iterations of code and binaries onto all of them on a regular basis. We used to have a git-based deploy system where we’d just instruct our front-ends to download the latest code from our main git machine and serve that. Unfortunately, once we got past a few hundred servers, things got ugly.
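To make the old setup concrete, here is a minimal sketch, with entirely hypothetical host names and paths, of that kind of centralized pull-based deploy: every front-end is told over SSH to fetch the latest code from the single git origin. Even with parallel connections, all transfers funnel through one machine, which is the likely reason things got ugly past a few hundred servers.

```python
import subprocess
from concurrent.futures import ThreadPoolExecutor

# Hypothetical host list and repository location. In a setup like the one
# described above, every front-end pulls from the same central git machine,
# so its network link and disk become the bottleneck as the fleet grows.
FRONTENDS = [f"web{i:03d}.example.com" for i in range(1, 501)]
GIT_ORIGIN = "git://git.example.com/app.git"

def deploy(host):
    """Ask one front-end to fetch and check out the latest code over SSH."""
    cmd = ["ssh", host, f"cd /srv/app && git pull {GIT_ORIGIN} && touch restart.txt"]
    return host, subprocess.run(cmd, capture_output=True).returncode

def deploy_all():
    # Parallel SSH hides some latency, but every byte still comes from the
    # single origin, so total deploy time scales with fleet size.
    with ThreadPoolExecutor(max_workers=50) as pool:
        for host, status in pool.map(deploy, FRONTENDS):
            if status != 0:
                print(f"deploy failed on {host}")

if __name__ == "__main__":
    deploy_all()
```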
Friday, June 11, 2010 | By Jean-Paul Cozzatti (@jeanpaul) [21:06 UTC]
Since Saturday, Twitter has experienced several incidents of poor site performance and a high number of errors due to one of our internal sub-networks being over capacity. We’re working hard to address the core issues causing these problems—more on that below—but in the interest of the open exchange of information, we wanted to pull back the curtain and give you deeper insight into what happened and how we’re working to address this week’s poor site performance.