Weekend Web Weirdness

On Friday, we successfully deployed a new memcache project as part of our overall work to create a more scalable service. After the deploy, we needed to move a lot of data with minimum impact on service quality. To do this, we put together some code that moves data only as it is requested.
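The post doesn't show the actual code, but the "move data only as it is requested" approach is a classic lazy, read-through migration: on a miss in the new cache pool, fall back to the old pool and copy the value forward. A minimal sketch, using plain dicts as stand-ins for the two memcache pools (all names here are assumptions, not Twitter's code):

```python
# Stand-ins for the old and new memcache pools.
OLD_POOL = {"timeline:1": ["tweet-a", "tweet-b"]}
NEW_POOL = {}

def get_with_lazy_migration(key):
    """Serve from the new pool; on a miss, pull from the old pool
    and migrate the entry forward on demand."""
    value = NEW_POOL.get(key)
    if value is not None:
        return value
    value = OLD_POOL.get(key)      # fall back to the old pool
    if value is not None:
        NEW_POOL[key] = value      # copy forward, so the next read hits the new pool
    return value

print(get_with_lazy_migration("timeline:1"))  # → ['tweet-a', 'tweet-b']
print("timeline:1" in NEW_POOL)               # → True: the entry has moved
```

The appeal is that only hot data gets copied, so the migration's load tracks real traffic instead of hammering both pools with a bulk copy.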

This process kept service disruption to a minimum, but it did force Twitter to carry on a complex conversation with two sets of caches over the weekend and into today. The result was some caching issues: most visibly, the /home timeline cache wasn't being updated correctly for everyone.

We’re aware of this, we realize that it’s annoying, and we’re meeting today about how to best finish up this project and clean up any remaining bugs. Thanks to everyone who checked in with us on Satisfaction, @replies, and email over the weekend. Overall, completing this memcache project is a big win that will lead to increased stability.

Update: We're working on this project more today and will be checking in again this evening. Also, we'll keep folks updated as much as possible on our forum over at Get Satisfaction (in case you're only visiting this blog for news).

Update: We've deployed code that gets Twitter talking to one, and only one, pool of memcache servers. This eliminates a lot of potential confusion and is progress. But we're still noticing related symptoms, so we're continuing to investigate and make improvements.
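In terms of the sketch above, the cutover this update describes amounts to flipping routing so every read and write hits a single pool. A hedged illustration, with a hypothetical flag and dict stand-ins for the pools (none of these names come from the post):

```python
# Stand-ins for the two memcache pools.
OLD_POOL, NEW_POOL = {}, {}

# Hypothetical cutover flag: once True, only the new pool is consulted.
USE_SINGLE_POOL = True

def cache_set(key, value):
    NEW_POOL[key] = value
    if not USE_SINGLE_POOL:
        OLD_POOL[key] = value          # dual-write mode, only during migration

def cache_get(key):
    if USE_SINGLE_POOL:
        return NEW_POOL.get(key)       # one pool, one source of truth
    return NEW_POOL.get(key) or OLD_POOL.get(key)

cache_set("home:42", ["tweet-x"])
print(cache_get("home:42"))   # → ['tweet-x']
print(OLD_POOL)               # → {}: the old pool is no longer written
```

Collapsing to a single pool removes the class of bugs where the two pools disagree about which copy of an entry is current.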

Update: Okay, here's a more satisfying update. We're catching up on things and backfilling the timelines. Once things are caught up, we should be good, having successfully moved to the new memcaching scheme.
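"Backfilling the timelines" presumably means rebuilding each user's cached /home timeline from the source of truth so the new pool is complete rather than waiting on request-driven migration. A minimal sketch under that assumption, with a dict standing in for the persistent store (all names here are illustrative):

```python
# Stand-in for the persistent store that holds the real timelines.
DATABASE = {
    1: ["tweet-a", "tweet-b"],
    2: ["tweet-c"],
}
NEW_POOL = {}  # stand-in for the new memcache pool

def backfill_timelines():
    """Walk the source of truth and fill any timeline entries the
    new pool is missing, without clobbering entries already cached."""
    for user_id, tweets in DATABASE.items():
        key = "home:%d" % user_id
        if key not in NEW_POOL:
            NEW_POOL[key] = tweets

backfill_timelines()
print(sorted(NEW_POOL))  # → ['home:1', 'home:2']
```

A backfill like this complements the lazy migration: request-driven copying handles hot keys quickly, and the sweep catches the long tail of rarely read timelines.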