Every day, people come to Twitter to find out what’s happening in the world and talk about it. With hundreds of millions of Tweets sent every day, it is critical that our infrastructure and data platforms are able to scale.
As we have previously discussed, the Hadoop compute system is the core of our data platform, and Twitter runs multiple large Hadoop clusters that are among the biggest in the world. In fact, our Hadoop file systems host more than 300PB of data across tens of thousands of servers. Over the past few years, we have been assessing our platform and infrastructure needs to make sure we are well positioned to keep up with the growing needs of our service.
Today, we are excited to announce that we are working with Google Cloud to move cold data storage and our flexible compute Hadoop clusters to Google Cloud Platform. This will enable us to enhance the experience and productivity of our engineering teams working with our data platform.
There is strong alignment between Twitter’s engineering strategy and the services Google Cloud offers at global scale. Google Cloud Platform’s data solutions and trusted infrastructure will provide Twitter with the technical flexibility and consistency its platform requires, and we look forward to an ongoing technical collaboration with their team.
This migration, when complete, will enable faster capacity provisioning, increased flexibility, access to a broader ecosystem of tools and services, improved security, and enhanced disaster recovery capabilities. Architecturally, we will also be able to separate compute and storage for this class of Hadoop workloads, which has a number of long-term scaling and operational benefits.
We are excited about what our new collaboration with Google Cloud will mean not only for our engineering teams but for everyone on Twitter.
Did someone say … cookies?