At Twitter, Apache Mesos runs on hundreds of production machines and makes it easier to execute jobs that do everything from running services to handling our analytics workload. For those not familiar with it, the Mesos project originally started as a UC Berkeley research effort. It is now being developed at the Apache Software Foundation (ASF), where it just reached its first release inside the Apache Incubator.
Mesos aims to make it easier to build distributed applications and frameworks that share clustered resources such as CPU, RAM, and disk space. There are Java, Python, and C++ APIs for developing new parallel applications. Specifically, you can use Mesos to:
- Run Hadoop, Spark and other frameworks concurrently on a shared pool of nodes
- Run multiple instances of Hadoop on the same cluster to isolate production and experimental jobs, or even multiple versions of Hadoop
- Scale to tens of thousands of nodes using a fast, event-driven C++ implementation
- Run long-lived services (e.g., Hypertable and HBase) on the same nodes as batch applications and share resources between them
- Build new cluster computing frameworks without reinventing low-level facilities for farming out tasks, and have them coexist with existing ones
- View cluster status and information using a web user interface
Mesos is being used at Conviva, UC Berkeley, and UC San Francisco, as well as here at Twitter. Some of our runtime systems engineers, specifically Benjamin Hindman (@benh), Bill Farner (@wfarner), Vinod Kone (@vinodkone), John Sirois (@johnsirois), Brian Wickman (@wickman), and Sathya Hariesh (@sathya), have worked hard to evolve Mesos and make it useful for our large-scale engineering challenges. If you’re interested in Mesos, we invite you to try it out, follow @ApacheMesos, join the mailing list, and help us develop a Mesos community within the ASF.
— Chris Aniszczyk, Manager of Open Source (@cra)