Again with the distributed computing: Piccolo

(I saw it on the High Scalability blog, didn't see it on LtU yet.)

Piccolo is cheaper, better, faster than Hadoop. A quick peek makes me think it is written in C++ and Python.

About
Piccolo is a framework designed to make it easy to develop efficient distributed applications.

In contrast to traditional data-centric models (such as Hadoop), which present the user with a single object at a time to operate on, Piccolo exposes a global table interface that is available to all parts of the computation simultaneously. This allows users to specify programs in an intuitive manner, very similar to writing programs for a single machine.
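
To make the contrast concrete, here is a toy single-machine sketch in plain Python; the dict stands in for Piccolo's distributed table, and nothing here is Piccolo's actual API:

    # Hadoop-style: user code sees one record at a time and cannot
    # read or write anything else during the map.
    def mapper(record):
        return (record, 1)

    # Piccolo-style: every part of the computation can read and
    # write any entry of a shared global table.
    table = {}                     # stand-in for a distributed table
    for record in ["foo", "bar", "foo"]:
        table[record] = table.get(record, 0) + 1

    print(table)                   # {'foo': 2, 'bar': 1}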

Piccolo includes a number of optimizations to ensure that using this table interface is not just easy, but also fast:

Locality
To ensure locality of execution, tables are explicitly partitioned across machines. User code that interacts with the tables can specify a locality preference: this ensures that the code is executed on the same machine as the data it accesses.
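
Roughly, and with invented names (in Piccolo the scheduling is done by the framework, not user code):

    NUM_PARTITIONS = 4

    # Each partition of the table lives on exactly one worker.
    owner = {p: f"worker-{p}" for p in range(NUM_PARTITIONS)}

    def partition(key):
        # Explicit partitioning: keys are hashed to a fixed set of
        # partitions.
        return hash(key) % NUM_PARTITIONS

    def run_with_locality(kernel, key):
        # Locality preference: run the kernel on the worker that
        # already owns the partition holding `key`.
        worker = owner[partition(key)]
        print(f"scheduling {kernel.__name__}({key!r}) on {worker}")

    def visit(key):
        pass                       # some user kernel

    run_with_locality(visit, "http://example.com")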

Load-balancing
Not all load is created equal: often some partitions of a computation will take much longer than others. Waiting idly for those tasks to finish wastes valuable time and resources. To address this, Piccolo can migrate tasks away from busy machines to take advantage of otherwise idle workers, all while preserving the locality preferences and the correctness of the program.
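
A toy illustration of the idea, assuming a migrated partition carries its data and pending tasks along with it (which is how locality would be preserved); the real mechanism lives inside Piccolo's runtime:

    from collections import deque

    # A partition bundles its data with its pending kernel tasks;
    # migrating the bundle keeps code and data co-located.
    partitions = {
        "p0": {"data": {"a": 1}, "tasks": deque(["t1", "t2"])},
        "p1": {"data": {"b": 2}, "tasks": deque(["t3"])},
    }
    owner = {"p0": "worker-0", "p1": "worker-0"}   # worker-0 is busy

    def migrate(pid, to):
        # Reassign the whole partition: data and pending tasks move
        # together to the new worker.
        owner[pid] = to

    migrate("p1", "worker-1")                      # worker-1 was idle
    print(owner)        # {'p0': 'worker-0', 'p1': 'worker-1'}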

Failure Handling
Machine failures are inevitable, and they generally occur at the most critical point in your computation. Piccolo makes checkpointing and restoration easy and fast, allowing for quick recovery in case of failures.
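
A minimal sketch of the checkpoint-and-restore pattern in plain Python; Piccolo's checkpointing is built into the runtime, so the file layout and function names here are invented:

    import os, pickle

    CKPT = "table.ckpt"

    def checkpoint(step, table):
        # Persist the current step and table contents atomically
        # enough for this toy example.
        with open(CKPT, "wb") as f:
            pickle.dump((step, table), f)

    def restore():
        # Resume from the last checkpoint, if one exists.
        if os.path.exists(CKPT):
            with open(CKPT, "rb") as f:
                return pickle.load(f)
        return 0, {}

    step, table = restore()            # after a crash, pick up here
    while step < 10:
        table[step] = step * step      # one unit of restartable work
        step += 1
        if step % 5 == 0:
            checkpoint(step, table)    # periodic checkpoint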

Synchronization
Managing correct synchronization and updates across a distributed system can be complicated and slow. Piccolo addresses this by allowing users to defer synchronization logic to the system. Instead of explicitly locking tables in order to perform updates, users can attach accumulation functions to a table: the framework uses these automatically to correctly combine concurrent updates to a table entry.
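
A small Python sketch of the accumulator idea; the Table class and its update method are stand-ins, not Piccolo's API:

    import operator

    class Table:
        """Stand-in for a table with an attached accumulator."""
        def __init__(self, accumulate):
            self.accumulate = accumulate   # e.g. add, min, max
            self.data = {}

        def update(self, key, value):
            if key in self.data:
                # Concurrent updates to the same entry are merged by
                # the accumulator instead of requiring an explicit lock.
                self.data[key] = self.accumulate(self.data[key], value)
            else:
                self.data[key] = value

    ranks = Table(accumulate=operator.add)
    ranks.update("page-a", 0.3)   # update arriving from one worker
    ranks.update("page-a", 0.2)   # concurrent update from another
    print(ranks.data["page-a"])   # 0.5 -- combined, no locking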


Any comments? It seems to me, from reading the text quoted above, that a lot of manual work is required...

It looks interesting, and definitely shows some strong connections to Bloom. They do delegate quite a bit to the programmer, however, to recover from certain failure modes and to resolve conflicts (Section 2.2).

In particular, the developer must manually specify the number of partitions and ensure that the partitions can fit entirely in memory on their host machines. It's not clear whether by memory they mean RAM, which seems a tad onerous, or disk. I suspect the former.

I'm also a bit skeptical of checkpointing, but I have to go through it a bit more carefully to determine whether I'm just misunderstanding something.