Hundreds of Impossibility Results for Distributed Computing

This paper is not as sad as its title implies: it surveys which assumptions and resources are necessary to achieve specific results in distributed computing. Or, if you insist on being pessimistic: which results are impossible without certain assumptions and resources.

We survey results from distributed computing that show tasks to be impossible, either outright or within given resource bounds, in various models. The parameters of the models considered include synchrony, fault-tolerance, different communication media, and randomization. The resource bounds refer to time, space and message complexity.
These results are useful in understanding the inherent difficulty of individual problems and in studying the power of different models of distributed computing. There is a strong emphasis in our presentation on explaining the wide variety of techniques that are used to obtain the results described.

Looks to be helpful for strategic planning of the design and implementation of a distributed PL.

Rich resource site for the programming language "K"

I don't remember how I stumbled upon this, but I've been interested in K for quite some time and could not find much in the way of information and examples (outside of the Kx Systems commercial site).

Effects on stability/exception handling of massively parallel programs

I was wondering whether any research has been done on the effects of massive parallelism on reliability and/or exception handling. Intuitively, it seems that a massively parallel program is automatically more fault-tolerant, simply because errors are necessarily localized: a failure in one unit of work would show up as resource starvation (that unit's result goes missing) rather than as a fault that crashes the whole system.

Anyway, I wasn't sure whether anyone had written any papers on this, whether my intuitions were correct, or whether they were way off base.
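
For what it's worth, here is a minimal Go sketch of the isolation intuition (my own illustration, not from any paper): each worker runs independently, and a simulated fault in one worker is caught locally, so the rest of the computation completes and the program degrades (one result is lost) instead of crashing outright.

```go
package main

import (
	"fmt"
	"sync"
)

// worker runs one unit of work. A panic in a single worker is recovered
// here, so the failure stays local instead of taking the whole program down.
func worker(id int, wg *sync.WaitGroup, results chan<- string) {
	defer wg.Done()
	defer func() {
		if r := recover(); r != nil {
			// The faulty worker just reports its failure; the others continue.
			results <- fmt.Sprintf("worker %d failed: %v", id, r)
		}
	}()
	if id == 3 {
		panic("simulated fault") // one localized error
	}
	results <- fmt.Sprintf("worker %d ok", id)
}

func main() {
	const n = 5
	results := make(chan string, n)
	var wg sync.WaitGroup
	for i := 0; i < n; i++ {
		wg.Add(1)
		go worker(i, &wg, results)
	}
	wg.Wait()
	close(results)
	for r := range results {
		fmt.Println(r)
	}
}
```

Whether this kind of per-worker isolation scales into genuine fault tolerance for massively parallel programs is exactly the question I'm asking about.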