Parallel/Distributed

A note on distributed computing

A note on distributed computing by Samuel C. Kendall, Jim Waldo, Ann Wollrath and Geoff Wyant (1994).

We argue that objects that interact in a distributed system need to be dealt with in ways that are intrinsically different from objects that interact in a single address space. These differences are required because distributed systems require that the programmer be aware of latency, have a different model of memory access, and take into account issues of concurrency and partial failure.

We look at a number of distributed systems that have attempted to paper over the distinction between local and remote objects, and show that such systems fail to support basic requirements of robustness and reliability. These failures have been masked in the past by the small size of the distributed systems that have been built. In the enterprise-wide distributed systems foreseen in the near future, however, such a masking will be impossible.

We conclude by discussing what is required of both systems-level and application-level programmers and designers if one is to take distribution seriously.

This is a classic.
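
If you want the latency and partial-failure point in code rather than prose, here is a quick Haskell sketch of my own (nothing from the paper): a local lookup either answers or the whole address space is gone, while a remote call has to budget for latency and for the possibility that it times out with the outcome unknown. The fetchRemote argument stands in for some hypothetical network operation.

    import Control.Exception (SomeException, try)
    import System.Timeout (timeout)

    -- A local call: it either answers or the whole address space has failed.
    lookupLocal :: [(String, Int)] -> String -> Maybe Int
    lookupLocal table key = lookup key table

    -- A remote call: the caller must budget for latency and for partial
    -- failure (the other machine may have done half the work and crashed);
    -- a timeout tells us nothing about what actually happened over there.
    lookupRemote :: (String -> IO Int) -> String -> IO (Either String Int)
    lookupRemote fetchRemote key = do
      r <- timeout (2 * 1000 * 1000) (try (fetchRemote key))  -- 2 second budget
      return $ case r of
        Nothing        -> Left "timed out: outcome unknown"
        Just (Left e)  -> Left ("failed: " ++ show (e :: SomeException))
        Just (Right v) -> Right v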

ACM Queuecast: systems on a chip

This one is a podcast and the sound quality isn't great, but some of you might still be interested in listening.

Advances in chip architectures may seem quite esoteric to those working on the semantics of high-level languages, but they do have an impact on how languages are implemented, and on which implementations (and languages) survive and prosper.

The implications of SoC architectures for compilers are discussed briefly at around the 10-minute mark, and the implications for application-level security near the 16-minute mark. I think the discussion of security might interest the E folks, and people from the proof-carrying-code world.

E Thesis: Robust Composition

Mark S. Miller's PhD thesis on Robust Composition: Towards a Unified Approach to Access Control and Concurrency Control is now online.

When separately written programs are composed so that they may cooperate, they may instead destructively interfere in unanticipated ways. These hazards limit the scale and functionality of the software systems we can successfully compose. This dissertation presents a framework for enabling those interactions between components needed for the cooperation we intend, while minimizing the hazards of destructive interference.

Great progress on the composition problem has been made within the object paradigm, chiefly in the context of sequential, single-machine programming among benign components. We show how to extend this success to support robust composition of concurrent and potentially malicious components distributed over potentially malicious machines. We present E, a distributed, persistent, secure programming language, and CapDesk, a virus-safe desktop built in E, as embodiments of the techniques we explain.

E rates as a (very) important language for anyone interested in ideas of messaging, distribution and security. The nice thing about a thesis (such as this one or Joe Armstrong's) is that it gives a historical account of the related work and influences.
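
If you haven't met the object-capability style E embodies, here is a tiny, purely illustrative Haskell sketch of the idea, with records of functions standing in for capabilities; none of this is from the thesis, and all names are made up.

    import Data.IORef (modifyIORef', newIORef, readIORef)

    -- A capability is an unforgeable reference to behaviour; a component can
    -- only affect what it has been handed.
    data Account = Account
      { deposit  :: Int -> IO ()
      , withdraw :: Int -> IO ()
      , balance  :: IO Int
      }

    newAccount :: IO Account
    newAccount = do
      ref <- newIORef 0
      return Account
        { deposit  = \n -> modifyIORef' ref (+ n)
        , withdraw = \n -> modifyIORef' ref (subtract n)
        , balance  = readIORef ref
        }

    -- Attenuation: hand a less-trusted component only the ability to deposit.
    -- It cannot withdraw, because it was never given that reference.
    runUntrusted :: (Int -> IO ()) -> IO ()
    runUntrusted depositOnly = depositOnly 10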

Transactional Memory with data invariants (draft sequel to the STM-Haskell paper)

Transactional memory with data invariants

From the abstract:

This paper introduces a mechanism for asserting invariants that are maintained by a program that uses atomic memory transactions.

The idea is simple: the programmer writes always E where E is an expression that should be preserved by every atomic update for the remainder of the program's execution. We have extended STM Haskell to dynamically check always statements atomically with the user's updates: the result is that we can identify precisely which update is the first one to break an invariant.

This seems connected to Typed Contracts for Functional Programming by Ralf Hinze, Johan Jeuring, and Andres Löh (noticed on the blog of Dominic Fox).

Maybe design-by-contract is this year's hot subject?

I haven't gotten far enough into either of these papers to have much of an opinion, but the motivational paragraph at the beginning of the Typed Contracts paper grabbed my attention instantly, and I know I want more STM in my applications, so I look forward to a few enjoyable hours.
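
To get a feel for the mechanism, here is a rough sketch of what the always combinator would look like in use, assuming always :: STM Bool -> STM () as the abstract describes (treat the exact name and its availability in your GHC as an assumption):

    import Control.Concurrent.STM

    -- Register an invariant once; it is then re-checked atomically with every
    -- later transaction, so the first update that breaks it is identified.
    newAccount :: Int -> STM (TVar Int)
    newAccount initial = do
      bal <- newTVar initial
      always ((>= 0) <$> readTVar bal)   -- the balance must never go negative
      return bal

    -- If this transaction drives the balance below zero, this transaction
    -- (not some later reader) is the one reported as breaking the invariant.
    withdraw :: TVar Int -> Int -> STM ()
    withdraw bal amount = do
      b <- readTVar bal
      writeTVar bal (b - amount)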

ACM Queue: A Conversation with Steve Ross-Talbot

Steve Ross-Talbot has more than 20 years of experience leveraging cutting-edge research and applying it to real business problems. He recently founded Pi4 Technologies, where he and his team draw on the pi-calculus to improve the ability to design, automate, and analyze business processes.

The interview goes into more detail than you might expect:

Surely, we should be able to prove something about interaction. I started looking at other papers that leveraged the work of Robin Milner and the pi-calculus and found some fundamental work by Vasco Vasconcelos and Kohei Honda. Their work looked at something called session type...

Hundreds of Impossibility Results for Distributed Computing

Hundreds of Impossibility Results for Distributed Computing
This paper is not as sad as its title implies: it surveys what assumptions and resources are necessary to achieve specific results in distributed computing. Or, if you insist on being pessimistic: what results are impossible without certain assumptions and resources.

We survey results from distributed computing that show tasks to be impossible, either outright or within given resource bounds, in various models. The parameters of the models considered include synchrony, fault-tolerance, different communication media, and randomization. The resource bounds refer to time, space and message complexity.
These results are useful in understanding the inherent difficulty of individual problems and in studying the power of different models of distributed computing. There is a strong emphasis in our presentation on explaining the wide variety of techniques that are used to obtain the results described.

Looks to be helpful for strategic planning of distributed PL design and implementation.

PiDuce

PiDuce is a concurrent, distributed language intended for experimenting with emerging Web Services technologies. PiDuce can be used as a target language for compilers and processors of business languages such as BizTalk and BPEL...

PiDuce builds on solid theoretical foundations: it integrates the communication primitives of the Pi calculus, the synchronization patterns of the Join calculus, and an expressive type system that extends XML datatypes with first-class channels and that retains a notion of subtyping. PiDuce is a type-safe language: well-typed processes cannot fail.

PiDuce is implemented in C#. Source code and binaries are available for download.
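
PiDuce's own syntax and API are its own, but the "first-class channels" idea it inherits from the pi-calculus is easy to show in any language with channels. A throwaway Haskell sketch, nothing to do with PiDuce's actual implementation:

    import Control.Concurrent (forkIO)
    import Control.Concurrent.Chan (Chan, newChan, readChan, writeChan)

    -- A channel is a value like any other, so it can itself be sent over a
    -- channel; the server learns where to reply from the request it receives.
    main :: IO ()
    main = do
      requests <- newChan :: IO (Chan (String, Chan String))
      _ <- forkIO $ do
        (name, replyTo) <- readChan requests
        writeChan replyTo ("hello, " ++ name)
      replyChan <- newChan
      writeChan requests ("PiDuce", replyChan)
      putStrLn =<< readChan replyChan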

Haskell vs. Erlang, Reloaded

The goal of my project was to be able to thoroughly test a poker server using poker bots. Each poker bot was to exercise different parts of the server by talking the poker protocol consisting of 150+ binary messages. The poker server itself is written in C++ and runs on Windows....

This app is all about binary IO, thousands of threads/processes and easy serialization. All I ever wanted to do was send packets back and forth, analyze them and have thousands of poker bots running on my machine doing the same. Lofty but simple goal :-). Little did I know!

Erlang and Haskell compared... Want to know the conclusion?

I was able to finish the Erlang version 10 times faster and with 1/2 the code. Even if I cut the 10-11 weeks spent on the Haskell version in half to account for the learning curve, I would still come out way ahead with Erlang.

I am sure you'll find a lot to disagree with in this article...
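
For a sense of what "binary IO and thousands of threads" looks like on the Haskell side, here is a toy sketch (not the author's code; the message names and the 5000 figure are invented) using Data.Binary for the wire format and forkIO for lightweight threads:

    import Control.Concurrent (forkIO, newEmptyMVar, putMVar, takeMVar)
    import Data.Binary (Binary (..), decode, encode)
    import Data.Binary.Get (getWord8, getWord32be)
    import Data.Binary.Put (putWord8, putWord32be)
    import Data.Word (Word32)

    -- A hand-rolled binary message format, as a poker protocol would need.
    data Msg = JoinTable Word32 | Bet Word32 Word32
      deriving (Eq, Show)

    instance Binary Msg where
      put (JoinTable t) = putWord8 1 >> putWord32be t
      put (Bet t amt)   = putWord8 2 >> putWord32be t >> putWord32be amt
      get = do
        tag <- getWord8
        case tag of
          1 -> JoinTable <$> getWord32be
          _ -> Bet <$> getWord32be <*> getWord32be

    -- One "bot": round-trip a message through the wire format.
    bot :: Int -> IO ()
    bot n = do
      let msg = Bet (fromIntegral n) 100
      if decode (encode msg) == msg
        then return ()
        else putStrLn ("round-trip failed for bot " ++ show n)

    main :: IO ()
    main = do
      dones <- mapM (\n -> do m <- newEmptyMVar
                              _ <- forkIO (bot n >> putMVar m ())
                              return m)
                    [1 .. 5000 :: Int]
      mapM_ takeMVar dones   -- wait for all the lightweight threads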

Constraint Programming

Constraint Programming

I will not quote this introduction/manifesto/historical overview, as every page of it is worth reading.

It is not only a nice introduction to a promising field, but also a demonstration of how language design issues can be (to some extent) separated from high-level fundamental intuitions.

It is also quite interesting to follow the historical lines of the paper; it reads like an epic!

Ah, and by the way, that's the same constraint programming that underlies Oz.
