Parallel/Distributed

Jon Udell: The riddle of asynchrony
An amusing podcast about, among other things, the programming model appropriate for SOA, and especially whether it should be synchronous or asynchronous.
You'll feel right at home, since the concepts "abstraction" and "state" raise their heads quite early in the discussion...
By Ehud Lamm at 2005-09-04 20:03 | Parallel/Distributed | Software Engineering | 3 comments | other blogs | 5708 reads
Concurrent Clustered Programming

It's been a while since the last LtU link to Vijay Saraswat papers.

Concurrent Clustered Programming

We present the concurrency and distribution primitives of X10, a modern, statically typed, class-based object-oriented (OO) programming language, designed for high productivity programming of scalable applications on high-end machines. The basic move in the X10 programming model is to reify locality through a notion of place, which hosts multiple data items and activities that operate on them. Aggregate objects (such as arrays) may be distributed across multiple places. Activities may dynamically spawn new activities in multiple places and sequence them through a finish operation that detects termination of activities. Atomicity is obtained through the use of atomic blocks. Activities may repeatedly detect quiescence of a data-dependent collection of (distributed) activities through a notion of clocks, generalizing barriers. Thus X10 has a handful of orthogonal constructs for space, time, sequencing and atomicity. X10 smoothly combines and generalizes the current dominant paradigms for shared memory computing and message passing. We present a bisimulation-based operational semantics for X10, building on the formal semantics for "Middleweight Java". We establish the central theorem of X10: programs without conditional atomic blocks do not deadlock.

To appear in the Proceedings of CONCUR, August 2005. [on edit: added missing link, sorry]
By Andris Birkmanis at 2005-07-26 12:09 | Parallel/Distributed | 5 comments | other blogs | 7851 reads
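The abstract's core construct is the finish operation, which waits for dynamically spawned activities to terminate. As a rough illustration of that shape only (not X10 itself, and ignoring places, atomic blocks and clocks), here is a minimal finish/async sketch using ordinary Haskell threads; the finish helper below is illustrative, not X10's API:

    import Control.Concurrent (forkIO)
    import Control.Concurrent.MVar (newEmptyMVar, putMVar, takeMVar)

    -- Run each activity in its own thread and return only when all of them
    -- have finished: roughly the shape of X10's finish/async, except that
    -- this sketch waits only for directly spawned activities.
    finish :: [IO ()] -> IO ()
    finish activities = do
        dones <- mapM spawn activities
        mapM_ takeMVar dones
      where
        spawn act = do
            done <- newEmptyMVar
            _ <- forkIO (act >> putMVar done ())
            return done

    main :: IO ()
    main = finish [putStrLn "activity 1", putStrLn "activity 2"]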
Termite: a Lisp for Distributed Computing
(via Patrick)
In short: take Scheme, remove mutations, add isolated processes with mailboxes, add message sending and receiving operations and an addressing mechanism.
Termite is a Lisp for distributed computing (PDF paper and PDF presentation), providing an Erlang-like model on top of Scheme. As the presentation says, the powerful abstraction facilities provided by Scheme made implementing Termite rather easy, and the implementation doesn't require much code. [Edit: here's a working link for the PDF paper]
By Ehud Lamm at 2005-07-16 12:24 | Functional | Parallel/Distributed | 20 comments | other blogs | 55122 reads
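To make the "isolated processes with mailboxes" recipe concrete, here is a rough sketch of the same shape using Haskell threads with channels standing in for mailboxes. Termite's actual primitives are Scheme forms described in the paper, so nothing below is Termite's API; it only illustrates the Erlang-style spawn/send/receive pattern:

    import Control.Concurrent (forkIO)
    import Control.Concurrent.Chan (Chan, newChan, readChan, writeChan)

    -- Messages carry the sender's mailbox so the receiver can reply,
    -- standing in for process identifiers.
    data Msg = Ping (Chan Msg) | Pong

    -- Spawn an isolated "process": a thread that communicates only
    -- through its own mailbox.
    spawnEcho :: IO (Chan Msg)
    spawnEcho = do
        mailbox <- newChan
        _ <- forkIO $ do
            Ping replyTo <- readChan mailbox   -- receive
            writeChan replyTo Pong             -- send a reply
        return mailbox

    main :: IO ()
    main = do
        echo <- spawnEcho
        self <- newChan
        writeChan echo (Ping self)
        Pong <- readChan self
        putStrLn "received Pong"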
A Theory of Distributed Objects
A Theory of Distributed Objects - Asynchrony - Mobility - Groups - Components. Denis Caromel and Ludovic Henrio.
Distributed and communicating objects are becoming ubiquitous. In Grid and Peer-to-Peer environments, extensive use is made of objects. This book provides a general theory for distributed objects interacting asynchronously, for the sake of efficiency and scalability. Further, it copes with advanced issues such as mobility, groups, and components.
Pi-Calculus, Join-Calculus and more... Check out the sample chapter to get a feel for the writing style.
By Ehud Lamm at 2005-07-03 08:59 | Misc Books | OOP | Parallel/Distributed | other blogs | 7495 reads
Revisiting coroutines
Given the real (or imagined) interest in control operators on LtU, I believe this paper makes for nice reading. See also Coroutines in Lua.
By Andris Birkmanis at 2005-06-27 16:56 | Parallel/Distributed | Semantics | 2 comments | other blogs | 7777 reads
Crystal Scheme: A Language for Massively Parallel Machines
Massively parallel computers are built out of thousands of conventional but powerful processors with independent memories. Very simple topologies, mainly based on physical neighbourhood, link these processors. The paper discusses extensions to the Scheme language in order to master such machines. Allowing arguments of functions to be concurrently evaluated introduces parallelism. Migration across the different processors is achieved through a remote evaluation mechanism. Since first-class continuations pose some semantic problems with respect to concurrency, we propose a neat semantics for them and then show how to build, in the language itself, advanced concurrent constructs such as futures. Finally, we comment on some simulations, with various topologies and migration policies, which allow us to appreciate our linguistic choices and confirm the viability of the model.

Note the year of publication: 1991. The bibliography goes back to 1980. My question might be naive, but what are the major inventions in the area of [...]? And, returning to PLT, can Crystal Scheme be useful for Cell?
By Andris Birkmanis at 2005-06-22 07:44 | Parallel/Distributed | 3 comments | other blogs | 8048 reads
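The futures mentioned at the end of the abstract let a computation run concurrently while the caller keeps going, blocking only when the value is finally touched. A minimal sketch of that behaviour, using threads and an MVar rather than the paper's continuation-based construction (the future function below is illustrative, not Crystal Scheme's):

    import Control.Concurrent (forkIO)
    import Control.Concurrent.MVar (newEmptyMVar, putMVar, readMVar)

    -- Start the computation in another thread and hand back an action
    -- that blocks until the result is available (the "touch").
    future :: IO a -> IO (IO a)
    future act = do
        cell <- newEmptyMVar
        _ <- forkIO (act >>= putMVar cell)
        return (readMVar cell)

    main :: IO ()
    main = do
        touch <- future (return (sum [1 .. 1000000 :: Int]))
        putStrLn "doing other work while the future runs"
        result <- touch
        print result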
Tim Bray: On Threads
A report on Sun's "Chip Multi-Threading" summit.
Not much that is new here, but the trend towards concurrency is one we discussed a few times, and which seemed to attract interest. Bray's comments regarding Erlang and other languages are sure to annoy LtU readers. Let's not start a flamefest. Let me just remark that even if you want shared data structures, Java isn't your only option. You should at least check out Ada... (and you can compile Ada to the JVM, if that's what you want to do; let's not confuse language and implementation).

Bidirectional fold and scan
Bidirectional fold and scan, O'Donnell, J.T. In Functional Programming, Glasgow 1993, Springer Workshops in Computing.
Bidirectional fold generalises foldl and foldr to allow simultaneous communication in both directions across a list. Bidirectional scan calculates the list of partial results of a bidirectional fold, just as scanl and scanr calculate the partial results of a foldl or foldr. Mapping scans combine a map with a scan, and often simplify programs using scans. This family of functions is significant because it expresses important patterns of computation that arise repeatedly in circuit design and data parallel programming.
The bidirectional fold is a bit surprising at first, but the real reason I am posting this is to encourage discussion on the connection between functional and data parallel programming.
By Ehud Lamm at 2005-06-05 09:36 | Functional | Parallel/Distributed | 15 comments | other blogs | 10827 reads
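To see what "communication in both directions" means, here is a small mapping-scan sketch in which every element observes the accumulator arriving from its left and the one arriving from its right; the names and signature are illustrative rather than O'Donnell's actual combinators:

    -- Each element sees the left accumulator of the prefix before it and
    -- the right accumulator of the suffix after it, and emits one result.
    mscanlr :: (l -> a -> l)        -- combine, flowing left to right
            -> (a -> r -> r)        -- combine, flowing right to left
            -> (l -> a -> r -> b)   -- per-element output
            -> l -> r -> [a] -> [b]
    mscanlr fl fr out l0 r0 xs = zipWith3 out ls xs rs
      where
        ls = init (scanl fl l0 xs)  -- accumulators arriving from the left
        rs = tail (scanr fr r0 xs)  -- accumulators arriving from the right

    -- Example: for each element, the sums of everything to its left and right.
    -- mscanlr (+) (+) (\l x r -> (l, r)) 0 0 [1, 2, 3]  ==  [(0,5), (1,3), (3,0)]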
Lisp or Erlang

Some here will find this discussion on language choice for building an industrial-strength poker server to be of interest.

Parallel Programming with Matrix Distributed Processing

Matrix Distributed Processing (MDP) is a C++ library for fast development of efficient parallel algorithms. MDP enables programmers to focus on algorithms, while parallelization is dealt with automatically and transparently. Here we present a brief overview of MDP and examples of applications in Computer Science (Cellular Automata), Engineering (PDE Solver) and Physics (Ising Model).

A short tutorial on MDP. MDP provides a distributed programming model. On top of that, it is interesting to see that the library adds what look like language constructs (e.g., forallsites).