Parallel/Distributed
RT++: Higher Order Threads for C++
Its features include a type-safe functional thread interface, lazy thread creation, garbage-collected types (lists, arrays, pointer structures), and controlled non-determinism (thread bags). Threads are first-class objects that can be used like any other objects and are automatically reclaimed once they are no longer referenced. The package has been ported to numerous mono-processor and shared-memory multiprocessor systems and can easily be embedded into existing application frameworks.
Comparisons with other thread libraries can be found on the intro page.
Posted to Parallel/Distributed by Mark Evans on 6/3/04; 7:43:05 PM
Discuss (1 response)
POOMA
POOMA is a high-performance C++ toolkit for parallel scientific computation. POOMA's object-oriented design facilitates rapid application development. POOMA has been optimized to take full advantage of massively parallel machines. POOMA is available free of charge in order to facilitate its use in both industrial and research environments, and has been used extensively by the Los Alamos National Laboratory.
Posted to Parallel/Distributed by Mark Evans on 4/25/04; 8:34:56 PM
Discuss (3 responses)
Cilk
Cilk is a language for multithreaded parallel programming based on ANSI C. Cilk is designed for general-purpose parallel programming, but it is especially effective at exploiting dynamic, highly asynchronous parallelism, which can be difficult to express in a data-parallel or message-passing style. Using Cilk, our group has developed three world-class chess programs: StarTech, *Socrates, and Cilkchess. Cilk provides an effective platform for programming dense and sparse numerical algorithms, such as matrix factorization and N-body simulations, and we are working on other types of applications. Unlike many other multithreaded programming systems, Cilk is algorithmic, in that the runtime system employs a scheduler that allows the performance of programs to be predicted accurately from abstract complexity measures.
Posted to Parallel/Distributed by Mark Evans on 3/30/04; 2:34:46 PM
Discuss (1 response)
Implementing Distributed Systems Using Linear Naming
This (beautifully written) thesis describes a novel way to represent programs for automatic distributed execution. There is a research group carrying on the work. I believe at least one member of the group is an LtU reader.
How is the work going? Are there any implementations available? Where does it fit in with other research on automated parallel execution, and what is the current state of that whole area anyway?
Posted to Parallel/Distributed by Luke Gorrie on 3/26/04; 8:43:45 AM
Discuss (2 responses)
distcc: a fast, free distributed C/C++ compiler
distcc is a program to distribute builds of C, C++, Objective C or Objective C++ code across several machines on a network. distcc should always generate the same results as a local build, is simple to install and use, and is often two or more times faster than a local compile.
distcc does not require all machines to share a filesystem, have synchronized clocks, or have the same libraries or header files installed. They can even have different processors or operating systems, if cross-compilers are installed.
Let's get our new Parallel/Distributed department off on a solid practical footing.
distcc is the software-tools paradigm applied to distributed processing. You have a tool for compiling a source file (gcc). You also have a tool for coordinating compilation of multiple files, with dependency-aware parallelism (make). Throw in a distributed front-end to the compiler (distcc) and suddenly you have a perfectly integrated distributed compile farm. Truly, the only question is: why didn't somebody do this years ago?
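A minimal recipe for that composition might look like the following; the host names alpha and beta are placeholders for machines on your own network running the distcc daemon:

```shell
# Hosts that will receive compile jobs ('alpha' and 'beta' are placeholders).
export DISTCC_HOSTS='localhost alpha beta'
# Let make run jobs in parallel and route each compile through distcc.
make -j6 CC='distcc gcc' CXX='distcc g++'
```

DISTCC_HOSTS and invoking the compiler as `distcc gcc` are distcc's documented usage; the distcc manual suggests a -j level of roughly twice the total number of CPUs across the participating machines.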
Posted to Parallel/Distributed by Luke Gorrie on 3/26/04; 8:24:46 AM
Discuss (1 response)