
Nested Schedulers: A Tree Is Not Enough?

In a recent discussion, naasking mentioned the nested schedulers of Manticore.
Incidentally, I am currently interested in exactly this topic.
I am, however, confused by the fact that most papers on nested scheduling repeatedly stress the point that schedulers form a tree structure.
Indeed, a tree seemed a natural idea initially, but after playing with toy implementations I quickly began to suspect that it is too strong a constraint.

Specifically, in the face of synchronization between threads, it seems natural for a blocked thread to sponsor the progress of the thread blocking it (let's call this "helping"; it differs from "stealing" in cardinality - more than one thread may help a given thread, while only one can steal, and the act of stealing usually transfers ownership). Yes, this may interfere with cache locality and processor affinity, but even just to evaluate the negative effects of the feature we have to model it.
As soon as we introduce helping, a tree no longer looks adequate. In fact, I suspect even a DAG of schedulers may be too restrictive. A cycle in the graph of helping does not necessarily mean a deadlock: first, not all helping edges are induced by blocking, and second, the sponsored CPU ticks may flow out of the cycle and thus further overall progress.
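To make "helping" concrete, here is a minimal sketch (the class name `HelpingFuture` and its API are my invention, not from any of the papers): a waiter that finds its dependency unfinished runs the producing work itself instead of idling, donating its CPU to the blocking computation.

```java
import java.util.function.Supplier;

// Illustrative sketch of "helping": a future whose waiters donate CPU to the
// producing computation instead of blocking idly. The name and API are
// hypothetical, chosen only to make the cardinality point concrete.
final class HelpingFuture<T> {
    private final Supplier<T> work;
    private T result;
    private boolean done;

    HelpingFuture(Supplier<T> work) {
        this.work = work;
    }

    // Any number of threads may call join(): the first to arrive computes the
    // result itself (it "helps"), later arrivals see the cached value. A real
    // helper would split the work so several waiters can contribute at once --
    // which is exactly where a strict tree of schedulers starts to bend.
    synchronized T join() {
        if (!done) {
            result = work.get();
            done = true;
        }
        return result;
    }
}
```

Contrast this with stealing: ownership of the work is never transferred to a single thief; every waiter is a potential helper.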

To bring this closer to languages: what languages/calculi are known to support explicit CPU scheduling?
I know that (nested) engines can be implemented in any language with first-class continuations, but this approach is far from ideal (e.g., separately compiled or, in general, non-cooperating code cannot be preempted, as I understand it). The same limitation applies to any scheme that relies on "upwards" cooperation (a timely yield) - e.g., coroutines.
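As a rough illustration of why engine-style scheduling needs "upwards" cooperation, here is a fuel-based engine sketch (the `Engine` class and its API are assumptions of mine, not an existing library): each call to the underlying iterator is one cooperative step, so code that never returns from a step simply cannot be preempted.

```java
import java.util.Iterator;

// Hypothetical fuel-based engine: the computation is exposed as an Iterator
// whose next() is one cooperative step. run(fuel) advances at most `fuel`
// steps, then returns control to the scheduler. A step that never returns --
// non-cooperating code -- cannot be preempted, which is the limitation
// noted above.
final class Engine {
    private final Iterator<Integer> steps;
    private Integer result; // last value produced so far

    Engine(Iterator<Integer> steps) {
        this.steps = steps;
    }

    /** Runs up to `fuel` steps; returns true if the computation finished. */
    boolean run(int fuel) {
        while (fuel-- > 0 && steps.hasNext()) {
            result = steps.next();
        }
        return !steps.hasNext();
    }

    Integer result() {
        return result;
    }
}
```

Nesting such engines amounts to making one engine's `run` call a single step of an outer engine, which is where the tree-of-schedulers picture comes from.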

ML Modules in C#

In building languages using C#, I've often run into typing difficulties. These difficulties are well known, and have been discussed on LTU before, as have the possible solutions.

Since I couldn't wait for the next release, which may or may not address these difficulties, I recently came up with a simple source-to-source translation that allows me to express the required type structure constraints. Basically, the difficulties come down to the fundamental tradeoffs between objects and modules, a.k.a. abstract data types. My translation restructures the object into an abstract data type a la ML modules, so that the type structure of previously inexpressible functions, such as list flattening, is now easily expressed.
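To illustrate the flattening example, here is a sketch in Java, which shares the relevant limitation with C#'s generics (the class name `ListModule` is mine, and this shows only the typing point, not the full translation): neither language lets an instance method constrain the receiver's own type parameter, so `flatten` cannot be typed as a method of `List<T>` that exists only when `T = List<U>`. Moved into a module-style class of static functions, a la an ML structure, the type is directly expressible.

```java
import java.util.ArrayList;
import java.util.List;

// Module-style translation sketch: the operation lives in a "module" (a class
// of static functions) rather than on the object, so a signature like
//   flatten : List<List<U>> -> List<U>
// becomes trivially expressible.
final class ListModule {
    private ListModule() {} // no instances: the class plays the role of an ML structure

    static <U> List<U> flatten(List<List<U>> nested) {
        List<U> out = new ArrayList<>();
        for (List<U> inner : nested) {
            out.addAll(inner);
        }
        return out;
    }
}
```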

I first used this translation in my recent Orc implementation, and I'd be interested to know whether anyone has described a similar solution before, or whether any other solutions to the problem are known. I'm also curious about the interaction with inheritance, and about any other extensibility problems anyone might foresee. I plan to explore how this translation also affects C# solutions to The Expression Problem.

Process Algebras: What's the Point?

There seem to be quite a few parallel/concurrent-oriented languages (Erlang), dialects of languages (Split-C, Titanium (a Java variant), High Performance Fortran), and language-specific libraries that enable parallelization (MPI, OpenMP, JPPF). With respect to process algebras: what kind of analysis do process algebras allow us to do? Do they simply characterize the different implementations? Do they provide us a framework to improve the different implementations? Have process algebras had any impact on real-world languages and compilers?

So, process algebra, what have you done for me lately?