Implementation

A Language-Based Approach to Unifying Events and Threads

A Language-Based Approach to Unifying Events and Threads

This paper presents a language-based technique to unify two seemingly opposite programming models for building massively concurrent network services: the event-driven model and the multithreaded model. The result is a unified concurrency model providing both thread abstractions and event abstractions.

The implementation uses the CPS monad in such a way that the final result is a trace, that is, an ordered sequence of function calls. Threading is part of the basic monad implementation, and a scheduler is as simple as a tree traversal function over a queue of traces. Once you have a scheduler, events are obvious.
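
For readers who want to see the shape of the trick, here is a minimal sketch (my own, not the paper's code, which builds a much richer monad with I/O and blocking primitives): a CPS monad whose answer type is a trace of system calls, and a round-robin scheduler that is nothing more than a traversal over a queue of traces. The Trace constructors and the worker threads below are purely illustrative.

    -- The observable behaviour of a thread: an ordered sequence of calls.
    data Trace
      = SysPrint String Trace   -- perform output, then continue
      | SysYield Trace          -- give up the CPU voluntarily
      | SysExit                 -- thread is finished

    -- The CPS monad: a computation is told what trace to produce afterwards.
    newtype Thread a = Thread { runThread :: (a -> Trace) -> Trace }

    instance Functor Thread where
      fmap f m = Thread $ \k -> runThread m (k . f)

    instance Applicative Thread where
      pure x    = Thread ($ x)
      mf <*> mx = Thread $ \k -> runThread mf (\f -> runThread mx (k . f))

    instance Monad Thread where
      return  = pure
      m >>= f = Thread $ \k -> runThread m (\x -> runThread (f x) k)

    -- Thread-level "system calls" just emit the corresponding trace node.
    sysPrint :: String -> Thread ()
    sysPrint s = Thread $ \k -> SysPrint s (k ())

    sysYield :: Thread ()
    sysYield = Thread $ \k -> SysYield (k ())

    -- Turn a whole thread into its trace.
    trace :: Thread a -> Trace
    trace m = runThread m (\_ -> SysExit)

    -- The scheduler: a plain traversal over a queue of traces.
    schedule :: [Trace] -> IO ()
    schedule []       = return ()
    schedule (t : ts) = case t of
      SysPrint s t' -> putStrLn s >> schedule (ts ++ [t'])
      SysYield t'   -> schedule (ts ++ [t'])
      SysExit       -> schedule ts

    main :: IO ()
    main = schedule [trace (worker "A"), trace (worker "B")]
      where
        worker name = do
          sysPrint (name ++ ": step 1")
          sysYield
          sysPrint (name ++ ": step 2")

Running main interleaves the two workers one step at a time, which is exactly the kind of scheduling behaviour described above; events fall out by adding trace constructors for the event sources you care about.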

This is quite elegant, I'll start using it for my own applications as soon as I get hold of the source.

Build your own scripting language for Java

JavaWorld has An introduction to JSR 223 for those who want to roll their own and target the JVM:

The upcoming Java Standard Edition 6.0 release will include an implementation of Java Specification Request 223, Scripting for the Java Platform. This JSR is about programming languages and their integration with Java. This article demonstrates the power and potential of JSR 223 through the implementation of a simple Boolean language.

Past LtU references to JSR 223: the original announcement in "Sun, Zend push scripting for Java", and a brief mention in the thread on Embedded Languages in Java.

Deconstructing Process Isolation

Deconstructing Process Isolation.
Mark Aiken, Manuel Fahndrich, Chris Hawblitzel, Galen Hunt, and James R. Larus. April 2006.

Most operating systems enforce process isolation through hardware protection mechanisms such as memory segmentation, page mapping, and differentiated user and kernel instructions. Singularity is a new operating system that uses software mechanisms to enforce process isolation. A software isolated process (SIP) is a process whose boundaries are established by language safety rules and enforced by static type checking. With proper system support, SIPs can provide a low-cost isolation mechanism that provides failure isolation and fast inter-process communication. To compare the performance of Singularity's approach against more conventional systems, we implemented an optional hardware isolation mechanism. Protection domains are hardware-enforced address spaces, which can contain one or more SIPs. Domains can either run at the kernel's privilege levels and share an exchange heap or be fully isolated from the kernel and run at the normal application privilege level. With these domains, we can construct Singularity configurations that are similar to micro-kernel and monolithic kernel systems.

The paper concludes that hardware-based isolation incurs performance costs of up to 25-33%, while the lower cost of SIPs permits them to provide protection and failure isolation at a finer granularity than conventional processes.

Maybe it's time to revisit the language-as-OS theme...

Transactional Memory with data invariants (draft sequel to the STM-Haskell paper)

Transactional memory with data invariants
From the abstract:
This paper introduces a mechanism for asserting invariants that are maintained by a program that uses atomic memory transactions.
The idea is simple: the programmer writes always E, where E is an expression that should be preserved by every atomic update for the remainder of the program's execution. We have extended STM Haskell to dynamically check always statements atomically with the user's updates: the result is that we can identify precisely which update is the first one to break an invariant.
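
As a rough illustration of how the combinator is meant to be used (my sketch, not the paper's code): registering an invariant on a bank-account TVar. The always stand-in below only checks the invariant once, at registration time, so the snippet compiles against a plain stm install; under the paper's semantics (and the GHC.Conc version that later shipped with GHC, since removed) the invariant would be re-checked atomically with every subsequent transaction. Names like newAccount and withdraw are just for the example.

    import Control.Concurrent.STM
      (STM, TVar, atomically, newTVar, readTVar, writeTVar)

    -- Stand-in for the paper's primitive: here it checks the invariant only
    -- once, when it is registered; the real mechanism re-checks it at the
    -- end of every transaction and aborts the offending update.
    always :: STM Bool -> STM ()
    always inv = do
      ok <- inv
      if ok then return () else error "invariant violated"

    -- A bank account whose balance must never go negative.
    newAccount :: Int -> STM (TVar Int)
    newAccount initial = do
      balance <- newTVar initial
      always ((>= 0) <$> readTVar balance)   -- register the invariant
      return balance

    withdraw :: TVar Int -> Int -> STM ()
    withdraw balance amount = do
      b <- readTVar balance
      writeTVar balance (b - amount)         -- an overdraft would abort at commit

    main :: IO ()
    main = do
      acct <- atomically (newAccount 10)
      atomically (withdraw acct 3)           -- fine
      -- atomically (withdraw acct 20)       -- would break the invariant
      atomically (readTVar acct) >>= print
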
This seems connected to Typed Contracts for Functional Programming by Ralf Hinze, Johan Jeuring, and Andres Löh (noticed on the blog of Dominic Fox).
Maybe design-by-contract is this year's hot subject?

I haven't gotten far enough into either of these papers to have much of an opinion, but the motivational paragraph at the beginning of the Typed Contracts paper grabbed my attention instantly, and I know I want more STM in my applications, so I look forward to a few enjoyable hours.

A Tail-Recursive Machine with Stack Inspection

A Tail-Recursive Machine with Stack Inspection. John Clements and Matthias Felleisen, TOPLAS 2004.

Security folklore holds that a security mechanism based on stack inspection is incompatible with a global tail call optimization policy... In this article, we prove this widely held belief wrong. ... Our machine is surprisingly simple and suggests that tail calls are as easy to implement in a security setting as they are in a conventional one.

I don't believe we've discussed this paper before, although it was mentioned in this thread. Tail calls have been a topic of discussion here several times. [1][2][3]

Tail call elimination decorator in Python

Features of a programming language, whether syntactic or semantic, are all part of the language's user interface. And a user interface can handle only so much complexity or it becomes unusable. This is also the reason why Python will never have continuations, and even why I'm uninterested in optimizing tail recursion.

Thus spoke Guido - as LtU readers already know.

Now, not even four weeks later, it has become clear that turning tail recursion into iteration can be achieved by an innocent little decorator in pure Python. No Rube Goldberg machine(s) in sight.

Leak Free Javascript Closures

I haven't really read this, but it's been in the queue for such a long time that I might as well pass it along...

A constraint-based approach to guarded algebraic data types

A constraint-based approach to guarded algebraic data types

We study HMG(X), an extension of the constraint-based type system HM(X) with deep pattern matching, polymorphic recursion, and guarded algebraic data types. Guarded algebraic data types subsume the concepts known in the literature as indexed types, guarded recursive datatype constructors, (first-class) phantom types, and equality qualified types, and are closely related to inductive types. Their characteristic property is to allow every branch of a case construct to be typechecked under different assumptions about the type variables in scope. We prove that HMG(X) is sound and that, provided recursive definitions carry a type annotation, type inference can be reduced to constraint solving. Constraint solving is decidable, at least for some instances of X, but prohibitively expensive. Effective type inference for guarded algebraic data types is left as an issue for future research.

Constraint-based type inference for guarded algebraic data types

Constraint-based type inference for guarded algebraic data types

Guarded algebraic data types subsume the concepts known in the literature as indexed types, guarded recursive datatype constructors, and first-class phantom types, and are closely related to inductive types. They have the distinguishing feature that, when typechecking a function defined by cases, every branch may be checked under different assumptions about the type variables in scope. This mechanism allows exploiting the presence of dynamic tests in the code to produce extra static type information.

We propose an extension of the constraint-based type system HM(X) with deep pattern matching, guarded algebraic data types, and polymorphic recursion. We prove that the type system is sound and that, provided recursive function definitions carry a type annotation, type inference may be reduced to constraint solving. Then, because solving arbitrary constraints is expensive, we further restrict the form of type annotations and prove that this allows producing so-called tractable constraints. Last, in the specific setting of equality, we explain how to solve tractable constraints.

To the best of our knowledge, this is the first generic and comprehensive account of type inference in the presence of guarded algebraic data types.
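
For the Haskell-inclined, the distinguishing property both abstracts mention, that each branch of a case is checked under its own assumptions about the type variables in scope, is easy to see in a small GADT-style evaluator (illustrative only, not taken from the papers, which work in an ML-style constraint calculus). Note that eval carries exactly the kind of type annotation the papers require to keep inference tractable.

    {-# LANGUAGE GADTs #-}

    -- A guarded algebraic data type: each constructor constrains the type
    -- parameter in its own way.
    data Expr a where
      IntLit  :: Int  -> Expr Int
      BoolLit :: Bool -> Expr Bool
      Add     :: Expr Int -> Expr Int -> Expr Int
      If      :: Expr Bool -> Expr a -> Expr a -> Expr a

    eval :: Expr a -> a
    eval e = case e of
      IntLit n  -> n          -- this branch is checked knowing a ~ Int
      BoolLit b -> b          -- this branch is checked knowing a ~ Bool
      Add x y   -> eval x + eval y
      If c t f  -> if eval c then eval t else eval f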

A New Haskell and those anxious to change

Haskell' (pronounced "Haskell prime") is being formulated while we sleep. While the committee wants to incorporate into the new standard only "tried-and-true language features", a quick glance at the mailing list shows quite a few unimplemented ideas being tossed around.

The same thing happens with the C++ standardization process. Is it a good idea to keep language standardization conservative? Herb Sutter would perhaps argue so, since the export feature in C++98 was so rarely implemented.

So, is conservatism right for C++0x? Is it right for Haskell'?
