
Metaobject protocols: Why we want them and what else they can do

Traditionally, languages have been designed to be viewed as black-box abstractions; end programmers have no control over the semantics or implementation of these abstractions. The CLOS MOP, on the other hand, "opens up" the CLOS abstraction and its implementation to the programmer. The programmer can, for example, adjust aspects of the implementation strategy such as instance representation, or aspects of the language semantics such as multiple inheritance behavior. The design of the CLOS MOP is such that this opening up does not expose the programmer to arbitrary details of the implementation, nor does it tie the implementor's hands unnecessarily -- only the essential structure of the implementation is exposed.
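A rough analogy in Python, whose metaclasses descend from the same lineage of ideas (this is not the CLOS MOP itself, and the names here are illustrative): a metaclass can adjust one aspect of the implementation strategy -- instance representation -- without changing what the class means to its clients.

```python
# Illustrative sketch, not the CLOS MOP: a Python metaclass that changes
# instance representation by deriving __slots__ from annotated fields,
# so instances get compact slot storage instead of a per-instance dict.

class SlottedMeta(type):
    def __new__(mcls, name, bases, namespace):
        # Read the class body's annotations and install __slots__,
        # altering the storage layout but not the class's interface.
        fields = namespace.get('__annotations__', {})
        namespace.setdefault('__slots__', tuple(fields))
        return super().__new__(mcls, name, bases, namespace)

class Point(metaclass=SlottedMeta):
    x: int
    y: int
    def __init__(self, x, y):
        self.x, self.y = x, y

p = Point(1, 2)
# p has no __dict__: the metaclass changed the representation,
# yet client code using p.x and p.y is unaffected.
```

The point of the analogy is the same as in the paper: only the essential structure (here, the class-creation step) is opened up, not arbitrary implementation details.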

In more recent work, we have pushed the metaobject protocol idea in new directions, and we now believe that idea is not limited to its specific incarnation in CLOS, or to object-oriented languages, or to languages with a large runtime, or even to Lisp-like languages. Instead, we believe that providing an open implementation can be advantageous in a wide range of high-level languages and that metaobject protocol technology is a powerful tool for providing that power to the programmer.

Validity Invariants and Effects

Validity Invariants and Effects, Yi Lu, John Potter and Jingling Xue. ECOOP 2007.

Object invariants describe the consistency of object states, and are crucial for reasoning about the correctness of object-oriented programs. However, reasoning about object invariants in the presence of object abstraction and encapsulation, arbitrary object aliasing and re-entrant method calls, is difficult. We present a general framework for reasoning about object invariants based on a behavioural abstraction that specifies two sets---the validity invariant, representing objects that must be valid before and after the behaviour, and the validity effect, describing objects that may be invalidated during the behaviour. The overlap of these two sets is critical because it captures precisely those objects that need to be re-validated at the end of the behaviour. When there is no overlap, no further validity checking is required. We also present a type system based on this framework. This system uses ownership types to confine dependencies for object invariants, and restricts permissible updates to track where object invariants hold even in the presence of re-entrant calls, but otherwise object referenceability and read access are not restricted.

I really liked this paper, but I think it might need a few preliminary explanations. There's a style of verification of OO programs based on "object invariants": each object has an invariant, and every method maintains it. Then verification is local, in the sense that you can verify each class's invariants independently. (This is used in the Boogie methodology used by Spec#, for instance.)
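A minimal sketch of the discipline, using a made-up bank-account example (not from the paper): the constructor establishes the invariant, and every public method re-establishes it before returning, so each class can be checked on its own.

```python
# Illustrative object-invariant discipline: balance is never negative.
# The invariant is established by the constructor and re-established
# at the end of every public method.

class Account:
    def __init__(self, balance=0):
        self.balance = balance
        assert self._invariant()

    def _invariant(self):
        return self.balance >= 0

    def deposit(self, amount):
        self.balance += amount
        assert self._invariant()  # method re-establishes the invariant

    def withdraw(self, amount):
        if amount <= self.balance:  # refuse withdrawals that would break it
            self.balance -= amount
        assert self._invariant()
```

Nothing outside `Account` needs to be consulted to check these assertions -- that's the local-reasoning payoff.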

However, there are a couple of wrinkles. First, aliasing: every object's invariant depends on some of the objects stored in its fields, and you don't want random aliases letting strangers modify your representation objects out from under you. So you introduce a notion of ownership to help track who has permission to modify each object. Second, reentrancy: suppose the middle of a method body has temporarily broken the object's invariant, and then you call another method on the same object. You don't know a priori that the call is safe.
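The reentrancy wrinkle can be seen in a few lines (an illustrative example, not from the paper): mid-method, the invariant is temporarily broken, and a re-entrant call on the same object observes the broken state.

```python
# Illustrative reentrancy hazard. Invariant: total == a + b.
# `credit_a` breaks it temporarily, then makes a re-entrant call
# before restoring it.

class Ledger:
    def __init__(self):
        self.a = 0
        self.b = 0
        self.total = 0  # invariant: total == a + b

    def invariant_holds(self):
        return self.total == self.a + self.b

    def audit(self):
        # A re-entrant call cannot assume the invariant holds here.
        return self.invariant_holds()

    def credit_a(self, amount):
        self.a += amount           # invariant temporarily broken
        seen_valid = self.audit()  # re-entrant call sees the broken state
        self.total += amount       # invariant re-established
        return seen_valid
```

This is exactly the situation the paper's validity effects are meant to track: `audit` runs at a point where the object may be invalid.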

The type system the authors have introduced here tracks ownership and possibly-dangerous reentrant calls. The really clever part is that instead of just rejecting programs that fail these checks, they log all of the places where things break. So instead of saying "yes" or "no", the type system says "yes" or "manually verify the following things". So it's a labor-saving device for a verification methodology.