Theory

'Information and Computation' Open Access

I found this in my mail.

August 12, 2005

The Publisher and Editorial Board of Information and Computation are pleased to announce that for one year, effective immediately, online access to all journal issues back to 1995 will be available without charge. This includes unrestricted downloading of articles in pdf format. Journal articles may be obtained through the journal's web site http://theory.csail.mit.edu/~iandc or Elsevier's Sciencedirect at http://www.sciencedirect.com/science/journal/08905401

At the end of the year, the retrieval traffic during the open access period will be evaluated as future subscription policies are considered.

Albert R. Meyer, Editor-in-Chief, MIT Computer Science & AI Lab
Chris Leonard, Publishing Editor, Elsevier
Moshe Y. Vardi, Associate Editor, Rice University

Those of a more theoretical bent will find lots of interesting articles there.

Trampolining Architectures

Trampolining Architectures
A trampolining architecture is a special case and extension of a monad which is useful in implementing multiprogramming. Five trampolining architectures, operating over the range of two trampolining translations, are presented. The effects of the architectures are cumulative. Some increase the breadth of multiprogramming facilities provided. Others demonstrate the potential for more efficient implementation. Finally, we demonstrate the applicability of trampolining to languages without closures.
We have discussed Trampolined Style more than once, but never this paper by the same authors (except Wand). Published a year later, it discusses more options and, more importantly, shows the relationship between trampolines and monads.

If you are interested in the design and implementation of PLs and have never heard of trampolines, you should fix that.
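To give a flavour of what trampolining means in practice, here is a minimal sketch of mine in Haskell (the names Tramp, Bounce and interleave are mine, not the paper's): a computation either finishes with a value or hands back the rest of its work as a thunk, and a simple driver loop can then interleave several such computations cooperatively, which is the multiprogramming angle the abstract mentions.

    -- A computation either finishes, or returns the rest of its work
    -- as a thunk ("bounces"), so a driver regains control at each step.
    data Tramp a = Done a | Bounce (() -> Tramp a)

    instance Functor Tramp where
      fmap f (Done a)   = Done (f a)
      fmap f (Bounce k) = Bounce (fmap f . k)

    instance Applicative Tramp where
      pure = Done
      mf <*> ma = mf >>= \f -> fmap f ma

    instance Monad Tramp where
      Done a   >>= f = f a
      Bounce k >>= f = Bounce (\() -> k () >>= f)

    -- Yield control back to the driver.
    pause :: Tramp ()
    pause = Bounce (\() -> Done ())

    -- Run a single computation to completion.
    run :: Tramp a -> a
    run (Done a)   = a
    run (Bounce k) = run (k ())

    -- Interleave two computations by alternating their bounces:
    -- cooperative multiprogramming without native threads.
    interleave :: Tramp a -> Tramp b -> (a, b)
    interleave (Done a)    tb          = (a, run tb)
    interleave ta          (Done b)    = (run ta, b)
    interleave (Bounce ka) (Bounce kb) = interleave (ka ()) (kb ())

    -- A computation that pauses at every step.
    sumTo :: Int -> Tramp Int
    sumTo 0 = Done 0
    sumTo n = pause >> fmap (+ n) (sumTo (n - 1))

    main :: IO ()
    main = print (interleave (sumTo 3) (sumTo 5))   -- (6,15)

Note that this sketch leans on closures (the thunk inside Bounce); part of the paper's point is that trampolining can also be carried over to languages without closures.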

Causal Nets

Causal Nets
The network approach to computation is more direct and "physical" than the one based on some specific computing devices (like Turing machines). However, the size of a usual -- e.g., Boolean -- network does not reflect the complexity of computing the corresponding function, since a small network may be very hard to find even if it exists. A history of the work of a particular computing device can be described as a network satisfying some restrictions. The size of this network reflects the complexity of the problem, but the restrictions are usually somewhat arbitrary and even awkward. Causal nets are restricted only by determinism (causality) and locality of interaction. Their geometrical characteristics do reflect computational complexities. And various imaginary computer devices are easy to express in their terms. The elementarity of this concept may help bringing geometrical and algebraic (and maybe even physical) methods into the theory of computations. This hope is supported by the group-theoretical criterion given in this paper for computability from symmetrical initial configurations.
The nets of this paper are different from belief nets or Bayesian nets, which are also known under the name "causal nets".

No doubt, modeling the history of computation rather than the current state is nothing new, but this paper tries to find new applications for that.

Accurate step counting

Accurate step counting. Catherine Hope and Graham Hutton.

Starting with an evaluator for a language, an abstract machine for the same language can be mechanically derived using successive program transformations. This has relevance to studying both the space and time properties of programs because these can be estimated by counting transitions of the abstract machine and measuring the size of the additional data structures needed, such as environments and stacks. In this article we use this process to derive a function that accurately counts the number of steps required to evaluate expressions in a simple language.

Somewhat related to a recent discussion (in which I mentioned EOPL1's section on deriving a compiler and machine from an interpreter).

The paper is, of course, worth checking out regardless.
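To make the idea concrete, here is a toy illustration of mine in Haskell (not the paper's development): an evaluator for a tiny expression language instrumented by hand to return a step count alongside the value. The paper's point is that such counts need not be bolted on by hand; they can be derived systematically by transforming the evaluator into an abstract machine and counting its transitions.

    -- A toy expression language.
    data Expr = Lit Int | Add Expr Expr
      deriving Show

    -- The plain evaluator we would start from.
    eval :: Expr -> Int
    eval (Lit n)   = n
    eval (Add l r) = eval l + eval r

    -- The same evaluator, returning (value, steps), counting one step
    -- per constructor visited.
    evalSteps :: Expr -> (Int, Int)
    evalSteps (Lit n)   = (n, 1)
    evalSteps (Add l r) =
      let (vl, sl) = evalSteps l
          (vr, sr) = evalSteps r
      in (vl + vr, sl + sr + 1)

    main :: IO ()
    main = print (evalSteps (Add (Lit 1) (Add (Lit 2) (Lit 3))))
    -- (6,5): three literals and two additions.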

On the Unusual Effectiveness of Logic in Computer Science

In 2001, Moshe Vardi organised a workshop devoted to the topic of The Unusual Effectiveness of Logic in Computer Science, with papers covering such topics as "Logic as the calculus of computer science" (Vardi) and "Descriptive complexity" (Immerman). Later, a gang consisting of Halpern, Harper, Immerman, Kolaitis, Vardi, and Vianu published a similarly titled 24-page article in the July 2002 issue of the Bulletin of Symbolic Logic.

The title is derived from Wigner's famous article on The Unreasonable Effectiveness of Mathematics in the Natural Sciences, which raised, and attempted to answer, the important question: why should mathematics have been so useful to natural scientists? With respect to logic, my answer for its effectiveness in computer science is that, while computation is a physical phenomenon, it is a phenomenon best understood via powerful abstractions, and the most powerful abstractions we have at the moment come from mathematical logic, because of the fundamental relationship between Turing completeness and Gödelian incompleteness.

Links derived from Richard Zach's Motivating Intro Logic for Philosophy majors (and others).

More links...

Recovering resources in the pi-calculus

Recovering resources in the pi-calculus

Although limits of resources such as memory or disk usage are one of the key problems of many communicating applications, most process algebras fail to take this aspect of mobile and concurrent systems into account. In order to study this problem, we introduce the Controlled pi-calculus, an extension of the pi-calculus with a notion of recovery of unused resources with an explicit (parametrized) garbage-collection and dead-process elimination. We discuss the definition of garbage-collection and dead-process elimination for concurrent, communicating applications, and provide a type-based technique for statically proving resource bounds. Selected examples are presented and show the potential of the Controlled pi-calculus.

How to remove a dynamic prompt: static and dynamic delimited continuation operators are equally expressible

The report (by Oleg Kiselyov) shows that the delimited continuation operators shift, control, shift0, etc. are macro-expressible in terms of each other. The report thus confirms the result first established by Chung-chieh Shan in Shift to Control. The operators shift, control, control0, and shift0 are members of a single parameterized family, and the standard CPS is sufficient to express their denotational semantics.

The report uses a more uniform method and it formally proves that 'control' implemented via 'shift' indeed has its standard reduction semantics. It is common knowledge that first-class continuations are quite tricky -- and delimited continuations are trickier still. Therefore, a formal proof is a necessity.

On the practical side, the report shows the simplest known Scheme implementations of control, shift0 and control0 (similar to `cupto'). The method in the report lets us design 700 more delimited control operators, which compose stack fragments in arbitrary ways.

I love this sort of thing, and since section 4 includes Scheme code, you can try to skip the theory if you find it intimidating.

I know this stuff can look a bit hairy. If there's interest, I hope Oleg would agree to help people new to this sort of material in understanding sections 2 and 3. But you have to ask nicely...
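If you want to poke at delimited control without reading Scheme, here is a tiny shift/reset example of mine in Haskell, using resetT and shiftT from the transformers package's Control.Monad.Trans.Cont (so it illustrates only the static shift/reset pair, not the control and control0 operators the report relates them to):

    import Control.Monad.Trans.Cont (evalContT, resetT, shiftT)
    import Control.Monad.Trans.Class (lift)

    -- Roughly: reset (1 + shift (\k -> k (k 10))), where k is the
    -- delimited context "1 + _", so the result is 1 + (1 + 10) = 12.
    example :: IO Int
    example = evalContT $ resetT $ do
      x <- shiftT $ \k -> lift (k 10 >>= k)   -- capture "1 + _", apply it twice
      return (1 + x)

    main :: IO ()
    main = example >>= print   -- prints 12

The captured continuation k stands only for the context between shiftT and the enclosing resetT, which is exactly what makes these operators "delimited".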

A Tutorial on Proof Theoretic Foundations of Logic Programming

A Tutorial on Proof Theoretic Foundations of Logic Programming. Paola Bruscoli and Alessio Guglielmi. ICLP'03.

I just glanced through this tutorial, but since I know quite a few LtU readers are into proof theory, I thought I'd share the link.

The POPLmark Challenge

(via TYPES)

How close are we to a world where every paper on programming languages is accompanied by an electronic appendix with machine-checked proofs? To gauge progress in this area, we issue here a set of challenge problems, dubbed the POPLmark Challenge, chosen to exercise many aspects of programming languages that are known to be difficult to formalize.

A valid solution to the challenge will consist of appropriate software tools, a language representation strategy, and a demonstration that this infrastructure is sufficient to formalize the challenge problems.

The POPLmark team explains,

We are not ourselves automated reasoning experts but rather potential users; our impression is that current tools are almost at the point where they can be used routinely. It's time to bring mechanized metatheory to the masses - go to it!

A New Approach to Abstract Syntax with Variable Binding

Pitts and Gabbay, A New Approach to Abstract Syntax with Variable Binding, FAC 2001.

In the lambda calculus, the particular choice of variable names - even free variables - is irrelevant. Names serve two purposes:

  • as mnemonics to help human readers understand their role, and
  • to distinguish variables that are meant to be different and unify variables that are meant to be the same.

In a theory of binders, only the latter purpose is relevant. This is why it's so annoying to have to deal with capture-avoiding substitution, the Barendregt variable convention, gensym, and the like. These are all ways of getting around the fact that we've committed to a particular name when we really shouldn't have cared what the name was.

There are several standard ways to deal with this. Generating fresh names with gensym is awkward and forces the implementer to reimplement binding and substitution from scratch for each new object language, not to mention its violation of referential transparency. de Bruijn indices remove the variable names but are hard to read and understand. Higher-order abstract syntax implements binders in the object language as functions in the metalanguage, in order to reuse the built-in mechanisms of binding and substitution; but by mixing higher-order values into algebraic datatypes, it makes structural induction hard to recover.
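For concreteness, here is the de Bruijn option in a completely standard form (my sketch in Haskell, not anything from the paper): variables are numbers counting enclosing binders, so names disappear entirely, at the cost of readability and of the index-shifting bookkeeping below.

    -- Untyped lambda terms with de Bruijn indices:
    -- Var 0 refers to the nearest enclosing binder.
    data Term
      = Var Int
      | Lam Term
      | App Term Term
      deriving (Eq, Show)

    -- Add d to every free variable (index >= cutoff c).
    shift :: Int -> Int -> Term -> Term
    shift d c (Var k)   = Var (if k >= c then k + d else k)
    shift d c (Lam t)   = Lam (shift d (c + 1) t)
    shift d c (App t u) = App (shift d c t) (shift d c u)

    -- Substitute s for variable j in t, without capture.
    subst :: Int -> Term -> Term -> Term
    subst j s (Var k)
      | k == j    = s
      | otherwise = Var k
    subst j s (Lam t)   = Lam (subst (j + 1) (shift 1 0 s) t)
    subst j s (App t u) = App (subst j s t) (subst j s u)

    -- Reduce an outermost beta redex: (\ . t) u  ~>  t[0 := u].
    betaStep :: Term -> Maybe Term
    betaStep (App (Lam t) u) = Just (shift (-1) 0 (subst 0 (shift 1 0 u) t))
    betaStep _               = Nothing

    main :: IO ()
    main = print (betaStep (App (Lam (Var 0)) (Lam (Var 0))))
    -- Just (Lam (Var 0)):  (\x. x) (\y. y)  ~>  \y. y

This shift/subst plumbing is exactly the kind of bookkeeping the nominal approach aims to avoid while, unlike HOAS, keeping structural induction.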

This paper introduces a theory of fresh names that restores algebraic reasoning, referential transparency, and structural induction to algebraic datatypes with a HOAS-like notation for introducing binders into an abstract syntax. This is the set-theoretical basis for the authors' work on FreshML and FreshO'Caml, which we've discussed a little bit on LtU in the past.
