Lambda Calculus

Bottom-Up beta-Substitution: Uplinks and lambda-DAGs

While lambda-expressions are classically viewed as trees, people keep trying to represent them with more general graphs (according to the authors, the idea goes back to Bourbaki in 1954).
Bottom-Up beta-Substitution: Uplinks and lambda-DAGs

If we represent a lambda-calculus term as a DAG rather than a tree, we can efficiently represent the sharing that arises from beta-reduction, thus avoiding combinatorial explosion in space. By adding uplinks from a child to its parents, we can efficiently implement beta-reduction in a bottom-up manner, thus avoiding combinatorial explosion in time required to search the term in a top-down fashion. We present an algorithm for performing beta-reduction on lambda-terms represented as uplinked DAGs; describe its proof of correctness; discuss its relation to alternate techniques such as Lamping graphs, explicit-substitution calculi and director strings; and present some timings of an implementation. Besides being both fast and parsimonious of space, the algorithm is particularly suited to applications such as compilers, theorem provers, and type-manipulation systems that may need to examine terms in-between reductions—i.e., the “readback” problem for our representation is trivial. Like Lamping graphs, and unlike director strings or the suspension lambda calculus, the algorithm functions by side-effecting the term containing the redex; the representation is not a “persistent” one. The algorithm additionally has the charm of being quite simple; a complete implementation of the data structure and algorithm is 180 lines of SML.

So it's efficient in both time and space, interactive, and simple to implement! What else is left to desire?
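For the curious, here is a rough OCaml transcription (not the authors' SML; the type and field names are my own invention) of what an uplinked lambda-DAG node might look like: each node carries a list of uplinks, and each uplink records which slot of the parent points back down, so the bottom-up pass can walk from a variable towards the root.

  (* A hypothetical sketch of uplinked lambda-DAG nodes, loosely after the
     paper's description; not the authors' actual data structure. *)
  type term =
    | Var of var
    | Lam of lam
    | App of app

  and var = { mutable v_uplinks : uplink list; v_name : string }
  and lam = { mutable l_uplinks : uplink list; l_var : var; mutable l_body : term }
  and app = { mutable a_uplinks : uplink list; mutable a_func : term; mutable a_arg : term }

  (* An uplink: the parent node plus the slot of that parent that points
     back to this child. *)
  and uplink =
    | LamBody of lam        (* this node is the body of that lambda  *)
    | AppFunc of app        (* this node is the operator of that app *)
    | AppArg  of app        (* this node is the operand of that app  *)

  (* Recording a new parent edge on a child. *)
  let add_uplink (child : term) (up : uplink) : unit =
    match child with
    | Var v -> v.v_uplinks <- up :: v.v_uplinks
    | Lam l -> l.l_uplinks <- up :: l.l_uplinks
    | App a -> a.a_uplinks <- up :: a.a_uplinks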

On Evaluation Contexts, Continuations, and the Rest of Computation

Although already mentioned before, I believe this paper (which reconciles two approaches to defining continuations) deserves a separate story.

On Evaluation Contexts, Continuations, and the Rest of Computation

Continuations are variously understood as representations of the current evaluation context and as representations of the rest of the computation, but these understandings contradict each other: plugging an expression in a context yields a new expression whereas sending an intermediate result to a continuation yields the final answer. We show that continuations-as-evaluation-contexts are the defunctionalized representation of the continuation of a single-step reduction function and that continuations-as-the-rest-of-the-computation are the continuation of an evaluation function. Furthermore, we show that defunctionalizing the continuation of an evaluator gives rise to the same evaluation contexts as in the single-step reducer. The only difference is how these evaluation contexts are interpreted: a ‘plug’ interpretation yields one-step reduction, whereas a ‘refocus’ interpretation yields evaluation.
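To make the two readings concrete, here is a toy OCaml sketch of my own (the paper works with the lambda calculus; I use addition so it fits in a few lines): the same defunctionalized contexts are interpreted once as terms-with-a-hole and once as the rest of the computation.

  (* Toy illustration: one datatype of defunctionalized evaluation
     contexts, two interpretations. *)
  type exp = Lit of int | Add of exp * exp

  type ctx =
    | Top
    | AddL of ctx * exp     (* evaluating the left operand; right pending *)
    | AddR of int * ctx     (* left operand already a value               *)

  (* 'Plug' interpretation: a context is an expression with a hole, so
     plugging rebuilds an expression (one-step reduction style). *)
  let rec plug (c : ctx) (e : exp) : exp =
    match c with
    | Top -> e
    | AddL (c, r) -> plug c (Add (e, r))
    | AddR (n, c) -> plug c (Add (Lit n, e))

  (* 'Rest of the computation' interpretation: the same contexts drive an
     evaluator all the way to the final answer. *)
  let rec eval (e : exp) (c : ctx) : int =
    match e with
    | Lit n -> apply c n
    | Add (l, r) -> eval l (AddL (c, r))

  and apply (c : ctx) (n : int) : int =
    match c with
    | Top -> n
    | AddL (c, r) -> eval r (AddR (n, c))
    | AddR (m, c) -> apply c (m + n)

  let six = eval (Add (Lit 1, Add (Lit 2, Lit 3))) Top   (* = 6 *)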

From shift and reset to polarized linear logic

By now, shift/reset should be as popular as call/cc was ten years ago. Some think these control operators are even more important in practice than call/cc, and should be directly supported by PLs. I believe this paper by Chung-chieh Shan will be interesting to many who love logic and the Curry-Howard isomorphism.

From shift and reset to polarized linear logic

Abstract:

Griffin pointed out that, just as the pure lambda-calculus corresponds to intuitionistic logic, a lambda-calculus with first-class continuations corresponds to classical logic. We study how first-class delimited continuations, in the form of Danvy and Filinski’s shift and reset operators, can also be logically interpreted. First, we refine Danvy and Filinski’s type system for shift and reset to distinguish between pure and impure functions. This refinement not only paves the way for answer type polymorphism, which makes more terms typable, but also helps us invert the continuation-passing-style (CPS) transform: any pure lambda-term with an appropriate type is beta-eta-equivalent to the CPS transform of some shift-reset expression. We conclude that the lambda-calculus with shift and reset and the pure lambda-calculus have the same logical interpretation, namely good old intuitionistic logic. Second, we mix delimited continuations with undelimited ones. Informed by the preceding conclusion, we translate the lambda-calculus with shift and reset into a polarized variant of linear logic that integrates classical and intuitionistic reasoning. Extending previous work on the lambda-µ-calculus, this unifying intermediate language expresses computations with and without control effects, on delimited and undelimited continuations, in call-by-value and call-by-name settings.
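For readers who haven't met shift and reset, here is a small hypothetical OCaml sketch (not from the paper) of their usual CPS encoding with a fixed answer type; the paper's refined type system goes further and tracks answer-type modification, which this simple encoding cannot express.

  (* A computation producing an 'a under answer type 'ans is a function
     from its delimited continuation to the final answer. *)
  type ('a, 'ans) comp = ('a -> 'ans) -> 'ans

  let return (x : 'a) : ('a, 'ans) comp = fun k -> k x

  (* Sequencing of computations in CPS. *)
  let ( let* ) (c : ('a, 'ans) comp) (f : 'a -> ('b, 'ans) comp) : ('b, 'ans) comp =
    fun k -> c (fun x -> f x k)

  (* reset delimits the context: run the computation against the identity
     continuation. *)
  let reset (c : ('ans, 'ans) comp) : 'ans = c (fun x -> x)

  (* shift captures the continuation up to the nearest reset and hands it
     to [f] as an ordinary, composable function. *)
  let shift (f : ('a -> 'ans) -> 'ans) : ('a, 'ans) comp = fun k -> f k

  (* reset (1 + shift (fun k -> k (k 10)))  =  1 + (1 + 10)  =  12 *)
  let twelve : int =
    reset (let* n = shift (fun k -> k (k 10)) in
           return (1 + n))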

Lambdascope

Lambdascope: Another optimal implementation of the lambda-calculus

We have presented an implementation of the lambda-calculus in the spirit of the calculational approach [...], and which is fully in the traditions of calculi with explicit substitution and of graph implementations of term rewriting. As far as we know it is the first such calculus which is optimal in the sense of Levy. Moreover, as far as we know this is the first optimal calculus featuring only a single scope delimiter node instead of the usual two, croissants and brackets, which by force eliminates the problems which are caused by having more than one scope node [...]. The calculus is simple, half a page suffices [...] to describe it, and completely reduction-based (no semantic read-back in the implementation). As a consequence it can be trivially implemented in any (modern) programming language.

Remembering our discussion on atoms of PLs (such as scope and name), I decided this paper might be of interest.

Foundational Calculi for Programming Languages (intro)

Since pi calculus is a hot topic lately, Pierce's Foundational Calculi for Programming Languages might be of interest as an introduction. It very briefly introduces and justifies foundational calculi in general, spends about 10 pages on lambda calculus, then builds on that with another 7 pages on pi calculus.

A New Approach to Abstract Syntax with Variable Binding

Pitts and Gabbay, A New Approach to Abstract Syntax with Variable Binding, FAC 2001.

In the lambda calculus, the particular choice of variable names - even free variables - is irrelevant. Names serve two purposes:

  • as mnemonics to help human readers understand their role, and
  • to distinguish variables that are meant to be different and unify variables that are meant to be the same.

In a theory of binders, only the latter purpose is relevant. This is why it's so annoying to have to deal with capture-avoiding substitution, the Barendregt variable convention, gensym, and the like. These are all ways of getting around the fact that we've committed to a particular name when we really shouldn't have cared what the name was.

There are several standard ways to deal with this. Generating fresh names with gensym is awkward and forces the implementer to reimplement binding and substitution from scratch for each new object language, not to mention its violation of referential transparency. De Bruijn indices remove the variable names but are hard to read and understand. Higher-order abstract syntax implements binders in the object language as functions in the metalanguage, in order to reuse the built-in mechanisms of binding and substitution; but mixing higher-order values into algebraic datatypes makes structural induction hard to recover.
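As a small illustration of the trade-off (my own example, not the paper's), compare a named representation with a de Bruijn-indexed one in OCaml: named terms read well but alpha-equivalent terms are distinct values, while de Bruijn terms make alpha-equivalence plain structural equality at the cost of readability.

  (* Named representation: variables carry strings. *)
  type named =
    | Var of string
    | Lam of string * named
    | App of named * named

  (* \x. x  and  \y. y  are different values here, though alpha-equivalent... *)
  let id_x = Lam ("x", Var "x")
  let id_y = Lam ("y", Var "y")

  (* de Bruijn representation: a variable is the number of binders
     between its occurrence and the binder it refers to. *)
  type db =
    | BVar of int
    | BLam of db
    | BApp of db * db

  (* ...but both render as the same de Bruijn term. *)
  let id_db = BLam (BVar 0)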

This paper introduces a theory of fresh names that restores algebraic reasoning, referential transparency, and structural induction to algebraic datatypes with a HOAS-like notation for introducing binders into an abstract syntax. This is the set-theoretical basis for the authors' work on FreshML and FreshO'Caml, which we've discussed a little bit on LtU in the past.

Barbara Partee: Reflections of a Formal Semanticist as of Feb 2005

What follows will be a very subjective and personal view, as much my own history and development in the field and how things looked through my eyes as about the development of the field itself.

This essay is a longer version of the introductory essay in (Partee 2004). The introductory essay was first written in this long form in February 2003, but had to be cut down to about half the size to fit in the book...

This essay is about natural language semantics, but you'll find old friends here: lambdas, bindings, types, quantifiers etc. If you are lazy, go directly to footnote 25...

No surprise, really, if you follow the links we give here from time to time about TLGs and such.

The Church Project

http://types.bu.edu or http://www.church-project.org

The Church Project investigates the foundations, design principles and implementation techniques of programming languages and related systems. The overall goal is the development of software technology that performs better and is more reliable. The project is named in honor of Alonzo Church, the inventor of the lambda calculus.

Four major research efforts (supported by various funding bodies including EC, EPSRC, and NSF) are presently undertaken by project participants:

  • Compiling with Flow Types
  • Compositional Analysis
  • Programming with Dependent Types (DML) (Xanadu)
  • Linear Naming and Computation

We previously linked to a document on this site, but not the site itself.

The pi-Calculus in Direct Style

The pi-Calculus in Direct Style

We introduce a calculus which is a direct extension of both the lambda and the pi calculi. We give a simple type system for it, that encompasses both Curry's type inference for the lambda-calculus, and Milner's sorting for the pi-calculus as particular cases of typing. We observe that the various continuation passing style transformations for lambda-terms, written in our calculus, actually correspond to encodings already given by Milner and others for evaluation strategies of lambda-terms into the pi-calculus. Furthermore, the associated sortings correspond to well-known double negation translations on types. Finally we provide an adequate cps transform from our calculus to the pi-calculus.

This shows that the latter may be regarded as an "assembly language", while our calculus seems to provide a better programming notation for higher-order concurrency.

a.k.a. Blue Calculus

Normal-order direct-style beta-evaluator with syntax-rules, and the repeated applications of call/cc

Oleg's presentation at the workshop in honor of Daniel Friedman is great fun as usual. The topic of repeated applications of call/cc has been mentioned on LtU previously, a few years ago. New this time: the full and correct beta-normalizer written as a direct-style syntax-rule. The normalizer implements a calculus of explicit substitutions. The talk presents probably the shortest (and the fastest) normal-order beta-normalizer as a (stand-alone) Scheme macro. Another new feature is the discussion of self-applications of delimited continuation operators. The talk mentions incidentally that shift, control, shift0, and other less-delimited control operators are members of the same family: gshift/greset.

Hot stuff.
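Oleg's normalizer is a Scheme syntax-rules macro built on explicit substitutions, which I won't try to reproduce here; as a much more pedestrian point of comparison, here is a conventional substitution-based normal-order normalizer over de Bruijn terms, sketched in OCaml.

  (* Plain normal-order (leftmost-outermost) normalization with de Bruijn
     indices; a textbook sketch, not Oleg's macro. *)
  type term = Var of int | Lam of term | App of term * term

  (* shift d c t: add d to every variable with index >= cutoff c. *)
  let rec shift d c = function
    | Var k -> if k >= c then Var (k + d) else Var k
    | Lam t -> Lam (shift d (c + 1) t)
    | App (f, a) -> App (shift d c f, shift d c a)

  (* subst j s t: replace variable j by s in t. *)
  let rec subst j s = function
    | Var k -> if k = j then s else Var k
    | Lam t -> Lam (subst (j + 1) (shift 1 0 s) t)
    | App (f, a) -> App (subst j s f, subst j s a)

  (* Contract a redex: (\. t) a. *)
  let beta t a = shift (-1) 0 (subst 0 (shift 1 0 a) t)

  (* Reduce to weak head normal form, then normalize fully,
     always contracting the leftmost-outermost redex first. *)
  let rec whnf = function
    | App (f, a) ->
      (match whnf f with
       | Lam t -> whnf (beta t a)
       | f' -> App (f', a))
    | t -> t

  let rec normalize = function
    | Var k -> Var k
    | Lam t -> Lam (normalize t)
    | App (f, a) ->
      (match whnf f with
       | Lam t -> normalize (beta t a)
       | f' -> App (normalize f', normalize a))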
