Lambda Calculus

Gradual Typing for Functional Languages

Static and dynamic type systems have well-known strengths and weaknesses, and each is better suited for different programming tasks. There have been many efforts to integrate static and dynamic typing and thereby combine the benefits of both typing disciplines in the same language. The flexibility of static typing can be improved by adding a type Dynamic and a typecase form. The safety and performance of dynamic typing can be improved by adding optional type annotations or by performing type inference (as in soft typing). However, there has been little formal work on type systems that allow a programmer-controlled migration between dynamic and static typing. Thatte proposed Quasi-Static Typing, but it does not statically catch all type errors in completely annotated programs. Anderson and Drossopoulou defined a nominal type system for an object-oriented language with optional type annotations. However, developing a sound, gradual type system for functional languages with structural types is an open problem.

In this paper we present a solution based on the intuition that the structure of a type may be partially known/unknown at compile-time and the job of the type system is to catch incompatibilities between the known parts of types. We define the static and dynamic semantics of a λ-calculus with optional type annotations and we prove that its type system is sound with respect to the simply-typed λ-calculus for fully-annotated terms. We prove that this calculus is type safe and that the cost of dynamism is “pay-as-you-go”.
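
To make the "partially known types" intuition concrete, here is a minimal Haskell sketch (my own illustration, not the paper's formal system): a type language with an unknown type Dyn, plus the consistency relation that takes the place of type equality in a gradual type system. Two types are consistent when their known parts agree.

    -- Sketch only: gradual types with an unknown component.
    data Ty = Dyn | TInt | TFun Ty Ty  deriving (Eq, Show)

    -- Consistency: like equality, except Dyn is consistent with anything.
    -- Note it is not transitive (TInt ~ Dyn ~ TFun ..., but TInt /~ TFun ...),
    -- so it is deliberately weaker than an equivalence relation.
    consistent :: Ty -> Ty -> Bool
    consistent Dyn _ = True
    consistent _ Dyn = True
    consistent TInt TInt = True
    consistent (TFun a b) (TFun c d) = consistent a c && consistent b d
    consistent _ _ = False

In a fully-annotated term no Dyn appears, consistency collapses to equality, and the system coincides with the simply-typed λ-calculus, which is exactly the soundness claim above.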

In other news, the Holy Grail has been found. Film at 11.

This piece of genius is the combined work of Jeremy Siek, of Boost fame, and Walid Taha, of MetaOCaml fame. The formalization of their work in Isabelle/Isar can be found here.

I found this while tracking down the relocated Concoqtion paper. In that process, I also found Jeremy Siek's other new papers, including his "Semantic Analysis of C++ Templates" and "Concepts: Linguistic Support for Generic Programming in C++." Just visit Siek's home page and read all of his new papers, each of which is worth a story here.

Ivor, a proof engine

I found an interesting new paper by Edwin Brady.

Abstract. Dependent type theory has several practical applications in the fields of theorem proving, program verification and programming language design. Ivor is a Haskell library designed to allow easy extending and embedding of a type theory based theorem prover in a Haskell application. In this paper, I give an overview of the library and show how it can be used to implement formal systems such as propositional logic. Furthermore, I sketch an implementation of a simple functional programming language using the library; by using type theory as a core representation, we can construct and evaluate terms and prove correctness properties of those terms within the same framework, ensuring consistency of the implementation and the theorem prover.
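
The "implement propositional logic" part is easy to miniaturize. Here is a toy Haskell sketch of the propositions-as-types reading (my own illustration; Ivor's real API is different and much richer): a single checker serves both as a typechecker for terms and as a proof checker for implicational logic.

    -- Toy sketch, not Ivor's API: propositions as types, proofs as terms.
    data Prop = Atom String | Prop :=> Prop  deriving (Eq, Show)
    infixr 5 :=>

    data Proof = Hyp Int          -- de Bruijn reference to a hypothesis
               | Intro Proof      -- implication introduction (lambda)
               | Elim Proof Proof -- implication elimination (application)
               deriving Show

    -- Infer the proposition established by a proof, where possible.
    infer :: [Prop] -> Proof -> Maybe Prop
    infer ctx (Hyp i) | i < length ctx = Just (ctx !! i)
    infer ctx (Elim f x) =
      case infer ctx f of
        Just (a :=> b) | check ctx x a -> Just b
        _                              -> Nothing
    infer _ _ = Nothing

    -- Check a proof against a stated proposition.
    check :: [Prop] -> Proof -> Prop -> Bool
    check ctx (Intro p) (a :=> b) = check (a : ctx) p b
    check ctx p t = infer ctx p == Just t

    -- check [] (Intro (Hyp 0)) (Atom "a" :=> Atom "a")  ==>  True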

Jumbo Lambda Calculus

Two new papers by Paul Blain Levy, "Jumbo Lambda Calculus" and the extended version "Jumbo Connectives in Type Theory and Logic", are available on his web page. Part of the abstract:

We make an argument that, for any study involving computational effects such as divergence or continuations, the traditional syntax of simply typed lambda-calculus cannot be regarded as canonical, because standard arguments for canonicity rely on isomorphisms that may not exist in an effectful setting. To remedy this, we define a "jumbo lambda-calculus" that fuses the traditional connectives together into more general ones, so-called "jumbo connectives". We provide two pieces of evidence for our thesis that the jumbo formulation is advantageous.

(From the types list.)

Lambda Calculus course (Oxford)

A sixteen-hour lecture course aimed at final-year computer science undergraduates and MSc students.

The course notes are detailed and include things like call-by-value LC, Böhm's Theorem, and several interesting mini-projects for students.

Inverse typechecker and theorem proving in intuitionistic and classical logics

Another cool demonstration from Oleg:

I'd like to point out a different take on Djinn:

http://cvs.sourceforge.net/viewcvs.py/kanren/kanren/mini/type-inference.scm

http://cvs.sourceforge.net/viewcvs.py/kanren/kanren/mini/logic.scm

The first defines the Hindley-Milner typechecking relation for a
language with polymorphic let, sums and products. We use the Scheme
notation for the source language (as explained at the beginning of the
first file); ML or Haskell-like notations are straightforward. The
notation for type terms is infix, with the right-associative arrow.

The typechecking relation relates a term and its type: given a term we
obtain its type. The relation is pure and so it can work in reverse: given a type, we can obtain terms that have this type. Or, we can give a term with blanks and a type with blanks, and ask the relation to fill in the blanks.
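
The search underlying this can be approximated directly in a functional language. A toy Haskell sketch (mine, not Oleg's relational code, which also handles blanks in terms and polymorphic let): enumerate, up to a depth bound, the eta-long normal terms inhabiting a simple type.

    -- Toy "typechecker in reverse" for simple types, de Bruijn indices.
    data Ty = TV String | Ty :-> Ty  deriving (Eq, Show)
    infixr 5 :->

    data Tm = V Int | Lam Tm | Tm :@ Tm  deriving Show

    -- Split a type into its argument types and atomic result.
    unfold :: Ty -> ([Ty], Ty)
    unfold (a :-> b) = let (as, r) = unfold b in (a : as, r)
    unfold r = ([], r)

    -- All eta-long normal inhabitants of a type, up to a depth bound:
    -- at an arrow type, bind a lambda; at an atom, pick a hypothesis
    -- whose result matches and search for all of its arguments.
    inhabit :: Int -> [Ty] -> Ty -> [Tm]
    inhabit 0 _ _ = []
    inhabit n ctx (a :-> b) = Lam <$> inhabit (n - 1) (a : ctx) b
    inhabit n ctx t =
      [ foldl (:@) (V i) args
      | (i, ty) <- zip [0 ..] ctx
      , let (argTys, res) = unfold ty
      , res == t
      , args <- mapM (inhabit (n - 1) ctx) argTys ]

    -- inhabit 3 [] (TV "a" :-> TV "a")  ==>  [Lam (V 0)]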

As an example, the end of the file type-inference.scm shows the derivation for the terms call/cc, shift and reset from their types in the continuation monad. Given the type

(((a -> . ,(cont 'b 'r)) -> . ,(cont 'b 'b)) -> . ,(cont 'a 'b))

we get the expression for shift:

   (lambda (_.0) (lambda (_.1)
	((_.0 (lambda (_.2) (lambda (_.3) (_.3 (_.1 _.2)))))
	 (lambda (_.4) _.4))))

It took only 2 milliseconds.

More interesting is using the typechecker for proving theorems in
intuitionistic logic: see logic.scm. We formulate the proposition in types, for example:

  (,(neg '(a * b)) -> . ,(neg (neg `(,(neg 'a) + ,(neg 'b)))))

This is one direction of de Morgan's law. In intuitionistic logic, the law is more involved:

	NOT (A & B) == NOTNOT (NOT A | NOT B)

The system gives us the corresponding term, the proof:

(lambda (_.0)
      (lambda (_.1) 
	(_.1 (inl (lambda (_.2) 
		    (_.1 (inr (lambda (_.3) (_.0 (cons _.2 _.3))))))))))
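
For comparison, the same proof term transcribes directly into Haskell, where the typechecker (running forwards this time) confirms it. A sketch, encoding intuitionistic negation with the empty type:

    import Data.Void (Void)

    type Not a = a -> Void

    -- NOT (A & B)  implies  NOTNOT (NOT A | NOT B); the inl/inr term above.
    deMorgan :: Not (a, b) -> Not (Not (Either (Not a) (Not b)))
    deMorgan f k = k (Left (\a -> k (Right (\b -> f (a, b)))))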

The de-typechecker can also prove theorems in classical logic,
via double-negation (aka CPS) translation. The second part of
logic.scm demonstrates that. We can formulate a proposition:

(neg (neg `(,(neg 'a) + ,(neg (neg 'a)))))

and get a (non-trivial) term

	(lambda (_.0) (_.0 (inr (lambda (_.1) (_.0 (inl _.1))))))

It took only 403 ms. The proposition is the statement of the Law of Excluded Middle, under the double-negation translation.
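
Transcribed into Haskell in the same way, the derived term is the familiar double-negated excluded middle:

    import Data.Void (Void)

    type Not a = a -> Void

    -- NOTNOT (NOT A | NOTNOT A): the inr/inl term above, verbatim.
    lem :: Not (Not (Either (Not a) (Not (Not a))))
    lem k = k (Right (\na -> k (Left na)))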

So, programming languages can help in the study of logic.

Systematic search for lambda expressions

Systematic search for lambda expressions by Susumu Katayama

This paper presents a system for searching for desired small functional programs by just generating a sequence of type-correct programs in a systematic and exhaustive manner and evaluating them. The main goal of this line of research is to ease functional programming, along with the subgoal to provide an axis to evaluate heuristic approaches to program synthesis such as genetic programming by telling the best performance possible by exhaustive search algorithms. While our previous approach to that goal used combinatory expressions in order to simplify the synthesis process, which led to redundant combinator expressions with complex types, this time we use de Bruijn lambda expressions and enjoy improved results.
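
To get a feel for the search space, here is a much cruder Haskell sketch (mine, and untyped, whereas the paper's search is type-directed): enumerate closed de Bruijn lambda-terms by size.

    data Tm = V Int | Lam Tm | App Tm Tm  deriving Show

    -- All terms of exactly the given size, under `depth` enclosing binders.
    terms :: Int -> Int -> [Tm]
    terms n depth
      | n <= 0    = []
      | n == 1    = [V i | i <- [0 .. depth - 1]]
      | otherwise =
          [Lam b | b <- terms (n - 1) (depth + 1)]
          ++ [ App f x | k <- [1 .. n - 2]
                       , f <- terms k depth
                       , x <- terms (n - 1 - k) depth ]

    -- terms 3 0  ==>  [Lam (Lam (V 0)), Lam (Lam (V 1))]

Filtering such a stream by type (or, as in the paper, generating only type-correct terms in the first place) and evaluating the candidates against examples is the essence of exhaustive program search.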

Although the algorithm is slow and only works on small expressions, it looks helpful. Perhaps there could be some synthesis with Hoogλe, or whatever the name is after Google makes them change it.

What good is Strong Normalization in Programming Languages?

There's a neat thread about strong normalization happening on The Types Forum.

If you've ever wondered "Why is it useful to have {type systems, reductions, operations, ...} that always terminate?", this may illuminate you.
Here are some snippets to give you a feel for the discussion:

I think it is clearer to split Gérard's original question in two:

  1. Is strong normalization useful for programming languages ?
  2. Is the proof of strong normalization via type systems applicable for programming languages?

-- François-Régis Sinot

Termination is good:

If a program is a solution to a problem then knowing that it terminates
for any input is an important aspect of it being a solution. Often the
best way to see that it is terminating is expressing it in a language
where all programs terminate. The programming language Epigram is an
example of an experimental language which is intended to be terminating
(even though not all conditions are enforced in the current version), see
http://www.e-pig.org/ for more information.

-- Thorsten Altenkirch

Termination is good!

I think the moral sense of strong normalization is that a program
in a strictly-typed language can only diverge as a result of some
programming construct, which _explicitly_ permits looping, like
iteration, recursion etc. My favourite example here is that the
"big Omega" can be written in Algol 60, because procedure types
in this language are not fully specified.

-- Pawel Urzyczyn

Termination is good, and with fixpoints is Turing-complete?

Another way to put this is that data structures should be definable in a
strongly-normalising language so that data access, etc. is guaranteed to
terminate. Then add fixpoints or loops to describe arbitrary computations.

-- Barry Jay
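
In Haskell terms, Jay's recipe might look like this (a sketch of the idea, not code from the thread): keep data access in the terminating, structurally recursive fragment, and reintroduce arbitrary computation through an explicit fixpoint.

    import Data.Function (fix)

    -- Structural recursion over data: always terminates.
    sumList :: [Int] -> Int
    sumList = foldr (+) 0

    -- General recursion, opted into explicitly via fix; termination
    -- is now the programmer's problem (famously open for Collatz).
    collatz :: Int -> Int
    collatz = fix (\rec n ->
      if n <= 1 then 0
      else 1 + rec (if even n then n `div` 2 else 3 * n + 1))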

Terminating reductions allow exhaustive application of optimizations:

In a compiler, if a set of reductions is strongly normalizing, then the compiler can apply them exhaustively to an intermediate-language term without fear of looping. (We rely on this in the MLj and SML.NET compilers' "simplify" compilation phases, which apply simple reductions and directed commuting conversions until a normal form is reached. Though it has to be said that it's not the classical SN results that are relevant here, but something more specific to our intermediate language and the simplifying reductions that we employ).

-- Andrew Kennedy
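
Kennedy's point is easy to illustrate with a toy rewrite system (a sketch, nothing like the actual MLj/SML.NET simplifier): every rule strictly shrinks the term, so applying the rules exhaustively cannot loop.

    import Control.Applicative ((<|>))

    data E = Lit Int | Add E E | Mul E E  deriving (Eq, Show)

    -- One rewrite step, if any rule applies; every rule shrinks the term.
    step :: E -> Maybe E
    step (Add (Lit a) (Lit b)) = Just (Lit (a + b))
    step (Mul (Lit a) (Lit b)) = Just (Lit (a * b))
    step (Add (Lit 0) e) = Just e
    step (Mul (Lit 1) e) = Just e
    step (Add a b) = ((`Add` b) <$> step a) <|> (Add a <$> step b)
    step (Mul a b) = ((`Mul` b) <$> step a) <|> (Mul a <$> step b)
    step _ = Nothing

    -- Apply exhaustively; the size measure guarantees termination.
    simplify :: E -> E
    simplify e = maybe e simplify (step e)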

Rene Vestergaard also gave a link to a 2004 discussion of strong normalization on the rewriting list.

adbmaL

... or how is the title of this paper pronounced?
We make the notion of scope in the lambda-calculus explicit. To that end, the syntax of the lambda-calculus is extended with an end-of-scope operator [adbmal], matching the usual opening of a scope due to lambda. Accordingly, beta-reduction is extended to the set of scoped lambda-terms by performing minimal scope extrusion before performing replication as usual. We show confluence of the resulting scoped beta-reduction. Confluence of beta-reduction for the ordinary lambda-calculus is obtained as a corollary, by extruding scopes maximally before forgetting them altogether. Only in this final forgetful step is alpha-equivalence needed. All our proofs have been verified in Coq.

While playing with my own lambda-machine (a derivative of the CEK machine, in Java) I decided that I would like to control scope better, so I went looking and found this paper.

See also Lambdascope, previously mentioned on LtU.

A Concurrent Lambda Calculus with Futures

We introduce a new concurrent lambda calculus with futures, lambda(fut), to model the operational semantics of Alice, a concurrent extension of ML. lambda(fut) is a minimalist extension of the call-by-value lambda-calculus that yields the full expressiveness to define, combine, and implement a variety of standard concurrency constructs such as channels, semaphores, and ports. We present a linear type system for lambda(fut) by which the safety of such definitions and their combinations can be proved: Well-typed implementations cannot be corrupted in any well-typed context.
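
The future construct itself is easy to approximate in Haskell (a sketch of the concept only, not of lambda(fut) or Alice ML): a future is a handle to a value being computed concurrently, and touching it blocks until the value arrives.

    import Control.Concurrent (MVar, forkIO, newEmptyMVar, putMVar, readMVar)

    newtype Future a = Future (MVar a)

    -- Spawn the computation; return at once with a handle to its result.
    future :: IO a -> IO (Future a)
    future act = do
      v <- newEmptyMVar
      _ <- forkIO (act >>= putMVar v)
      return (Future v)

    -- Block until the concurrently computed value is available.
    touch :: Future a -> IO a
    touch (Future v) = readMVar v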

To all the fans of Mozart and especially Stockhausen :-)

A Formulae-as-Types Interpretation of Subtractive Logic

We present a formulae-as-types interpretation of Subtractive Logic (i.e. bi-intuitionistic logic). This presentation is two-fold: we first define a very natural restriction of the lambda-μ-calculus which is closed under reduction and whose type system is a constructive restriction of Classical Natural Deduction. Then we extend this deduction system conservatively to Subtractive Logic. From a computational standpoint, the resulting calculus provides a type system for first-class coroutines (a restricted form of first-class continuations).

Yet another connection between subtractive logic and control. I remember the author being mentioned on LtU before, but I cannot find any citations.
