Type Theory

Jumbo Lambda Calculus

Two new papers by Paul Blain Levy, "Jumbo Lambda Calculus" and the extended version "Jumbo Connectives in Type Theory and Logic", are available on his web page. Part of the abstract:

We make an argument that, for any study involving computational effects such as divergence or continuations, the traditional syntax of simply typed lambda-calculus cannot be regarded as canonical, because standard arguments for canonicity rely on isomorphisms that may not exist in an effectful setting. To remedy this, we define a "jumbo lambda-calculus" that fuses the traditional connectives together into more general ones, so-called "jumbo connectives". We provide two pieces of evidence for our thesis that the jumbo formulation is advantageous.

(From the types list.)
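
To see the kind of isomorphism the abstract alludes to, here is a small Haskell illustration (my own, not from the paper): in a pure setting, a function into a pair is interchangeable with a pair of functions, but an observable effect breaks the round trip.

    import Data.IORef

    -- Purely, (a -> (b, c)) and (a -> b, a -> c) are isomorphic.
    -- With effects, the "isomorphism" duplicates the effect of f.
    split :: (a -> IO (b, c)) -> (a -> IO b, a -> IO c)
    split f = (fmap fst . f, fmap snd . f)

    main :: IO ()
    main = do
      counter <- newIORef (0 :: Int)
      let f x = do
            modifyIORef counter (+ 1)   -- an observable effect
            pure (x :: Int, x + 1)
      let (g, h) = split f
      _ <- g 10
      _ <- h 10
      readIORef counter >>= print       -- prints 2: the effect ran twice

In a pure calculus this isomorphism is part of what licenses treating the usual binary connectives as canonical; once effects are observable the argument no longer goes through, which is the gap the jumbo connectives are meant to close.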

Building Interpreters by Composing Monads

Building Interpreters by Composing Monads

We exhibit a set of functions coded in Haskell that can be used as building blocks to construct a variety of interpreters for Lisp-like languages. The building blocks are joined merely through functional composition. Each building block contributes code to support a specific feature, such as numbers, continuations, function calls, or nondeterminism. The result of composing some number of building blocks is a parser, an interpreter, and a printer that support exactly the expression forms and data types needed for the combined set of features, and no more.

The data structures are organized as pseudomonads, a generalization of monads that allows composition. Functional composition of the building blocks implies type composition of the relevant pseudomonads.

So it is actually about building interpreters by composing pseudomonads.

PS: I stumbled upon this paper while trying to factor an interpreter into a set of features (and yes, I tried to package them as monads).
After a day of fighting with undecidable instances and rigid type variables I gave up and started googling; as it turns out, I was trying to reinvent the wheel.
Any comments on how pseudomonads relate to arrows (and other generalizations of monads) are appreciated.
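
For contrast, here is a minimal sketch of the now-standard way to stack interpreter features with monad transformers (mtl); this is an alternative to the paper's pseudomonads, not a rendering of them, and all names are mine.

    import Control.Monad.State (StateT, evalStateT, get, put)
    import Control.Monad.Except (ExceptT, runExceptT, throwError)

    -- Each transformer layer contributes one feature; the interpreter
    -- monad is assembled by stacking the layers.
    type Interp = StateT Int (ExceptT String IO)

    -- Feature: a mutable counter (a stand-in for a store).
    tick :: Interp Int
    tick = do
      n <- get
      put (n + 1)
      pure n

    -- Feature: failure with an error message.
    checked :: Int -> Interp Int
    checked n
      | n > 2     = throwError "counter overflow"
      | otherwise = pure n

    -- Running the stack peels the layers off in order.
    runInterp :: Interp a -> IO (Either String a)
    runInterp m = runExceptT (evalStateT m 0)

The composition problem the paper tackles is visible even here: transformer layers do not commute in general, so the order of stacking is itself a semantic choice.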

Typed Concurrent Programming with Logic Variables

Typed Concurrent Programming with Logic Variables

We present a concurrent higher-order programming language called Plain and a concomitant static type system. Plain is based on logic variables and computes with possibly partial data structures. The data structures of Plain are procedures, cells, and records. Plain's type system features record-based subtyping, bounded existential polymorphism, and access modalities distinguishing between reading and writing.

You may want to compare this with The Oz Programming Model (OPM), which

... is a concurrent programming model subsuming higher-order functional and object-oriented programming as facets of a general model. This is particularly interesting for concurrent object-oriented programming, for which no comprehensive formal model existed until now. The model can be extended so that it can express encapsulated problem solvers generalizing the problem solving capabilities of constraint logic programming.

Another paper on OPM is The Operational Semantics of Oz.

In short, the model of Plain is based on that of Oz with the main differences being:

  1. Plain statically types programs using a type system with subtyping, while Oz is latently typed.
  2. Consequently, Plain drops support for unification in favor of a single-assignment operation (a sketch of such an operation follows below).
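
To make the second point concrete, here is a minimal sketch in Haskell (my names, not Plain syntax) of a single-assignment logic variable: readers block until the variable is bound, and binding it a second time is an error.

    import Control.Concurrent (forkIO, threadDelay)
    import Control.Concurrent.MVar

    -- A single-assignment variable: dataflow synchronization
    -- without full unification.
    newtype IVar a = IVar (MVar a)

    newIVar :: IO (IVar a)
    newIVar = IVar <$> newEmptyMVar

    -- Bind exactly once; a second bind is a runtime error.
    bind :: IVar a -> a -> IO ()
    bind (IVar v) x = do
      ok <- tryPutMVar v x
      if ok then pure () else error "IVar already bound"

    -- Reading blocks until the variable is bound.
    readIVar :: IVar a -> IO a
    readIVar (IVar v) = readMVar v

    main :: IO ()
    main = do
      v <- newIVar
      _ <- forkIO (threadDelay 10000 >> bind v (42 :: Int))
      readIVar v >>= print   -- blocks until the writer binds v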

Proofs are Programs: 19th Century Logic and 21st Century Computing

Proofs are Programs: 19th Century Logic and 21st Century Computing

As the 19th century drew to a close, logicians formalized an ideal notion of proof. They were driven by nothing other than an abiding interest in truth, and their proofs were as ethereal as the mind of God. Yet within decades these mathematical abstractions were realized by the hand of man, in the digital stored-program computer. How it came to be recognized that proofs and programs are the same thing is a story that spans a century, a chase with as many twists and turns as a thriller. At the end of the story is a new principle for designing programming languages that will guide computers into the 21st century.

This paper by Philip Wadler appeared as a Dr. Dobb's article in 2000 and has a matching Netcast interview.

This nifty paper starts with Frege's Begriffsschrift in 1879, proceeds through Gentzen's sequent calculus and Church's untyped and then typed lambda calculus, and brings it all together in the Curry-Howard correspondence. For the grand finale, Wadler connects this to theorem provers and proof-carrying code and gives commercial examples where it all pays off.
It is an enjoyable story and a fun introduction to type theory and the Curry-Howard correspondence.

For more of Wadler's writings along these lines check out his History of Logic and Programming Languages paper collection.


A constraint-based approach to guarded algebraic data types

A constraint-based approach to guarded algebraic data types

We study HMG(X), an extension of the constraint-based type system HM(X) with deep pattern matching, polymorphic recursion, and guarded algebraic data types. Guarded algebraic data types subsume the concepts known in the literature as indexed types, guarded recursive datatype constructors, (first-class) phantom types, and equality qualified types, and are closely related to inductive types. Their characteristic property is to allow every branch of a case construct to be typechecked under different assumptions about the type variables in scope. We prove that HMG(X) is sound and that, provided recursive definitions carry a type annotation, type inference can be reduced to constraint solving. Constraint solving is decidable, at least for some instances of X, but prohibitively expensive. Effective type inference for guarded algebraic data types is left as an issue for future research.

Constraint-based type inference for guarded algebraic data types

Constraint-based type inference for guarded algebraic data types

Guarded algebraic data types subsume the concepts known in the literature as indexed types, guarded recursive datatype constructors, and first-class phantom types, and are closely related to inductive types. They have the distinguishing feature that, when typechecking a function defined by cases, every branch may be checked under different assumptions about the type variables in scope. This mechanism allows exploiting the presence of dynamic tests in the code to produce extra static type information.

We propose an extension of the constraint-based type system HM(X) with deep pattern matching, guarded algebraic data types, and polymorphic recursion. We prove that the type system is sound and that, provided recursive function definitions carry a type annotation, type inference may be reduced to constraint solving. Then, because solving arbitrary constraints is expensive, we further restrict the form of type annotations and prove that this allows producing so-called tractable constraints. Last, in the specific setting of equality, we explain how to solve tractable constraints.

To the best of our knowledge, this is the first generic and comprehensive account of type inference in the presence of guarded algebraic data types.
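
For readers who have not met guarded algebraic data types, here is a small Haskell example (mine, not from the papers) of the characteristic property both abstracts describe: each constructor refines the type parameter, so each branch of a case is checked under a different assumption about it.

    {-# LANGUAGE GADTs #-}

    -- Each constructor fixes or refines the index 'a'.
    data Expr a where
      IntLit  :: Int  -> Expr Int
      BoolLit :: Bool -> Expr Bool
      Add     :: Expr Int -> Expr Int -> Expr Int
      If      :: Expr Bool -> Expr a -> Expr a -> Expr a

    -- In each branch the typechecker assumes a different equation on 'a'.
    eval :: Expr a -> a
    eval (IntLit n)  = n                 -- here a ~ Int
    eval (BoolLit b) = b                 -- here a ~ Bool
    eval (Add x y)   = eval x + eval y
    eval (If c t e)  = if eval c then eval t else eval e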

Inverse typechecker and theorem proving in intuitionistic and classical logics

Another cool demonstration from Oleg:

I'd like to point out a different take on Djinn:

http://cvs.sourceforge.net/viewcvs.py/kanren/kanren/mini/type-inference.scm

http://cvs.sourceforge.net/viewcvs.py/kanren/kanren/mini/logic.scm

The first defines the Hindley-Milner typechecking relation for a language with polymorphic let, sums and products. We use the Scheme notation for the source language (as explained at the beginning of the first file); ML or Haskell-like notations are straightforward. The notation for type terms is infix, with the right-associative arrow.

The typechecking relation relates a term and its type: given a term, we obtain its type. The relation is pure, so it can work in reverse: given a type, we can obtain terms that have this type. Or we can give a term with blanks and a type with blanks, and ask the relation to fill in the blanks.
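
As a rough typed analogue of running the relation in reverse (my sketch, unrelated to the kanren code), term inference is proof search: enumerate the normal-form lambda terms that inhabit a given type. Here is a depth-bounded version for the implication fragment in Haskell:

    -- Types and terms of the implication fragment.
    data Ty = TVar String | Ty :-> Ty deriving (Eq, Show)
    infixr 5 :->

    data Tm = Var String | Lam String Tm | App Tm Tm deriving Show

    -- Search for eta-long normal forms: introduce a lambda for an
    -- arrow goal; for an atomic goal, apply a hypothesis whose
    -- conclusion matches and recursively search for its premises.
    prove :: Int -> [(String, Ty)] -> Ty -> [Tm]
    prove 0 _ _ = []
    prove d ctx (a :-> b) =
      [ Lam x body
      | let x = "x" ++ show (length ctx)
      , body <- prove (d - 1) ((x, a) : ctx) b ]
    prove d ctx goal =
      [ foldl App (Var x) args
      | (x, ty) <- ctx
      , let (premises, conclusion) = unfoldArrow ty
      , conclusion == goal
      , args <- mapM (prove (d - 1) ctx) premises ]

    unfoldArrow :: Ty -> ([Ty], Ty)
    unfoldArrow (a :-> b) = let (ps, c) = unfoldArrow b in (a : ps, c)
    unfoldArrow t         = ([], t)

    -- prove 3 [] (TVar "a" :-> TVar "a")  ==>  [Lam "x0" (Var "x0")]

Oleg's relational version is far more general: it handles let-polymorphism, sums and products, and can fill blanks in a partially given term as well as in the type.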

As an example, the end of the file type-inference.scm shows the derivation for the terms call/cc, shift and reset from their types in the continuation monad. Given the type

(((a -> . ,(cont 'b 'r)) -> . ,(cont 'b 'b)) -> . ,(cont 'a 'b))

we get the expression for shift:

    (lambda (_.0)
      (lambda (_.1)
        ((_.0 (lambda (_.2) (lambda (_.3) (_.3 (_.1 _.2)))))
         (lambda (_.4) _.4))))

It took only 2 milliseconds.

More interesting is using the typechecker for proving theorems in intuitionistic logic: see logic.scm. We formulate the proposition in types, for example:

  (,(neg '(a * b)) -> . ,(neg (neg `(,(neg 'a) + ,(neg 'b)))))

This is one direction of the de Morgan law. In intuitionistic logic, the de Morgan law is more involved:

    NOT (A & B) == NOTNOT (NOT A | NOT B)

The system gives us the corresponding term, the proof:

    (lambda (_.0)
      (lambda (_.1)
        (_.1 (inl (lambda (_.2)
                    (_.1 (inr (lambda (_.3) (_.0 (cons _.2 _.3))))))))))

The de-typechecker can also prove theorems in classical logic, via the double-negation (aka CPS) translation. The second part of logic.scm demonstrates that. We can formulate a proposition:

(neg (neg `(,(neg 'a) + ,(neg (neg 'a)))))

and get a (non-trivial) term

    (lambda (_.0) (_.0 (inr (lambda (_.1) (_.0 (inl _.1))))))

It took only 403 ms. The proposition is the statement of the Law of Excluded Middle, in the double-negation translation.

So, programming languages can help in the study of logic.

Exceptional syntax

A nice paper apropos of tail calls and exceptions:
Nick Benton and Andrew Kennedy. 2001. Exceptional syntax. Journal of Functional Programming 11(4): 395-410.

From the points of view of programming pragmatics, rewriting and operational semantics, the syntactic construct used for exception handling in ML-like programming languages, and in much theoretical work on exceptions, has subtly undesirable features. We propose and discuss a more well-behaved construct.
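
The paper's complaint, roughly, is that with ML-style "try e handle h" the successful continuation of e still sits inside the handler's scope, so it cannot be a tail call and its exceptions are caught by accident. Here is a hedged Haskell rendering of their try-in-unless shape (names mine, not the paper's ML syntax):

    import Control.Exception (SomeException, try)

    -- Only e1 is protected; on success the continuation runs outside
    -- the handler's scope, so it may be a tail call and its own
    -- exceptions propagate normally.
    tryInUnless :: IO a -> (a -> IO b) -> (SomeException -> IO b) -> IO b
    tryInUnless e1 continue handler = do
      r <- try e1
      case r of
        Left ex -> handler ex
        Right x -> continue x

With a plain catch one would write (e1 >>= continue) `catch` handler, which wrongly protects the continuation as well; separating the success continuation from the handled expression is the paper's point.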

Module Mania: A Type-Safe, Separately Compiled, Extensible Interpreter

Module Mania: A Type-Safe, Separately Compiled, Extensible Interpreter

To illustrate the utility of a powerful modules language, this paper presents the embedded interpreter Lua-ML. The interpreter combines extensibility and separate compilation without compromising type safety. Its types are extended by applying a sum constructor to built-in types and to extensions, then tying a recursive knot using a two-level type; the sum constructor is written using an ML functor. The initial basis is extended by composing initialization functions from individual extensions, also using ML functors.

This is an excellent example of how the ML module language doesn't merely provide encapsulation but also strictly adds expressive power. It also demonstrates how a dynamic language (Lua) can be embedded in the statically-typed context of ML. Finally, it demonstrates that none of this need come at the expense of separate compilation or extensibility. Norman Ramsey's work is always highly recommended.
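
The two-level-type trick has a Haskell rendering as well; here is a minimal sketch (my reconstruction, not Lua-ML's OCaml): leave the value type open, form the sum of built-ins and extensions, and tie the recursive knot last.

    newtype Fix f = Fix (f (Fix f))

    -- Built-in values, parameterized over the final value type v.
    data Core v = Num Double | Str String | Table [(v, v)]

    -- An extension contributes its own constructors.
    data Ext v = Closure (v -> v)

    -- The sum of built-ins and an extension (performed by an ML
    -- functor in Lua-ML).
    data Sum f g v = L (f v) | R (g v)

    -- Tie the knot: the value type of the extended interpreter.
    type Value = Fix (Sum Core Ext)

Extensions are compiled separately against the open types; only the final knot-tying step fixes Value, which is how the paper reconciles extensibility with separate compilation and type safety.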
