Type Theory
Proofs are Programs: 19th Century Logic and 21st Century Computing
As the 19th century drew to a close, logicians formalized an ideal notion of proof. They were driven by nothing other than an abiding interest in truth, and their proofs were as ethereal as the mind of God. Yet within decades these mathematical abstractions were realized by the hand of man, in the digital stored-program computer. How it came to be recognized that proofs and programs are the same thing is a story that spans a century, a chase with as many twists and turns as a thriller. At the end of the story is a new principle for designing programming languages that will guide computers into the 21st century.
This paper by Philip Wadler appeared as a Dr. Dobb's article in 2000 and has a matching Netcast interview.
This nifty paper starts with Frege's Begriffsschrift in 1879, goes through Gentzen's sequent calculus and Church's untyped and then typed lambda calculus, and brings it all together to describe the Curry-Howard correspondence. For the grand finale, Wadler connects this to theorem provers and proof-carrying code and gives commercial examples where it all pays off.
This is an enjoyable story and a fun introduction to type theory and the Curry-Howard correspondence.
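To make the correspondence concrete: under Curry-Howard, a type reads as a proposition and a program of that type as its proof. A minimal sketch in TypeScript (my own illustration, not from Wadler's paper):

```typescript
// Proposition: (A -> B) and (B -> C) together imply (A -> C)
// (hypothetical syllogism). The program below inhabits that type,
// so by Curry-Howard it *is* a proof of the proposition.
const compose =
  <A, B, C>(f: (a: A) => B, g: (b: B) => C) =>
  (a: A): C =>
    g(f(a));

// The same proof, used as an ordinary program:
const inc = (n: number) => n + 1;
const show = (n: number) => `value: ${n}`;
const incThenShow = compose(inc, show); // incThenShow(41) === "value: 42"
```

The point of the story is exactly this double reading: `compose` is simultaneously a useful combinator and a derivation of a logical implication.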
For more of Wadler's writings along these lines, check out his History of Logic and Programming Languages paper collection.
A constraint-based approach to guarded algebraic data types
We study HMG(X), an extension of the constraint-based type system HM(X) with deep pattern matching, polymorphic recursion, and guarded algebraic data types. Guarded algebraic data types subsume the concepts known in the literature as indexed types, guarded recursive datatype constructors, (first-class) phantom types, and equality-qualified types, and are closely related to inductive types. Their characteristic property is to allow every branch of a case construct to be typechecked under different assumptions about the type variables in scope. We prove that HMG(X) is sound and that, provided recursive definitions carry a type annotation, type inference can be reduced to constraint solving. Constraint solving is decidable, at least for some instances of X, but prohibitively expensive. Effective type inference for guarded algebraic data types is left as an issue for future research.
Constraint-based type inference for guarded algebraic data types
Guarded algebraic data types subsume the concepts known in the literature as indexed types, guarded recursive datatype constructors, and first-class phantom types, and are closely related to inductive types. They have the distinguishing feature that, when typechecking a function defined by cases, every branch may be checked under different assumptions about the type variables in scope. This mechanism allows exploiting the presence of dynamic tests in the code to produce extra static type information.
We propose an extension of the constraint-based type system HM(X) with deep pattern matching, guarded algebraic data types, and polymorphic recursion. We prove that the type system is sound and that, provided recursive function definitions carry a type annotation, type inference may be reduced to constraint solving. Then, because solving arbitrary constraints is expensive, we further restrict the form of type annotations and prove that this allows producing so-called tractable constraints. Last, in the specific setting of equality, we explain how to solve tractable constraints.
To the best of our knowledge, this is the first generic and comprehensive account of type inference in the presence of guarded algebraic data types.
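The branch-local typing assumptions the abstract describes can be made concrete without any HM(X) machinery. A rough TypeScript analogue of a guarded algebraic data type (my own illustrative encoding, not the paper's formalism): each constructor pins down the datatype's parameter, so each `eval` branch is checked under its own assumption about `T`.

```typescript
// Exp<T>: expressions whose evaluation yields a T.
abstract class Exp<T> {
  abstract eval(): T;
}

// Lit fixes T = number: its branch is checked knowing T is number.
class Lit extends Exp<number> {
  constructor(readonly n: number) { super(); }
  eval(): number { return this.n; }
}

// IsZero fixes T = boolean while its subexpression must yield a number.
class IsZero extends Exp<boolean> {
  constructor(readonly e: Exp<number>) { super(); }
  eval(): boolean { return this.e.eval() === 0; }
}

// Cond stays polymorphic, but its condition is statically boolean.
class Cond<T> extends Exp<T> {
  constructor(readonly c: Exp<boolean>,
              readonly t: Exp<T>,
              readonly f: Exp<T>) { super(); }
  eval(): T { return this.c.eval() ? this.t.eval() : this.f.eval(); }
}
```

Ill-typed terms such as `new IsZero(new IsZero(...))` are rejected statically, which is the extra static information the dynamic tag tests buy you.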
Another cool demonstration from Oleg:
I'd like to point out a different take on Djinn:
http://cvs.sourceforge.net/viewcvs.py/kanren/kanren/mini/typeinference.scm
http://cvs.sourceforge.net/viewcvs.py/kanren/kanren/mini/logic.scm
The first defines the Hindley-Milner typechecking relation for a language with polymorphic let, sums and products. We use the Scheme notation for the source language (as explained at the beginning of the first file); ML or Haskell-like notations are straightforward. The notation for type terms is infix, with the right-associative arrow.
The typechecking relation relates a term and its type: given a term, we obtain its type. The relation is pure and so it can work in reverse: given a type, we can obtain terms that have this type. Or, we can give a term with blanks and a type with blanks, and ask the relation to fill in the blanks.
As an example, the end of the file typeinference.scm shows the derivation for the terms call/cc, shift and reset from their types in the continuation monad. Given the type
(((a -> . ,(cont 'b 'r)) -> . ,(cont 'b 'b)) -> . ,(cont 'a 'b))
we get the expression for shift:
(lambda (_.0) (lambda (_.1)
((_.0 (lambda (_.2) (lambda (_.3) (_.3 (_.1 _.2)))))
(lambda (_.4) _.4))))
It took only 2 milliseconds.
More interesting is using the typechecker for proving theorems in intuitionistic logic: see logic.scm. We formulate the proposition in types, for example:
(,(neg '(a * b)) -> . ,(neg (neg `(,(neg 'a) + ,(neg 'b)))))
This is one direction of the de Morgan law. In intuitionistic logic, the de Morgan law is more involved:
NOT (A & B) == NOT NOT (NOT A | NOT B)
The system gives us the corresponding term, the proof:
(lambda (_.0)
(lambda (_.1)
(_.1 (inl (lambda (_.2)
(_.1 (inr (lambda (_.3) (_.0 (cons _.2 _.3))))))))))
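For readers who prefer typed notation, that synthesized proof term can be transcribed into TypeScript by hand (my translation; `Not<A, R>` encodes negation as a function into an answer type `R`):

```typescript
type Either<L, R> = { tag: "inl"; value: L } | { tag: "inr"; value: R };
const inl = <L, R>(value: L): Either<L, R> => ({ tag: "inl", value });
const inr = <L, R>(value: R): Either<L, R> => ({ tag: "inr", value });

// Negation as a function into an answer type R.
type Not<A, R> = (a: A) => R;

// NOT (A & B)  ->  NOT NOT (NOT A | NOT B): the Scheme term, typed.
// _.0 is p, _.1 is k; cons becomes the pair [a, b].
const deMorgan =
  <A, B, R>(p: Not<[A, B], R>) =>
  (k: Not<Either<Not<A, R>, Not<B, R>>, R>): R =>
    k(inl<Not<A, R>, Not<B, R>>((a: A) =>
      k(inr<Not<A, R>, Not<B, R>>((b: B) => p([a, b])))));
```

The body mirrors the synthesized Scheme term line for line; the type annotations are exactly the proposition being proved.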
The de-typechecker can also prove theorems in classical logic, via the double-negation (aka CPS) translation. The second part of logic.scm demonstrates that. We can formulate a proposition:
(neg (neg `(,(neg 'a) + ,(neg (neg 'a)))))
and get a (nontrivial) term
(lambda (_.0) (_.0 (inr (lambda (_.1) (_.0 (inl _.1))))))
It took only 403 ms. The proposition is the statement of the Law of Excluded Middle, in the double-negation translation.
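The excluded-middle witness can likewise be typed by hand (my transcription, with negation encoded as a function into an answer type `R`, the usual CPS reading):

```typescript
type Either<L, R> = { tag: "inl"; value: L } | { tag: "inr"; value: R };
const inl = <L, R>(value: L): Either<L, R> => ({ tag: "inl", value });
const inr = <L, R>(value: R): Either<L, R> => ({ tag: "inr", value });

// Negation as a function into an answer type R (double-negation view).
type Not<A, R> = (a: A) => R;

// NOT NOT (A | NOT A): the synthesized term, _.0 is k, _.1 is a.
const lem =
  <A, R>(k: Not<Either<A, Not<A, R>>, R>): R =>
    k(inr<A, Not<A, R>>((a: A) => k(inl<A, Not<A, R>>(a))));
```

Classically the term looks like it conjures an `A` from nothing; operationally it first offers the "NOT A" branch and, if the context ever applies it to a concrete `A`, backtracks and re-invokes the context with that `A` on the left.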
So, programming languages can help in the study of logic.
A nice paper apropos of tail calls and exceptions:
Nick Benton and Andrew Kennedy. 2001. Exceptional syntax. Journal of Functional Programming 11(4): 395-410.
From the points of view of programming pragmatics, rewriting and operational semantics, the syntactic construct used for exception handling in ML-like programming languages, and in much theoretical work on exceptions, has subtly undesirable features. We propose and discuss a more well-behaved construct.
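The paper's construct, roughly `try x <= e1 in e2 unless handlers`, scopes the handlers over `e1` alone, so `e2` can sit in tail position. A loose TypeScript paraphrase of the difference (my own illustration with made-up helpers `parse` and `use`, not the paper's ML):

```typescript
const parse = (s: string): number => {
  const n = Number(s);
  if (Number.isNaN(n)) throw new Error("not a number");
  return n;
};
const use = (n: number): number => n * 2;

// Standard try/catch: the handler also covers the success continuation,
// so the call to use is NOT a tail call (the handler is still live) and
// any exception thrown by use is accidentally intercepted.
function standard(s: string): number {
  try {
    const x = parse(s);
    return use(x); // still inside the handler's scope
  } catch {
    return -1;
  }
}

// The paper's construct scopes the handler over parse(s) alone;
// use(x) runs outside it, restoring tail position and letting its
// exceptions propagate to the caller.
function exceptional(s: string): number {
  let x: number;
  try {
    x = parse(s);
  } catch {
    return -1;
  }
  return use(x); // outside the handler; a genuine tail call
}
```

The second shape is exactly what the "exceptional syntax" construct expresses directly, without the awkward assignment-outside-try encoding.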
Module Mania: A Type-Safe, Separately Compiled, Extensible Interpreter
To illustrate the utility of a powerful modules language, this paper presents the embedded interpreter Lua-ML. The interpreter combines extensibility and separate compilation without compromising type safety. Its types are extended by applying a sum constructor to built-in types and to extensions, then tying a recursive knot using a two-level type; the sum constructor is written using an ML functor. The initial basis is extended by composing initialization functions from individual extensions, also using ML functors.
This is an excellent example of how the ML module language doesn't merely provide encapsulation but also strictly adds expressive power. It also demonstrates how a dynamic language (Lua) can be embedded in the statically typed context of ML. Finally, it demonstrates that none of this need come at the expense of separate compilation or extensibility. Norman Ramsey's work is always highly recommended.
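The two-level-type trick can be hinted at outside ML, very loosely, since TypeScript has no functors. The idea: the value type leaves both its recursive positions and its extension slot abstract, and the knot is tied once the extensions are composed in. All names here are my own:

```typescript
// Level one: the shape of values, with recursion abstracted as V
// and the extension slot abstracted as Ext (the sum constructor).
type ValueF<V, Ext> =
  | { tag: "nil" }
  | { tag: "num"; n: number }
  | { tag: "pair"; fst: V; snd: V }
  | { tag: "ext"; ext: Ext };

// An extension contributed by a separately compiled module
// (standing in for an ML functor argument).
type StringExt = { tag: "str"; s: string };

// Level two: tie the recursive knot at the composed sum.
interface Value { out: ValueF<Value, StringExt>; }

// A value mixing built-in and extension constructors.
const v: Value = {
  out: {
    tag: "pair",
    fst: { out: { tag: "num", n: 1 } },
    snd: { out: { tag: "ext", ext: { tag: "str", s: "hi" } } },
  },
};
```

In Lua-ML the functor application does this composition once per extension set while preserving separate compilation; the sketch only shows why the knot has to be tied *after* the sum is assembled.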
Generalized Algebraic Data Types and Object-Oriented Programming. Andrew Kennedy and Claudio Russo. OOPSLA, October 2005, San Diego, California.
Generalized algebraic data types (GADTs) have received much attention recently in the functional programming community. They generalize the type-parameterized datatypes of ML and Haskell by permitting constructors to produce different type instantiations of the same datatype. GADTs have a number of applications, including strongly-typed evaluators, generic pretty-printing, generic traversals and queries, and typed LR parsing. We show that existing object-oriented programming languages such as Java and C# can express GADT definitions, and a large class of GADT-manipulating programs, through the use of generics, subclassing, and virtual dispatch. However, some programs can be written only through the use of redundant runtime casts. We propose a generalization of the type constraint mechanisms of C# and Java to avoid the need for such casts, present a Visitor pattern for GADTs, and describe a switch construct as an alternative to virtual dispatch on datatypes. We formalize both extensions and prove a type soundness result.
I've been waiting for a while for this paper to be available online.
This paper is, of course, related to the other items posted here about GADTs. The examples in the introduction might be somewhat relevant to the recent discussion about the static versus dynamic features of Java, and its type system.
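The "redundant cast" the abstract mentions is easy to reproduce, since TypeScript's generics behave like Java's here. A hypothetical Visitor over a GADT-style `Exp<T>` (my sketch, not the paper's formalization):

```typescript
interface ExpVisitor<T, R> {
  visitLit(n: number): R;                       // only reachable when T = number
  visitPlus(l: Exp<number>, r: Exp<number>): R; // likewise
}

abstract class Exp<T> {
  abstract accept<R>(v: ExpVisitor<T, R>): R;
}

class Lit extends Exp<number> {
  constructor(readonly n: number) { super(); }
  accept<R>(v: ExpVisitor<number, R>): R { return v.visitLit(this.n); }
}

class Plus extends Exp<number> {
  constructor(readonly l: Exp<number>, readonly r: Exp<number>) { super(); }
  accept<R>(v: ExpVisitor<number, R>): R { return v.visitPlus(this.l, this.r); }
}

// An evaluator written as a visitor. In visitLit, T really is number,
// but the type system cannot exploit that equation, so a cast is
// required: the redundant cast the paper's extension eliminates.
class Evaluator<T> implements ExpVisitor<T, T> {
  visitLit(n: number): T { return n as unknown as T; }
  visitPlus(l: Exp<number>, r: Exp<number>): T {
    return (l.accept(new Evaluator<number>()) +
            r.accept(new Evaluator<number>())) as unknown as T;
  }
}
```

With per-case equational constraints of the kind the paper proposes (roughly, "in `visitLit`, assume `T = number`"), both casts would typecheck away.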
There's a neat thread about strong normalization happening on The Types Forum.
If you've ever wondered "Why is it useful to have {type systems, reductions, operations, ...} that always terminate?", this may illuminate you.
Here are some snippets to give you a feel for the discussion:
I think it is clearer to split Gérard's original question in two:
- Is strong normalization useful for programming languages?
- Is the proof of strong normalization via type systems applicable to programming languages?
-- François-Régis Sinot
Termination is good:
If a program is a solution to a problem then knowing that it terminates for any input is an important aspect of it being a solution. Often the best way to see that it is terminating is expressing it in a language where all programs terminate. The programming language Epigram is an example of an experimental language which is intended to be terminating (even though not all conditions are enforced in the current version); see http://www.epig.org/ for more information.
-- Thorsten Altenkirch
Termination is good!
I think the moral sense of strong normalization is that a program in a strictly-typed language can only diverge as a result of some programming construct which _explicitly_ permits looping, like iteration, recursion etc. My favourite example here is that the "big Omega" can be written in Algol 60, because procedure types in this language are not fully specified.
-- Pawel Urzyczyn
Termination is good, and with fixpoints is Turing complete?
Another way to put this is that data structures should be definable in a strongly-normalising language, so that data access, etc. is guaranteed to terminate. Then add fixpoints or loops to describe arbitrary computations.
-- Barry Jay
Terminating reductions allow exhaustive application of optimizations:
In a compiler, if a set of reductions is strongly normalizing, then the compiler can apply them exhaustively to an intermediate-language term without fear of looping. (We rely on this in the MLj and SML.NET compilers' "simplify" compilation phases, which apply simple reductions and directed commuting conversions until a normal form is reached. Though it has to be said that it's not the classical SN results that are relevant here, but something more specific to our intermediate language and the simplifying reductions that we employ.)
-- Andrew Kennedy
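Kennedy's point can be illustrated with a toy simplifier: each rewrite strictly shrinks the term, so applying the rules to a fixed point cannot loop. A sketch (my example, unrelated to MLj/SML.NET's actual rule set):

```typescript
type Term =
  | { tag: "lit"; n: number }
  | { tag: "add"; l: Term; r: Term }
  | { tag: "mul"; l: Term; r: Term };

const lit = (n: number): Term => ({ tag: "lit", n });

// One bottom-up pass of simple reductions: constant folding plus the
// unit laws x+0 -> x and x*1 -> x. Every rule strictly shrinks the
// term, so this rule set is strongly normalizing.
function simplify(t: Term): Term {
  if (t.tag === "lit") return t;
  const l = simplify(t.l), r = simplify(t.r);
  if (l.tag === "lit" && r.tag === "lit")
    return lit(t.tag === "add" ? l.n + r.n : l.n * r.n);
  if (t.tag === "add" && r.tag === "lit" && r.n === 0) return l;
  if (t.tag === "mul" && r.tag === "lit" && r.n === 1) return l;
  return t.tag === "add" ? { tag: "add", l, r } : { tag: "mul", l, r };
}

// Apply exhaustively: because the rules terminate, this loop cannot
// run forever, which is exactly what a compiler's "simplify" phase
// relies on.
function normalize(t: Term): Term {
  for (;;) {
    const u = simplify(t);
    if (JSON.stringify(u) === JSON.stringify(t)) return u;
    t = u;
  }
}
```

A non-terminating rule set (say, distributing multiplication over addition together with its inverse) would make `normalize` spin, which is why compilers orient such rules in one direction only.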
René Vestergaard also gave a link to a 2004 discussion of strong normalization on the rewriting list.