Type Theory

ClassicJava in PLT Redex

Classic Java

This collection is an implementation of (most of) ClassicJava, as defined
in "A Programmer's Reduction Semantics for Classes and Mixins," by Matthew
Flatt, Shriram Krishnamurthi, and Matthias Felleisen; in _Formal Syntax and
Semantics of Java_, Springer-Verlag LNCS 1523, pp. 241-269, 1999. A
tech-report version of the paper is also available at
<http://www.ccs.neu.edu/scheme/pubs/#tr97-293>. The implementation is
written in PLT Redex, also available through PLaneT. Please consult that
package's documentation for further details.

This might be interesting to folks curious about how to formalize a real language, or about how PLT Redex works in practice.

Generalized Algebraic Data Types and Object-Oriented Programming

Generalized Algebraic Data Types and Object-Oriented Programming. Andrew Kennedy and Claudio Russo. OOPSLA, October 2005, San Diego, California.

Generalized algebraic data types (GADTs) have received much attention recently in the functional programming community. They generalize the type-parameterized datatypes of ML and Haskell by permitting constructors to produce different type-instantiations of the same datatype. GADTs have a number of applications, including strongly-typed evaluators, generic pretty-printing, generic traversals and queries, and typed LR parsing. We show that existing object-oriented programming languages such as Java and C# can express GADT definitions, and a large class of GADT-manipulating programs, through the use of generics, subclassing, and virtual dispatch. However, some programs can be written only through the use of redundant run-time casts. We propose a generalization of the type constraint mechanisms of C# and Java to avoid the need for such casts, present a Visitor pattern for GADTs, and describe a switch construct as an alternative to virtual dispatch on datatypes. We formalize both extensions and prove a type soundness result.

I've been waiting for a while for this paper to be available online.

This paper is, of course, related to the other items posted here about GADTs. The examples in the introduction might be somewhat relevant to the recent discussion about the static versus dynamic features of Java, and its type system.
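
For readers who haven't met GADTs, here is the canonical strongly-typed evaluator, sketched in Haskell rather than in the paper's Java/C# encoding; the point to notice is that each constructor produces a different instantiation of Expr, which ordinary parameterized datatypes cannot do.

    {-# LANGUAGE GADTs #-}

    -- A minimal sketch of the GADT idea in Haskell (the paper's contribution
    -- is showing how definitions like this, and their consumers, can be
    -- encoded in Java/C# with generics, subclassing, and virtual dispatch).
    data Expr a where
      IntLit  :: Int  -> Expr Int
      BoolLit :: Bool -> Expr Bool
      Plus    :: Expr Int -> Expr Int -> Expr Int
      If      :: Expr Bool -> Expr a -> Expr a -> Expr a

    -- A strongly-typed evaluator: no tag checks, no casts, no error cases.
    eval :: Expr a -> a
    eval (IntLit n)   = n
    eval (BoolLit b)  = b
    eval (Plus e1 e2) = eval e1 + eval e2
    eval (If c t e)   = if eval c then eval t else eval e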

What good is Strong Normalization in Programming Languages?

There's a neat thread about strong normalization happening on The Types Forum.

If you've ever wondered "Why is it useful to have {type systems, reductions, operations, ...} that always terminate?", this may illuminate you.
Here are some snippets to give you a feel for the discussion:

I think it is clearer to split Gérard's original question in two:

  1. Is strong normalization useful for programming languages?
  2. Is the proof of strong normalization via type systems applicable for programming languages?

-- François-Régis Sinot

Termination is good:

If a program is a solution to a problem then knowing that it terminates
for any input is an important aspect of it being a solution. Often the
best way to see that it is terminating is expressing it in a language
where all programs terminate. The programming language Epigram is an
example of an experimental language which is intended to be terminating
(even though not all conditions are enforced in the current version), see
http://www.e-pig.org/ for more information.

-- Thorsten Altenkirch

Termination is good!

I think the moral sense of strong normalization is that a program
in a strictly-typed language can only diverge as a result of some
programming construct, which _explicitly_ permits looping, like
iteration, recursion etc. My favourite example here is that the
"big Omega" can be written in Algol 60, because procedure types
in this language are not fully specified.

-- Pawel Urzyczyn

Termination is good, and with fixpoints it's Turing complete?

Another way to put this is that data structures should be definable in a
strongly-normalising language so that data access, etc. is guaranteed to
terminate. Then add fixpoints or loops to describe arbitrary computations.

-- Barry Jay
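
The point in miniature, as a Haskell sketch of my own rather than anything from the thread: folds over finite data are structurally recursive and terminate, while a single explicit fixpoint combinator is what reintroduces arbitrary, and possibly divergent, computation.

    -- Structural recursion over data: terminates for every finite list.
    sumList :: [Int] -> Int
    sumList = foldr (+) 0

    -- The one construct that reintroduces general recursion (and divergence).
    fix :: (a -> a) -> a
    fix f = f (fix f)

    -- Written with fix: not known to terminate for all positive inputs.
    collatzSteps :: Int -> Int
    collatzSteps = fix (\rec n ->
      if n <= 1 then 0
      else 1 + rec (if even n then n `div` 2 else 3 * n + 1))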

Terminating reductions allows exhaustive applications of optimizations:

In a compiler, if a set of reductions is strongly normalizing, then the compiler can apply them exhaustively to an intermediate-language term without fear of looping. (We rely on this in the MLj and SML.NET compilers' "simplify" compilation phases, which apply simple reductions and directed commuting conversions until a normal form is reached. Though it has to be said that it's not the classical SN results that are relevant here, but something more specific to our intermediate language and the simplifying reductions that we employ).

-- Andrew Kennedy
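
A toy version of what such a "simplify" phase does, sketched in Haskell (nothing to do with the actual MLj/SML.NET code): every rule strictly shrinks the term, so applying the reductions exhaustively cannot loop.

    data Term = Lit Int | Add Term Term | Mul Term Term
      deriving (Eq, Show)

    -- One bottom-up pass of simple reductions: unit laws and constant folding.
    step :: Term -> Term
    step (Add (Lit 0) e)       = e
    step (Add e (Lit 0))       = e
    step (Mul (Lit 1) e)       = e
    step (Mul e (Lit 1))       = e
    step (Add (Lit a) (Lit b)) = Lit (a + b)
    step (Mul (Lit a) (Lit b)) = Lit (a * b)
    step (Add a b)             = Add (step a) (step b)
    step (Mul a b)             = Mul (step a) (step b)
    step e                     = e

    -- Because every rule makes the term strictly smaller, iterating to a
    -- fixed point terminates: this is the strong-normalization payoff.
    simplify :: Term -> Term
    simplify e = let e' = step e in if e' == e then e else simplify e'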

Rene Vestergaard also gave a link to a 2004 discussion of strong normalization on the rewriting list.

Extensible Records With Scoped Labels

Extensible Records With Scoped Labels
Daan Leijen
Records provide a safe and flexible way to construct data structures. We describe a natural approach to typing polymorphic and extensible records that is simple, easy to use in practice, and straightforward to implement. A novel aspect of this work is that records can contain duplicate labels, effectively introducing a form of scoping over the labels. Furthermore, it is a fully orthogonal extension to existing type systems and programming languages...
Last time one of Daan's papers was mentioned, there was a very positive response (after Frank reminded us a couple of times). This is equally good: clever, elegant, and clearly presented.

(Between this and the Sheard paper, it's a good week for practical type systems...)
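
To get a feel for the scoped-labels idea at the value level, here is a toy model of my own in Haskell (it says nothing about Leijen's type system, which is the actual contribution): a record is an ordered list of label/value pairs, duplicates are allowed, extension prepends, and selection or restriction acts on the most recent binding, uncovering any shadowed field.

    type Label    = String
    type Record a = [(Label, a)]

    -- Extension never fails, even if the label is already present:
    -- the new binding simply scopes over the old one.
    extend :: Label -> a -> Record a -> Record a
    extend l v r = (l, v) : r

    -- Selection finds the most recent binding of the label.
    select :: Label -> Record a -> Maybe a
    select = lookup

    -- Restriction removes only that most recent binding, so a previously
    -- shadowed field with the same label becomes visible again.
    restrict :: Label -> Record a -> Record a
    restrict _ [] = []
    restrict l ((l', v) : r)
      | l == l'   = r
      | otherwise = (l', v) : restrict l r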

Putting Curry-Howard to Work

via LtU forums:

Putting Curry-Howard to Work

The Curry-Howard isomorphism states that types are propositions and that programs are proofs. This allows programmers to state and enforce invariants of programs by using types. Unfortunately, the type systems of today's functional languages cannot directly express interesting properties of programs. To alleviate this problem, we propose the addition of three new features to functional programming languages such as Haskell: Generalized Algebraic Datatypes, Extensible Kind Systems, and the generation, propagation, and discharging of Static Propositions. These three new features are backward compatible with existing features, and combine to enable a new programming paradigm for functional programmers. This paradigm makes it possible to state and enforce interesting properties of programs using the type system, and it does this in manner that leaves intact the functional programming style, known and loved by functional programmers everywhere.
The paper is short and reasonably easy to read, and the examples are very realistic - check section 13 as a starter.

Getting halfway through the paper made me want to play with the implementation; getting to the last page left my hands itching to implement my own PL with a similar type system.

You are warned :)
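
For a taste of what "types as propositions" looks like with GADTs alone, here is a small sketch in present-day GHC Haskell, not the full set of extensions (extensible kinds, static propositions) the paper proposes: a value of type Even n is literally a proof object for the proposition "n is even".

    {-# LANGUAGE DataKinds, GADTs, KindSignatures #-}

    data Nat = Z | S Nat

    -- The proposition "n is even", as a datatype indexed by n.
    data Even :: Nat -> * where
      EvenZ :: Even 'Z                      -- zero is even
      EvenS :: Even n -> Even ('S ('S n))   -- if n is even, so is n+2

    -- Constructing a value discharges the proof obligation statically.
    fourIsEven :: Even ('S ('S ('S ('S 'Z))))
    fourIsEven = EvenS (EvenS EvenZ)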

Just What is it that Makes Martin-Löf's Type Theory so Different, so Appealing?

Martin-Löf's Type Theory (M-LTT) was developed by Per Martin-Löf in the early 1970s, as a constructive foundation for mathematics. The theory is clear and simple. Because of this it has been used for everything from program derivation to handling donkey sentences.

In this talk we will present and give a flavour of Martin-Löf's Type Theory. We will highlight the use of proof theoretical justification of the rules, without explaining this in full detail. We will present the rules for various types, emphasising the uniformity of the presentation. The types that we will present will include those needed to express the logical constants falsum, or, and, not, implies, for all, and there exists.

Yet another propositions-as-types tutorial.

It seems pretty easy to follow, and it is quite short (16 pages).
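
A quick cheat sheet for the propositions-as-types reading, in plain non-dependent Haskell; this only shadows the M-LTT presentation, since "for all" and "there exists" over individuals need genuine dependent types and are omitted here.

    import Data.Void (Void, absurd)

    -- falsum       ~ Void         (a type with no proofs)
    -- A and B      ~ (a, b)       (a proof of each)
    -- A or B       ~ Either a b   (a proof of one, with a tag saying which)
    -- A implies B  ~ a -> b       (a method turning proofs of A into proofs of B)
    -- not A        ~ a -> Void
    type Not a = a -> Void

    -- A sample proof term: from A and (not A), anything follows (ex falso).
    contradiction :: (a, Not a) -> b
    contradiction (x, notX) = absurd (notX x)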

P.S.

Why all the theory on LtU these days? Because the editors who keep posting are into theory. The other editors should be nudged to start posting cool hands-on stuff...

Linear types for aliased resources

Linear types for aliased resources. Chris Hawblitzel.

Type systems that track aliasing can verify state-dependent program properties. For example, such systems can verify that a program does not access a resource after deallocating the resource. The simplest way to track aliasing is to use linear types, which on the surface appear to ban the aliasing of linear resources entirely. Since banning aliasing is considered too draconian for many practical programs, researchers have proposed type systems that allow limited forms of aliasing, without losing total control over state-dependent properties. This paper describes how to encode one such system, the capability calculus, using a type system based on plain linear types with no special support for aliasing. Given well-typed capability calculus source programs, the encodings produce well-typed target programs based on linear types. These encodings demonstrate that, contrary to common expectations, linear type systems can express aliasing of linear resources.

The prose is slightly confusing, but the code examples help.

Sections 1 & 2 should be enough for those who only want to understand the problem with aliasing, and get a glimpse of what linear types are.
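
As a rough illustration of "plain linear types with no special support for aliasing", here is a tiny sketch of my own using GHC's LinearTypes extension (GHC 9.0+), not the paper's capability-calculus encoding: the linear arrow forces each handle to be consumed exactly once, so a closed handle simply cannot be mentioned again.

    {-# LANGUAGE LinearTypes #-}

    data Handle = Handle Int    -- hypothetical resource wrapper

    open :: Int -> Handle
    open n = Handle n

    -- Consumes the handle once and hands back the (single) live handle,
    -- so callers must keep threading it.
    write :: Handle %1 -> Handle
    write h = h

    -- Consumes the handle for good; returning the descriptor keeps the
    -- linear field from being silently discarded.
    close :: Handle %1 -> Int
    close (Handle fd) = fd

    -- Inside a function that receives the handle linearly, the checker
    -- insists it is used exactly once:
    session :: Handle %1 -> Int
    session h = close (write (write h))
    -- session h = close h + close h   -- rejected: h used twice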

Number-Parameterized Types by Oleg Kiselyov

Number-Parameterized Types by Oleg Kiselyov

from the abstract:

This paper describes practical programming with types parameterized by numbers: e.g., an array type parameterized by the array's size or a modular group type Zn parameterized by the modulus. An attempt to add, for example, two integers of different moduli should result in a compile-time error with a clear error message. Number-parameterized types let the programmer capture more invariants through types and eliminate some run-time checks.

Oleg shows several approaches to encoding numbers into types and using those numbers to check list lengths and matching sizes for matrices and vectors. Oleg also points out connections to dependent types, phantom types, and shape-invariant programming.
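
The flavour of the technique in today's GHC Haskell (Oleg's paper predates these extensions and builds the number encoding by hand, but the invariants captured are the same kind): the length of a vector is part of its type, so size mismatches die at compile time and the head of a statically non-empty vector needs no run-time check.

    {-# LANGUAGE DataKinds, GADTs, KindSignatures, TypeFamilies #-}

    data Nat = Z | S Nat

    data Vec (n :: Nat) a where
      VNil  :: Vec 'Z a
      VCons :: a -> Vec n a -> Vec ('S n) a

    type family Add (m :: Nat) (n :: Nat) :: Nat where
      Add 'Z     n = n
      Add ('S m) n = 'S (Add m n)

    -- Appending vectors adds their lengths in the type.
    append :: Vec m a -> Vec n a -> Vec (Add m n) a
    append VNil         ys = ys
    append (VCons x xs) ys = VCons x (append xs ys)

    -- Total head: only applicable to vectors the type shows are non-empty.
    vhead :: Vec ('S n) a -> a
    vhead (VCons x _) = x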

A Type Discipline for Authorization Policies

Cédric Fournet, Andrew D. Gordon, and Sergio Maffeis. A Type Discipline for Authorization Policies. ESOP 2005.

Distributed systems and applications are often expected to enforce high-level authorization policies. To this end, the code for these systems relies on lower-level security mechanisms such as, for instance, digital signatures, local ACLs, and encrypted communications. In principle, authorization specifications can be separated from code and carefully audited. Logic programs, in particular, can express policies in a simple, abstract manner. We consider the problem of checking whether a distributed implementation based on communication channels and cryptography complies with a logical authorization policy. We formalize authorization policies and their connection to code by embedding logical predicates and claims within a process calculus. We formulate policy compliance operationally by composing a process model of the distributed system with an arbitrary opponent process. Moreover, we propose a new dependent type system for verifying policy compliance of implementation code. Using Datalog as an authorization logic, we show how to type several examples using policies and present a general schema for compiling policies.

Another "extreme" use of static typing...

Guarded Induction and Weakly Final Coalgebras in Dependent Type Theory

Quite a mouthful; I wonder if it would attract more people if it were titled "Interactive programming in Epigram" :-)

Guarded Induction and Weakly Final Coalgebras in Dependent Type Theory

In this article we are going to explore one approach to the representation of interactive programs in dependent type theory. The approach we take centres on the concept of "monadic IO" used in functional programming. We will see that in dependent type theory, besides non-dependent interactive programs in which the interface between the user and the real world is fixed, as in ordinary functional programming, there is a natural notion of state-dependent interactive programs, in which the interface changes over time.
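
A rough Haskell rendering of the non-dependent case (the paper works in dependent type theory, where the interface may change with the state): an interactive program is a possibly infinite tree of commands and response-indexed continuations, and guarded corecursion, with every recursive call sitting under a constructor, is what keeps such definitions productive.

    -- c: commands sent to the world; r: responses received; a: final result.
    data IOTree c r a
      = Return a
      | Do c (r -> IOTree c r a)

    data Cmd  = GetLine | PutLine String
    type Resp = String            -- response to GetLine; PutLine's is ignored

    -- An interactive echo loop, defined by guarded corecursion: the
    -- recursive occurrence of echo is guarded by the Do constructor.
    echo :: IOTree Cmd Resp a
    echo = Do GetLine (\s -> Do (PutLine s) (\_ -> echo))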