Type Theory

Linear types for aliased resources

Linear types for aliased resources. Chris Hawblitzel.

Type systems that track aliasing can verify state-dependent program properties. For example, such systems can verify that a program does not access a resource after deallocating the resource. The simplest way to track aliasing is to use linear types, which on the surface appear to ban the aliasing of linear resources entirely. Since banning aliasing is considered too draconian for many practical programs, researchers have proposed type systems that allow limited forms of aliasing, without losing total control over state-dependent properties. This paper describes how to encode one such system, the capability calculus, using a type system based on plain linear types with no special support for aliasing. Given well-typed capability calculus source programs, the encodings produce well-typed target programs based on linear types. These encodings demonstrate that, contrary to common expectations, linear type systems can express aliasing of linear resources.

The prose is slightly confusing, but the code examples help.

Sections 1 & 2 should be enough for those who only want to understand the problem with aliasing, and get a glimpse of what linear types are.
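To make the "no aliasing" discipline concrete, here is a minimal sketch of the linear-types idea, written against GHC's much later LinearTypes extension rather than the paper's formalism (acquire, release, and Token are names invented for the example). A resource token must be consumed exactly once, so using it after release is a type error:

{-# LANGUAGE LinearTypes #-}

-- A token standing for an open resource.
data Token = Token

-- Hand a fresh token to a continuation that must consume it exactly once.
acquire :: (Token %1 -> r) -> r
acquire k = k Token

-- Consuming the token "deallocates" the resource.
release :: Token %1 -> ()
release Token = ()

ok :: ()
ok = acquire (\t -> release t)   -- t consumed exactly once: accepted

-- bad = acquire (\t -> case release t of () -> release t)
--   rejected: t would be consumed twice (a use-after-free)

The point of the paper is that controlled aliasing can be recovered on top of exactly this kind of discipline, without extending the type system.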

Number-Parameterized Types by Oleg Kiselyov

Number-Parameterized Types by Oleg Kiselyov

from the abstract:

This paper describes practical programming with types parameterized by numbers: e.g., an array type parameterized by the array's size or a modular group type Zₙ parameterized by the modulus. An attempt to add, for example, two integers of different moduli should result in a compile-time error with a clear error message. Number-parameterized types let the programmer capture more invariants through types and eliminate some run-time checks.

Oleg shows several approaches towards encoding numbers into types and using those numbers to check list length, matching sizes for matrices or vectors. Oleg also points out connections to dependent types, phantom types, and shape-invariant programming.
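For a taste of the idea in more modern clothes, here is a hypothetical sketch using GHC's GADTs and DataKinds extensions (the paper predates these and encodes its numerals with ordinary types and classes): a list type indexed by its length, where zipping two lists of different sizes fails to type-check.

{-# LANGUAGE DataKinds, GADTs, KindSignatures #-}

data Nat = Z | S Nat

-- A list indexed by its length.
data Vec (n :: Nat) a where
  VNil  :: Vec 'Z a
  VCons :: a -> Vec n a -> Vec ('S n) a

-- Both arguments must share the same length n; a mismatch is a
-- compile-time error, not a run-time check.
vzip :: Vec n a -> Vec n b -> Vec n (a, b)
vzip VNil         VNil         = VNil
vzip (VCons x xs) (VCons y ys) = VCons (x, y) (vzip xs ys)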

A Type Discipline for Authorization Policies

Cedric Fournet; Andrew D. Gordon; Sergio Maffeis. A Type Discipline for Authorization Policies. ESOP 2005.

Distributed systems and applications are often expected to enforce high-level authorization policies. To this end, the code for these systems relies on lower-level security mechanisms such as, for instance, digital signatures, local ACLs, and encrypted communications. In principle, authorization specifications can be separated from code and carefully audited. Logic programs, in particular, can express policies in a simple, abstract manner. We consider the problem of checking whether a distributed implementation based on communication channels and cryptography complies with a logical authorization policy. We formalize authorization policies and their connection to code by embedding logical predicates and claims within a process calculus. We formulate policy compliance operationally by composing a process model of the distributed system with an arbitrary opponent process. Moreover, we propose a new dependent type system for verifying policy compliance of implementation code. Using Datalog as an authorization logic, we show how to type several examples using policies and present a general schema for compiling policies.

Another "extreme" use of static typing...

Guarded Induction and Weakly Final Coalgebras in Dependent Type Theory

Quite a mouthful; I wonder if it would attract more people if it were titled "Interactive programming in Epigram" :-)

Guarded Induction and Weakly Final Coalgebras in Dependent Type Theory

In this article we are going to explore one approach to the representation of interactive programs in dependent type theory. The approach we take centres on the concept of "monadic IO" used in functional programming. We will see that in dependent type theory, besides non-dependent interactive programs in which the interface between the user and the real world is fixed, as in ordinary functional programming, there is a natural notion of state-dependent interactive programs, in which the interface changes over time.
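In its non-dependent form, such an interactive program is a command-response tree: at each step it either stops with a result or issues a command and waits for a response. A rough Haskell analogue (names invented here; the state-dependent version is where dependent types genuinely earn their keep) looks like:

-- c is the type of commands, r the type of responses.
data Interact c r a
  = Return a
  | Do c (r -> Interact c r a)

-- Sequencing makes this a monad, recovering ordinary "monadic IO".
bind :: Interact c r a -> (a -> Interact c r b) -> Interact c r b
bind (Return a) k = k a
bind (Do c h)   k = Do c (\resp -> bind (h resp) k)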

Automatic type inference via partial evaluation

Automatic type inference via partial evaluation. Aaron Tomb, Cormac Flanagan. PPDP’05.

Type checking and type inference are fundamentally similar problems. However, the algorithms for performing the two operations, on the same type system, often differ significantly. The type checker is typically a straightforward encoding of the original type rules. For many systems, type inference is performed using a two-phase, constraint-based algorithm. We present an approach that, given the original type rules written as clauses in a logic programming language, automatically generates an efficient, two-phase, constraint-based type inference algorithm. Our approach works by partially evaluating the type checking rules with respect to the target program to yield a set of constraints suitable for input to an external constraint solver. This approach avoids the need to manually develop and verify a separate type inference algorithm, and is ideal for experimentation with and rapid prototyping of novel type systems.

Also somewhat relevant to the discussions here about type checking as abstract interpretation.
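A toy version of the two-phase idea (my sketch, nothing like the paper's logic-programming machinery): specializing the checking rules of the simply typed lambda calculus to a particular term leaves behind equality constraints over type variables, which an external solver can then unify.

data Ty = TVar Int | TInt | Ty :-> Ty deriving Show
data Tm = Lit Int | Var String | Lam String Tm | App Tm Tm

type Constraint = (Ty, Ty)   -- an equation for the external solver

-- Phase one: walk the term, emitting constraints; n supplies fresh
-- type variables.
infer :: [(String, Ty)] -> Tm -> Int -> (Ty, [Constraint], Int)
infer _   (Lit _) n = (TInt, [], n)
infer env (Var x) n = case lookup x env of
  Just t  -> (t, [], n)
  Nothing -> error ("unbound variable: " ++ x)
infer env (Lam x b) n =
  let arg = TVar n
      (bodyT, cs, n') = infer ((x, arg) : env) b (n + 1)
  in (arg :-> bodyT, cs, n')
infer env (App f e) n =
  let (fT, cs1, n1) = infer env f n
      (eT, cs2, n2) = infer env e n1
      res = TVar n2
  in (res, (fT, eT :-> res) : cs1 ++ cs2, n2 + 1)

Phase two is just solving the returned equations.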

A Typed, Compositional Logic for a Stack-Based Abstract Machine

A Typed, Compositional Logic for a Stack-Based Abstract Machine. Nick Benton. MSR-TR-2005-84. June 2005.

We define a compositional program logic in the style of Floyd and Hoare for a simple, typed, stack-based abstract machine with unstructured control flow, global variables and mutually recursive procedure calls. Notable features of the logic include a careful treatment of auxiliary variables and quantification and the use of substructural typing to permit local, modular reasoning about program fragments. Semantic soundness is established using an interpretation of types and assertions defined by orthogonality with respect to sets of contexts.

Also related to PCC, TAL, etc.
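For orientation, the kind of machine in question looks roughly like this miniature (a hypothetical instruction set, not Benton's): unstructured instructions acting on a stack, to which the logic then assigns pre- and post-conditions.

data Instr = Push Int | Add | Dup | Pop

-- One machine step; Nothing models getting stuck (stack underflow).
step :: Instr -> [Int] -> Maybe [Int]
step (Push n) st           = Just (n : st)
step Add      (x : y : st) = Just (x + y : st)
step Dup      (x : st)     = Just (x : x : st)
step Pop      (_ : st)     = Just st
step _        _            = Nothing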

A Formulae-as-Types Interpretation of Subtractive Logic

A Formulae-as-Types Interpretation of Subtractive Logic

We present a formulae-as-types interpretation of Subtractive Logic (i.e. bi-intuitionistic logic). This presentation is two-fold: we first define a very natural restriction of the lambda-μ-calculus which is closed under reduction and whose type system is a constructive restriction of the Classical Natural Deduction. Then we extend this deduction system conservatively to Subtractive Logic. From a computational standpoint, the resulting calculus provides a type system for first-class coroutines (a restricted form of first-class continuations).

Yet another connection between subtractive logic and control. I remember the author being mentioned on LtU, but I cannot find any citations.

Generics are a mistake?

Generics Considered Harmful

Ken Arnold, "programmer and author who helped create Jini, JavaSpaces, Curses, and Rogue", writes that the usefulness of generics is outweighed by their complexity. Ken is talking about Java 5, but such critiques are well-known for C++, and C# is not immune either. Ken describes the Java case as follows:

So, I don't know how to ease into this gently. So I'll just spit it out.

Generics are a mistake.

This is not a problem based on technical disagreements. It's a fundamental language design problem.

Any feature added to any system has to pass a basic test: If it adds complexity, is the benefit worth the cost? The more obscure or minor the benefit, the less complexity it's worth. Sometimes this is referred to with the name "complexity budget". A design should have a complexity budget to keep its overall complexity under control.

[...]

Which brings up the problem that I always cite for C++: I call it the "Nth order exception to the exception rule." It sounds like this: "You can do x, except in case y, unless y does z, in which case you can if ..."

Humans can't track this stuff. They are always losing which exception to what other exception applies (or doesn't) in any given case.

[...]

Without [an explicit complexity] budget, it feels like the JSR process ran far ahead, without a step back to ask "Is this feature really necessary?" It seemed to just be understood that it was necessary.

It was understood wrong.

The article contains a few simple supporting examples, including the interesting definition of Java 5's Enum type as:

Enum<T extends Enum<T>>

...which "we're assured by the type theorists ... we should simply not think about too much, for which we are grateful."

If we accept the article's premise, here's a question with an LtU spin: do the more elegant, tractable polymorphic type systems with inference, as found in functional languages, improve on this situation enough to be a viable alternative that could address these complexity problems? In other words, are these problems a selling point for better type systems, or another barrier to adoption?
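For comparison, a Hindley-Milner language infers the most general polymorphic type with no annotations at all; in Haskell, for instance:

-- No signature needed; GHC infers the most general type,
--   pairWith :: (a -> b) -> [a] -> [(a, b)]
pairWith f xs = [(x, f x) | x <- xs]

Whether that kind of economy would survive contact with Java-scale subtyping and use-site variance is, of course, part of the question.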

[Thanks to Perry Metzger for the pointer.]

TypeCase: A Design Pattern for Type-Indexed Functions

Bruno C. d. S. Oliveira and Jeremy Gibbons. TypeCase: A Design Pattern for Type-Indexed Functions. Submitted for publication, June 2005.

A type-indexed function is a function that is defined for each member of some family of types. Haskell's type class mechanism provides open type-indexed functions, in which the indexing family can be extended at any time by defining a new type class instance. The purpose of this paper is to present TypeCase: a design pattern that allows the definition of closed type-indexed functions. It is inspired by Cheney and Hinze's work on lightweight approaches to generic programming. We generalise their techniques as a design pattern. Furthermore, we show that type-indexed functions with type-indexed types, and consequently generic functions with generic types, can also be encoded in a lightweight manner, thereby overcoming one of the main limitations of the lightweight approaches.

A new paper in a field we follow quite closely (i.e., generic programming).

The paper starts with a useful summary of important previous results, which is worth reading even if you don't plan on studying the whole paper.
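The flavour of the pattern, in a GADT-style rendering (the paper itself works with ordinary type classes, so treat this as a paraphrase): a closed family of type representations, and functions defined by cases over it.

{-# LANGUAGE GADTs #-}

-- A closed family of type representations.
data Rep a where
  RInt  :: Rep Int
  RBool :: Rep Bool
  RList :: Rep a -> Rep [a]

-- A closed type-indexed function: one clause per type.
count :: Rep a -> a -> Int
count RInt      _  = 1
count RBool     _  = 1
count (RList r) xs = sum (map (count r) xs)

Unlike a type class, the family cannot be extended after the fact, which is exactly what "closed" buys you.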

From shift and reset to polarized linear logic

By now, shift/reset should be as popular as call/cc was ten years ago. Some think these control operators are even more important in practice than call/cc, and should be directly supported by PLs. I believe this paper by Chung-chieh Shan will be interesting to many who love logic and the Curry-Howard isomorphism.

From shift and reset to polarized linear logic

Abstract:

Griffin pointed out that, just as the pure lambda-calculus corresponds to intuitionistic logic, a lambda-calculus with first-class continuations corresponds to classical logic. We study how first-class delimited continuations, in the form of Danvy and Filinski’s shift and reset operators, can also be logically interpreted. First, we refine Danvy and Filinski’s type system for shift and reset to distinguish between pure and impure functions. This refinement not only paves the way for answer type polymorphism, which makes more terms typable, but also helps us invert the continuation-passing-style (CPS) transform: any pure lambda-term with an appropriate type is beta-eta-equivalent to the CPS transform of some shift-reset expression. We conclude that the lambda-calculus with shift and reset and the pure lambda-calculus have the same logical interpretation, namely good old intuitionistic logic. Second, we mix delimited continuations with undelimited ones. Informed by the preceding conclusion, we translate the lambda-calculus with shift and reset into a polarized variant of linear logic that integrates classical and intuitionistic reasoning. Extending previous work on the lambda-μ-calculus, this unifying intermediate language expresses computations with and without control effects, on delimited and undelimited continuations, in call-by-value and call-by-name settings.
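For readers who have not met the operators: reset delimits a context, and shift captures exactly that delimited context as an ordinary function. A small example using the Cont monad from the transformers package:

import Control.Monad.Trans.Cont (evalCont, reset, shift)

-- Inside reset, the context of the shift is (1 +); shift captures it
-- as k and applies it twice, so the whole expression evaluates to 12.
example :: Int
example = evalCont $ reset $ do
  x <- shift (\k -> return (k (k 10)))
  return (1 + x)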