Type Theory

Natural Deduction for Intuitionistic Non-Commutative Linear Logic

Natural Deduction for Intuitionistic Non-Commutative Linear Logic, Jeff Polakow and Frank Pfenning. TLCA 1999.

Intuitionistic logic captures functional programming in a logical way, as can be seen from the Curry-Howard isomorphism between constructive proofs and functional programs. However, there are many structural properties of programs that are not captured within the intuitionistic framework, such as resource usage, computational complexity, and sequentiality. Intuitionistic linear logic can be thought of as a refinement of intuitionistic logic in which the resource consumption properties of functions can be expressed internally. Here, we further refine it to allow the expression of sequencing of computations. We achieve this by controlling the use of the structural rule of exchange to arrive at intuitionistic non-commutative linear logic.

My earlier post on linguistics reminded me of the Lambek calculus, an ordered logic invented in 1958(!) to model the parsing of sentences. So I wanted to find a paper on ordered logic (i.e., a logic in which you can't freely swap the order of hypotheses in a context) and link to that.
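
For a concrete feel of what dropping exchange buys, here is the textbook Lambek-calculus treatment of a three-word sentence; the category assignments are the standard linguistic example, not taken from the Polakow-Pfenning paper. A transitive verb gets category (NP\S)/NP: it looks to its right for an NP, and the result looks to its left for an NP.

    John : NP        likes : (NP\S)/NP        Mary : NP

    likes Mary      : NP\S    (apply (NP\S)/NP to the NP on its right)
    John likes Mary : S       (apply NP\S to the NP on its left)

Because the hypotheses form a sequence rather than a multiset, no derivation assigns S to a scrambled order such as "likes John Mary"; with the exchange rule admitted, every permutation would type-check, and word order would not be modeled at all.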

Shape Analysis with Structural Invariant Checkers

Shape Analysis with Structural Invariant Checkers, Bor-Yuh Evan Chang, Xavier Rival, and George C. Necula. SAS 2007.

Developer-supplied data structure specifications are important to shape analyses, as they tell the analysis what information should be tracked in order to obtain the desired shape invariants. We observe that data structure checking code (e.g., used in testing or dynamic analysis) provides shape information that can also be used in static analysis. In this paper, we propose a lightweight, automatic shape analysis based on these developer-supplied structural invariant checkers. In particular, we set up a parametric abstract domain, which is instantiated with such checker specifications to summarize memory regions using both notions of complete and partial checker evaluations. The analysis then automatically derives a strategy for canonicalizing or weakening shape invariants.

I saw a talk by Evan, and thought that this approach is fundamentally the right way to go: the proper sharing behavior of heap data is integral to the design, yet it appears nowhere in the code itself. So programmers should be able to supply this information to their analyses.
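
To make "checking code as specification" concrete, here is a hedged Haskell model of the idea; the paper's checkers are written in a C-like notation over real heaps, and Heap, Cell, and list below are invented for illustration. Note how the checker consumes each cell it visits, so cycles and sharing make it fail: exactly the kind of structural fact the analysis lifts to the abstract level.

    import qualified Data.Map as Map

    type Addr = Int                         -- 0 plays the role of NULL
    data Cell = Cell { val :: Int, next :: Addr }
    type Heap = Map.Map Addr Cell

    -- Developer-supplied structural invariant checker: "a points to a
    -- well-formed, acyclic, null-terminated singly linked list".
    list :: Heap -> Addr -> Bool
    list _ 0 = True                         -- null pointer: the empty list
    list h a = case Map.lookup a h of
                 Nothing -> False           -- dangling pointer
                 Just c  -> list (Map.delete a h) (next c)
                            -- deleting a means each cell is used at most
                            -- once, so cycles and sharing are rejected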

Compositional type systems for stack-based low-level languages

Compositional type systems for stack-based low-level languages
by Ando Saabas and Tarmo Uustalu, appearing in the Proceedings of the 12th Computing: The Australasian Theory Symposium, 2006.

It is widely believed that low-level languages with jumps must be difficult to reason about by being inherently non-modular. We have recently argued that this is untrue and proposed a novel method for developing compositional natural semantics and Hoare logics for low-level languages and demonstrated its viability on the example of a simple low-level language with expressions (Saabas & Uustalu 2005). The central idea is to use the implicit structure of finite disjoint unions present in low-level code as an (ambiguous) phrase structure. Here we apply our method to a stack-based language and develop it further. We define a compositional natural semantics and Hoare logic for this language and then go on to show that, in addition to Hoare logics, one can also derive compositional type systems as weaker specification languages with the same method. We describe type systems for stack-error freedom and secure information flow.

I encountered this article while researching compositional type systems. I would like to hear people's thoughts about it, and about compositional type systems in general (are they interesting, esoteric, mundane?).
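
As a taste of what "compositional" means here, consider a hedged Haskell toy (mine, not the authors': their system handles jumps and is derived alongside a Hoare logic). Each straight-line code fragment gets a type of its own describing its effect on the stack shape, and sequential composition just requires the shapes to meet in the middle:

    -- Stack shapes as nested pairs; Code s t runs on a stack of shape s
    -- and leaves one of shape t.
    newtype Code s t = Code (s -> t)

    pushI :: Int -> Code s (Int, s)
    pushI n = Code (\s -> (n, s))

    add :: Code (Int, (Int, s)) (Int, s)
    add = Code (\(x, (y, s)) -> (x + y, s))

    -- Sequencing: the output shape of the first fragment must be the
    -- input shape of the second, so stack errors are ruled out by types.
    (#) :: Code s t -> Code t u -> Code s u
    Code f # Code g = Code (g . f)

    prog :: Code s (Int, s)
    prog = pushI 2 # pushI 3 # add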

Tagless Staged Interpreters for Simpler Typed Languages

Finally Tagless, Partially Evaluated: Tagless Staged Interpreters for Simpler Typed Languages.
Jacques Carette, Oleg Kiselyov, and Chung-chieh Shan.

We have built the first family of tagless interpretations for a higher-order typed object language in a typed metalanguage (Haskell or ML) that require no dependent types, generalized algebraic data types, or postprocessing to eliminate tags. The statically type-preserving interpretations include an evaluator, a compiler (or staged evaluator), a partial evaluator, and call-by-name and call-by-value CPS transformers.
Our main idea is to encode higher-order abstract syntax (HOAS) using cogen functions rather than data constructors. In other words, we represent object terms not in an initial algebra but using the coalgebraic structure of the lambda-calculus. Our representation also simulates inductive maps from types to types, which are required for typed partial evaluation and CPS transformations.

Oleg explains:

It seems like a common wisdom that an embedding of a typed object language (e.g., a DSL) into a typed meta-language, so that all and only typed object terms can be represented, requires dependent types, GADTs or other such advanced type systems. In fact, this problem of writing (tagless) type-preserving typed interpreters has been the motivation for most of the papers on GADTs. We show that regardless of the merits and conveniences of GADTs, type-preserving typed interpretation can be achieved with no GADTs whatsoever, using the very simple type systems of ML or Haskell98. We also show that the same approach lets us perform statically type-preserving partial evaluation and call-by-value or call-by-name CPS transformations. The latter transformations, too, are often claimed impossible in Haskell98 or ML, requiring instead quite advanced type systems or language features.
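
The core of the encoding fits in a few lines of Haskell98. The sketch below follows the paper's general recipe (the class is called Symantics there), though the exact method set and the printer are simplified by me: an object term is a polymorphic value usable with any interpreter, and no tags or GADTs appear anywhere.

    class Symantics repr where
      int :: Int -> repr Int
      add :: repr Int -> repr Int -> repr Int
      lam :: (repr a -> repr b) -> repr (a -> b)
      app :: repr (a -> b) -> repr a -> repr b

    -- Interpreter 1: evaluation.  R a is just a.
    newtype R a = R { unR :: a }
    instance Symantics R where
      int     = R
      add x y = R (unR x + unR y)
      lam f   = R (unR . f . R)
      app f x = R (unR f (unR x))

    -- Interpreter 2: pretty-printing, with a counter for fresh names.
    newtype S a = S { unS :: Int -> String }
    instance Symantics S where
      int n   = S (\_ -> show n)
      add x y = S (\c -> "(" ++ unS x c ++ " + " ++ unS y c ++ ")")
      lam f   = S (\c -> let v = "x" ++ show c
                         in "(\\" ++ v ++ " -> "
                            ++ unS (f (S (const v))) (c + 1) ++ ")")
      app f x = S (\c -> "(" ++ unS f c ++ " " ++ unS x c ++ ")")

    -- One term, two interpretations.
    ex :: Symantics repr => repr Int
    ex = app (lam (\x -> add x (int 1))) (int 2)
    -- unR ex   == 3
    -- unS ex 0 == "((\x0 -> (x0 + 1)) 2)"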

The complete (Meta)OCaml and Haskell code accompanying the paper is available (see readme).

One of the features of our approach is writing the DSL code in a form that can be interpreted in multiple ways. Recently we have become aware that the very same approach underlies "abstract categorial grammars" (ACG) in linguistics. Chung-chieh Shan has written an extensive article on this correspondence. That article itself can be interpreted in several ways: the file can be read as plain text, or loaded as-is into Haskell or OCaml interpreters.

It should be noted that the linguistic terms "tectogrammatics" and "phenogrammatics" were coined by none other than Haskell Curry, in his famous 1961 paper "Some Logical Aspects of Grammatical Structure". The summary of the ESSLLI workshop describes further connections to linear lambda-calculus.

The paper has been accepted for APLAS; the authors would indeed appreciate any comments.

Morphing: Safely Shaping a Class in the Image of Others

Morphing: Safely Shaping a Class in the Image of Others, Shan Shan Huang, David Zook, and Yannis Smaragdakis. ECOOP 2007.

We present MJ: a language for specifying general classes whose members are produced by iterating over members of other classes. We call this technique “class morphing” or just “morphing”. Morphing extends the notion of genericity so that not only types of methods and fields, but also the structure of a class can vary according to type variables. This offers the ability to express common programming patterns in a highly generic way that is otherwise not supported by conventional techniques. For instance, morphing lets us write generic proxies (i.e., classes that can be parameterized with another class and export the same public methods as that class); default implementations (e.g., a generic do-nothing type, configurable for any interface); semantic extensions (e.g., specialized behavior for methods that declare a certain annotation); and more. MJ’s hallmark feature is that, despite its emphasis on generality, it allows modular type checking: an MJ class can be checked independently of its uses. Thus, the possibility of supplying a type parameter that will lead to invalid code is detected early—an invaluable feature for highly general components that will be statically instantiated by other programmers.

This is related to the paper by Turon and Reppy I linked to yesterday; it's another take on compile-time metaprogramming. This time they have a static iteration over the structure of a class, which makes their safety result rather impressive. I talked to Huang, and she told me that their big-picture goal is to eventually build a full language for manipulating programs at compile time, while still preserving all the safety guarantees we expect.

Metaprogramming with Traits

Metaprogramming with Traits, Aaron Turon and John Reppy. ECOOP 2007.

In many domains, classes have highly regular internal structure. For example, so-called business objects often contain boilerplate code for mapping database fields to class members. The boilerplate code must be repeated per-field for every class, because existing mechanisms for constructing classes do not provide a way to capture and reuse such member-level structure. As a result, programmers often resort to ad hoc code generation. This paper presents a lightweight mechanism for specifying and reusing member-level structure in Java programs. The proposal is based on a modest extension to traits that we have termed trait-based metaprogramming. Although the semantics of the mechanism are straightforward, its type theory is difficult to reconcile with nominal subtyping. We achieve reconciliation by introducing a hybrid structural/nominal type system that extends Java's type system. The paper includes a formal calculus defined by translation to Featherweight Generic Java.

This paper explains how to scratch an itch I've had for a long, long time -- uniformly generating groups of fields and methods, including computation of the field/method names. Something like this would be quite useful in an ML-like language's module system, too.

CLL: A Concurrent Language Built from Logical Principles

CLL: A Concurrent Language Built from Logical Principles by Deepak Garg, 2005.

In this report, we use both the Curry-Howard isomorphism and proof-search to design a concurrent programming language from logical principles. ... Our underlying logic is a first-order intuitionistic linear logic where all right synchronous connectives are restricted to a monad.

Yet another example of using monads to embed effectful computations into a pure functional language. Another interesting part is the methodology of deriving a programming language from a logic.
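
CLL derives its constructs from linear-logic proof-search, which the snippet below does not attempt to show; this is only a hedged, generic Haskell illustration of the discipline mentioned above, confining effects (here, spawning a thread and message passing) to a monad while the rest of the program stays pure.

    import Control.Concurrent (forkIO)
    import Control.Concurrent.MVar

    -- Pure functions stay pure; communication effects live only in IO.
    square :: Int -> Int
    square n = n * n

    main :: IO ()
    main = do
      box <- newEmptyMVar
      _ <- forkIO (putMVar box (square 7))   -- effectful: spawn and send
      r <- takeMVar box                      -- effectful: receive
      print r                                -- prints 49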

Combining Total and Ad Hoc Extensible Pattern Matching in a Lightweight Language Extension

Combining Total and Ad Hoc Extensible Pattern Matching in a Lightweight Language Extension. Don Syme, Gregory Neverov and James Margetson.

Pattern matching of algebraic data types (ADTs) is a standard feature in typed functional programming languages but it is well known that it interacts poorly with abstraction. While several partial solutions to this problem have been proposed, few have been implemented or used. This paper describes an extension to the .NET language F# called "Active Patterns", which supports pattern matching over abstract representations of generic heterogeneous data such as XML and term structures, including where these are represented via object models in other .NET languages. Our design is the first to incorporate both ad hoc pattern matching functions for partial decompositions and "views" for total decompositions, and yet remains a simple and lightweight extension. We give a description of the language extension along with numerous motivating examples. Finally we describe how this feature would interact with other reasonable and related language extensions: existential types quantified at data discrimination tags, GADTs, and monadic generalizations of pattern matching.

Quite related to the recent discussions of the relationships between libraries, frameworks and language features.
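
Haskell98 has no active patterns, but the two flavors the abstract distinguishes can be approximated with ordinary functions; a hedged sketch (Seq, ViewL, and parsePair are invented names). A total "view" maps every abstract value to exactly one case of a visible one-layer datatype, while an ad hoc partial pattern is just a Maybe-returning recognizer:

    newtype Seq a = Seq [a]              -- representation kept abstract

    -- Total decomposition: every Seq matches exactly one alternative.
    data ViewL a = EmptyL | Cons a (Seq a)

    viewl :: Seq a -> ViewL a
    viewl (Seq [])     = EmptyL
    viewl (Seq (x:xs)) = Cons x (Seq xs)

    sumSeq :: Seq Int -> Int
    sumSeq s = case viewl s of
                 EmptyL    -> 0
                 Cons x xs -> x + sumSeq xs

    -- Partial, ad hoc pattern: succeeds on "key=value" strings only.
    parsePair :: String -> Maybe (String, String)
    parsePair s = case break (== '=') s of
                    (k, '=' : v) -> Just (k, v)
                    _            -> Nothing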

From abstract interpretation to small-step typing

Patrick Cousot is well known for abstract interpretation, which among other applications offers a general approach to understanding and designing type systems. It thus surprises me that his work seems to have been discussed only cursorily on LtU.

Patrick Cousot and Radhia Cousot, Abstract interpretation: A unified lattice model for static analysis of programs by construction or approximation of fixpoints. POPL 1977, 238-252.

A program denotes computations in some universe of objects. Abstract interpretation of programs consists in using that denotation to describe computations in another universe of abstract objects, so that the results of abstract execution give some information about the actual computations. An intuitive example (which we borrow from Sintzoff) is the rule of signs. The text -1515*17 may be understood to denote computations on the abstract universe {(+), (-), (+-)} where the semantics of arithmetic operators is defined by the rule of signs. The abstract execution -1515*17 ==> -(+)*(+) ==> (-)*(+) ==> (-) proves that -1515*17 is a negative number. Abstract interpretation is concerned with a particular underlying structure of the usual universe of computations (the sign, in our example). It gives a summary of some facets of the actual executions of a program. In general this summary is simple to obtain but inaccurate (e.g. -1515+17 ==> -(+)+(+) ==> (-)+(+) ==> (+-)). Despite its fundamentally incomplete results, abstract interpretation allows the programmer or the compiler to answer questions which do not need full knowledge of program executions or which tolerate an imprecise answer (e.g. partial correctness proofs of programs ignoring termination, type checking, program optimizations which are not carried out in the absence of certainty about their feasibility, ...).
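
The rule-of-signs domain from this abstract is small enough to write down completely; here is a direct Haskell transcription (the datatype and names are mine), with Unknown playing the role of (+-):

    data Sign = Plus | Minus | Unknown      -- (+), (-), (+-)
      deriving (Eq, Show)

    negS :: Sign -> Sign
    negS Plus    = Minus
    negS Minus   = Plus
    negS Unknown = Unknown

    mulS :: Sign -> Sign -> Sign
    mulS Unknown _ = Unknown
    mulS _ Unknown = Unknown
    mulS x y       = if x == y then Plus else Minus

    addS :: Sign -> Sign -> Sign
    addS Plus  Plus  = Plus
    addS Minus Minus = Minus
    addS _     _     = Unknown

    -- -1515 * 17 abstracts to mulS (negS Plus) Plus == Minus: provably negative.
    -- -1515 + 17 abstracts to addS (negS Plus) Plus == Unknown: sound but
    -- imprecise, exactly the incompleteness the abstract mentions.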

Patrick Cousot, Types as abstract interpretations. POPL 1997, 316-331.

Starting from a denotational semantics of the eager untyped lambda-calculus with explicit runtime errors, the standard collecting semantics is defined as specifying the strongest program properties. By a first abstraction, a new sound type collecting semantics is derived in compositional fixpoint form. Then by successive (semi-dual) Galois connection based abstractions, type systems and/or type inference algorithms are designed as abstract semantics or abstract interpreters approximating the type collecting semantics. This leads to a hierarchy of type systems, which is part of the lattice of abstract interpretations of the untyped lambda-calculus. This hierarchy includes two new à la Church/Curry polytype systems. Abstractions of this polytype semantics lead to classical Milner/Mycroft and Damas/Milner polymorphic type schemes, Church/Curry monotypes and Hindley principal typing algorithm. This shows that types are abstract interpretations.

I started learning more about abstract interpretation for two reasons. First, abstract interpretation underlies CiaoPP, a program preprocessor mentioned here that uses the same assertion language for static and dynamic checking and optimization. Second, on our way to a logical interpretation of delimited continuations, Oleg and I built a type system using what we call small-step abstract interpretation.

Manuel Hermenegildo, Germán Puebla, and Francisco Bueno, Using global analysis, partial specifications, and an extensible assertion language for program validation and debugging. In The logic programming paradigm: A 25-year perspective, ed. Krzysztof R. Apt, Victor W. Marek, Mirek Truszczynski, and David S. Warren, 161-192, 1999. Berlin: Springer-Verlag.

We present a framework for the application of abstract interpretation as an aid during program development, rather than in the more traditional application of program optimization. Program validation and detection of errors is first performed statically by comparing (partial) specifications written in terms of assertions against information obtained from static analysis of the program. The results of this process are expressed in the user assertion language. Assertions (or parts of assertions) which cannot be verified statically are translated into run-time tests. The framework allows the use of assertions to be optional. It also allows using very general properties in assertions, beyond the predefined set understandable by the static analyzer and including properties defined by means of user programs. We also report briefly on an implementation of the framework. The resulting tool generates and checks assertions for Prolog, CLP(R), and CHIP/CLP(fd) programs, and integrates compile-time and run-time checking in a uniform way. The tool allows using properties such as types, modes, non-failure, determinacy, and computational cost, and can treat modules separately, performing incremental analysis. In practice, this modularity allows statically detecting bugs in user programs even when they do not contain any assertions.

Oleg Kiselyov and Chung-chieh Shan, A substructural type system for delimited continuations. TLCA 2007.

We propose type systems that abstractly interpret small-step rather than big-step operational semantics. We treat an expression or evaluation context as a structure in a linear logic with hypothetical reasoning. Evaluation order is not only regulated by familiar focusing rules in the operational semantics, but also expressed by structural rules in the type system, so the types track control flow more closely. Binding and evaluation contexts are related, but the latter are linear.

We use these ideas to build a type system for delimited continuations. It lets control operators change the answer type or act beyond the nearest dynamically-enclosing delimiter, yet needs no extra fields in judgments and arrow types to record answer types. The typing derivation of a direct-style program desugars it into continuation-passing style.
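
For readers who have not met delimited control: the operators this type system governs behave like shift and reset, which Haskell exposes via Control.Monad.Trans.Cont. The snippet below shows only their basic operation; note that Cont fixes a single answer type r, whereas the paper's system types the answer-type-changing uses that this encoding cannot express.

    import Control.Monad.Trans.Cont (evalCont, reset, shift)

    -- reset delimits the context; shift captures the continuation up to
    -- the nearest enclosing reset as an ordinary function k.
    ex :: Int
    ex = evalCont $ reset $ do
           n <- shift (\k -> return (k (k 10)))   -- here k is \n -> n + 1
           return (n + 1)
    -- k 10 == 11, k 11 == 12, so ex == 12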

PCF and LCF

The theory LCF, for Logic of Computable Functions, was proposed by Dana Scott as a theory in the spirit of Church's simple theory of types, which allows the computable functions to be characterised and which comes with a rich enough logical theory to be analytically useful.

PCF, the Programming with Computable Functions toy language, was proposed by Robin Milner in an exposition of Scott's work on LCF.
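
For reference, PCF's core is tiny: the simply typed lambda calculus over the natural numbers, plus a fixed-point operator. A hedged Haskell rendering of the grammar (the constructor names are mine, not Milner's notation):

    data Ty = Nat | Ty :-> Ty

    data Tm = Var String
             | Lam String Ty Tm
             | App Tm Tm
             | Zero | Succ Tm | Pred Tm
             | IfZ Tm Tm Tm           -- zero test
             | Fix Tm                 -- Y: general recursion, hence partiality

    -- Addition, PCF-style:
    --   fix (\add. \m. \n. ifz m then n else succ (add (pred m) n))
    addTm :: Tm
    addTm = Fix (Lam "add" (Nat :-> (Nat :-> Nat))
              (Lam "m" Nat (Lam "n" Nat
                (IfZ (Var "m") (Var "n")
                     (Succ (App (App (Var "add") (Pred (Var "m"))) (Var "n")))))))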

The two formalisms have been enormously fruitful in theoretical computer science, and I've noticed that two of the early seminal papers are available as scanned PDFs:

Dana Scott's original 1969 manuscript, A theory of computable functions of higher type was, I believe, only ever circulated informally, and I've never got hold of the manuscript in any form.
