Type Theory

Type Systems as Macros

Type Systems as Macros, by Stephen Chang, Alex Knauth, Ben Greenman:

We present TURNSTILE, a metalanguage for creating typed embedded languages. To implement the type system, programmers write type checking rules resembling traditional judgment syntax. To implement the semantics, they incorporate elaborations into these rules. TURNSTILE critically depends on the idea of linguistic reuse. It exploits a macro system in a novel way to simultaneously type check and rewrite a surface program into a target language. Reusing a macro system also yields modular implementations whose rules may be mixed and matched to create other languages. Combined with typical compiler and runtime reuse, TURNSTILE produces performant typed embedded languages with little effort.

This looks pretty awesome considering it's not limited to simply typed languages, but extends all the way to System F and F-omega! Even better, they can reuse previous type systems to define new ones, thereby reducing the effort to implement more expressive type systems. All code and further details are available here, and here's a blog post where Ben Greenman further discusses the related "type tailoring"; of course, both of these are directly related to Active Libraries.
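
To make the core idea concrete, here is a minimal sketch in OCaml rather than Racket (Turnstile itself is a Racket metalanguage, and all names below are illustrative): each clause of the elaborator plays the role of one typing rule, checking a surface term and rewriting it into the target language in a single traversal.

    (* Object-language types. *)
    type ty = TInt | TArrow of ty * ty

    (* Surface language: annotated lambdas, application, literals. *)
    type surface =
      | SVar of string
      | SLam of string * ty * surface
      | SApp of surface * surface
      | SInt of int

    (* Target language: no annotations left; checking already happened. *)
    type target =
      | TVar of string
      | TLam of string * target
      | TApp of target * target
      | TLit of int

    exception Type_error of string

    (* One clause per typing rule: the premises recurse, and the
       conclusion both reports a type and emits the rewritten term. *)
    let rec elab env : surface -> target * ty = function
      | SInt n -> (TLit n, TInt)
      | SVar x ->
          (try (TVar x, List.assoc x env)
           with Not_found -> raise (Type_error ("unbound " ^ x)))
      | SLam (x, t, body) ->
          let body', tb = elab ((x, t) :: env) body in
          (TLam (x, body'), TArrow (t, tb))
      | SApp (f, a) ->
          (match elab env f, elab env a with
           | (f', TArrow (t1, t2)), (a', ta) when t1 = ta ->
               (TApp (f', a'), t2)
           | _ -> raise (Type_error "application mismatch"))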

Taken to its extreme, why not have an assembler with a powerful macro system of this sort as your host language, with every high-level language built on top of it? I'm not sure this approach would extend that far, but it's an interesting idea. You'd need a cpp-like, highly portable macro tool, and porting to a new platform would consist of writing architecture-specific macros for some core language, like System F.

This work may also conceptually dovetail with another thread discussing fexprs and compilation.

Contextual isomorphisms

Contextual Isomorphisms
Paul Blain Levy

What is the right notion of "isomorphism" between types, in a simple type theory? The traditional answer is: a pair of terms that are inverse, up to a specified congruence. We firstly argue that, in the presence of effects, this answer is too liberal and needs to be restricted, using Führmann’s notion of thunkability in the case of value types (as in call-by-value), or using Munch-Maccagnoni’s notion of linearity in the case of computation types (as in call-by-name). Yet that leaves us with different notions of isomorphism for different kinds of type.

This situation is resolved by means of a new notion of “contextual” isomorphism (or morphism), analogous at the level of types to contextual equivalence of terms. A contextual morphism is a way of replacing one type with the other wherever it may occur in a judgement, in a way that is preserved by the action of any term with holes. For types of pure λ-calculus, we show that a contextual morphism corresponds to a traditional isomorphism. For value types, a contextual morphism corresponds to a thunkable isomorphism, and for computation types, to a linear isomorphism.

This paper is based on a very simple idea that everyone familiar with type systems can enjoy. It then applies that idea in a technical setting where it makes a useful contribution. I suspect that many readers will find the second part too technical, but they may still enjoy the idea itself. To facilitate this, I will rephrase the abstract above in a way that I hope makes it accessible to a larger audience.

The problem that the paper solves is: how do we know what it means for two types to be equivalent? For many languages there are reasonable definitions of equivalence (such as: there exists a bijection between the two types that is definable in the language), but for more advanced languages these definitions break down. For example, in the presence of hidden mutable state, one can build a pair of functions from the one-element type unit to the two-element type bool and back that are the identity when composed together -- the usual definition of a bijection -- even though these two types should probably not be considered "the same". Those two functions share some hidden state, so they "cheat". Other, more complex notions of type equivalence have been given in the literature, but how do we know whether they are the "right" notions, or whether they may disappoint us in the same way?
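
To make the "cheating" concrete, here is that pair of functions sketched in OCaml; the hidden reference cell is what lets them compose to the identity:

    (* A hidden cell shared by both directions of the "bijection". *)
    let cell = ref false

    let of_bool (b : bool) : unit = cell := b   (* stash b in hidden state *)
    let to_bool () : bool = !cell               (* recover it *)

    (* to_bool (of_bool b) always returns b, so the pair looks like a
       bijection between bool and unit -- by smuggling the bit through
       [cell] rather than through the unit value itself. *)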

To define what it means for two program fragments to be equivalent, we have a "gold standard", which is contextual equivalence: two program fragments are contextually equivalent if we can replace one with the other in any complete program without changing its behavior. This is simple to state, it is usually clear how to instantiate the definition for a new system, and it gives a satisfying notion of equivalence. It may not be the most convenient one to work with, so people define other, more specific notions of equivalence (typically beta-eta-equivalence or logical relations); it is fine if those are more sophisticated, and their definitions harder to justify or understand, because they can always be compared to this simple definition to gain confidence.

The simple idea in the paper above is to use this exact same trick to define what it means for two types to be equivalent. Naively, one could say that two types are equivalent if, in any well-typed program, one can replace some occurrences of the first type by occurrences of the second type, all other things being unchanged. This does not quite work, as changing the types that appear in a program without changing its terms would create ill-typed terms. So instead, the paper proposes that two types are equivalent when we are told how to transform any program using the first type into a program using the second type, in a way that is bijective (invertible) and compositional -- see the paper for details.

Then, the author can validate this definition by showing that, when instantiated to languages (simple or complex) where existing notions of equivalence have been proposed, this new notion of equivalence corresponds to the previous notions.

(Readers may find that even the warmup part of the paper, sections 1 to 4 (pages 1 to 6), is rather dense, with a compactly exposed general idea and arguably a lack of concrete examples that would help understanding. Surely this terseness is in large part a consequence of strict page limits -- conference articles are the tweets of computer science research. A nice side-effect (no pun intended) is that you can observe a carefully chosen formal language at work, designed to expose the setting and perform the relevant proofs in minimal space: category theory, and in particular the concept of naturality, is the killer space-saving measure here.)

Polymorphism, subtyping and type inference in MLsub

I am very enthusiastic about the following paper: it brings new ideas and solves a problem that I did not expect to be solvable, namely usable type inference when both polymorphism and subtyping are implicit. (By "usable" here I mean that the inferred types are both compact and principal, while previous work generally had only one of those properties.)

Polymorphism, Subtyping, and Type Inference in MLsub
Stephen Dolan and Alan Mycroft

We present a type system combining subtyping and ML-style parametric polymorphism. Unlike previous work, our system supports type inference and has compact principal types. We demonstrate this system in the minimal language MLsub, which types a strict superset of core ML programs.

This is made possible by keeping a strict separation between the types used to describe inputs and those used to describe outputs, and extending the classical unification algorithm to handle subtyping constraints between these input and output types. Principal types are kept compact by type simplification, which exploits deep connections between subtyping and the algebra of regular languages. An implementation is available online.

The paper is full of interesting ideas. For example, one idea is that adding type variables to the base grammar of types -- instead of defining them by their substitution -- forces us to look at our type systems in ways that are more open to extension with new features. I would recommend looking at this paper even if you are interested in ML and type inference, but not subtyping, or in polymorphism and subtyping, but not type inference, or in subtyping and type inference, but not functional languages.
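
To give a flavor of the input/output separation, here is a toy OCaml sketch -- my own drastic simplification, not MLsub's biunification algorithm: "positive" (output) types may contain joins, "negative" (input) types may contain meets, and subtyping constraints only ever flow from an output into an input, decomposing structurally much like unification does.

    (* Positive types describe outputs; negative types describe inputs.
       Joins live only in outputs, meets only in inputs. *)
    type pos =
      | PBot                      (* least type *)
      | PInt
      | PFun of neg * pos         (* argument is an input, result an output *)
      | PJoin of pos * pos        (* least upper bound *)
    and neg =
      | NTop                      (* greatest type *)
      | NInt
      | NFun of pos * neg
      | NMeet of neg * neg        (* greatest lower bound *)

    (* Decide an "output flows into input" constraint p <= n. *)
    let rec flows (p : pos) (n : neg) : bool =
      match p, n with
      | PBot, _ | _, NTop -> true
      | PJoin (p1, p2), n -> flows p1 n && flows p2 n
      | p, NMeet (n1, n2) -> flows p n1 && flows p n2
      | PInt, NInt -> true
      | PFun (n1, p2), NFun (p1, n2) ->
          flows p1 n1 && flows p2 n2   (* contravariant in the argument *)
      | _ -> false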

This paper is also a teaser for the first author's PhD thesis, Algebraic Subtyping. There is also an implementation available.

(If you are looking for interesting work on inference of polymorphism and subtyping in object-oriented languages, I would recommend Getting F-Bounded Polymorphism into Shape by Ben Greenman, Fabian Muehlboeck and Ross Tate, 2014.)

Proving Programs Correct Using Plain Old Java Types

Proving Programs Correct Using Plain Old Java Types, by Radha Jagadeesan, Alan Jeffrey, Corin Pitcher, James Riely:

Tools for constructing proofs of correctness of programs have a long history of development in the research community, but have often faced difficulty in being widely deployed in software development tools. In this paper, we demonstrate that the off-the-shelf Java type system is already powerful enough to encode non-trivial proofs of correctness using propositional Hoare preconditions and postconditions.

We illustrate the power of this method by adapting Fähndrich and Leino’s work on monotone typestates and Myers and Qi’s closely related work on object initialization. Our approach is expressive enough to address phased initialization protocols and the creation of cyclic data structures, thus allowing for the elimination of null and the special status of constructors. To our knowledge, our system is the first that is able to statically validate standard one-pass traversal algorithms for cyclic graphs, such as those that underlie object deserialization. Our proof of correctness is mechanized using the Java type system, without any extensions to the Java language.

Not a new paper, but it provides a lightweight verification technique for some program properties that you can use right now, without waiting for integrated theorem provers or SMT solvers. Only properties that require monotone typestates can be verified, i.e., those where operations can only move the typestate "forward".

In order to achieve this, they require programmers to follow a few simple rules to avoid Java's pervasive nulls. These are roughly: don't assign null explicitly, and be sure to initialize all fields when constructing objects.
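
The paper's encoding is, of course, in Java; as a hedged OCaml analogue of the underlying trick, one can track a monotone typestate as a phantom type parameter, so the postcondition of one operation becomes the static precondition of the next (all names below are illustrative):

    (* The typestate is a phantom parameter: it never appears in the data. *)
    type 'state handle = { name : string }

    let create name : [ `Raw ] handle = { name }

    (* Initialization moves the typestate forward, and only forward. *)
    let init (h : [ `Raw ] handle) : [ `Initialized ] handle =
      { name = h.name }

    (* Reading requires the postcondition of [init] as a precondition. *)
    let read (h : [ `Initialized ] handle) : string =
      "contents of " ^ h.name

    (* read (create "f")          -- rejected by the type checker *)
    (* read (init (create "f"))   -- accepted *)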

Fully Abstract Compilation via Universal Embedding

Fully Abstract Compilation via Universal Embedding by Max S. New, William J. Bowman, and Amal Ahmed:

A fully abstract compiler guarantees that two source components are observationally equivalent in the source language if and only if their translations are observationally equivalent in the target. Full abstraction implies the translation is secure: target-language attackers can make no more observations of a compiled component than a source-language attacker interacting with the original source component. Proving full abstraction for realistic compilers is challenging because realistic target languages contain features (such as control effects) unavailable in the source, while proofs of full abstraction require showing that every target context to which a compiled component may be linked can be back-translated to a behaviorally equivalent source context.

We prove the first full abstraction result for a translation whose target language contains exceptions, but the source does not. Our translation—specifically, closure conversion of simply typed λ-calculus with recursive types—uses types at the target level to ensure that a compiled component is never linked with attackers that have more distinguishing power than source-level attackers. We present a new back-translation technique based on a deep embedding of the target language into the source language at a dynamic type. Then boundaries are inserted that mediate terms between the untyped embedding and the strongly-typed source. This technique allows back-translating non-terminating programs, target features that are untypeable in the source, and well-bracketed effects.

Potentially a promising step forward for secure multi-language runtimes. We've previously discussed security vulnerabilities caused by full abstraction failures here and here. The paper also provides a comprehensive review of the associated literature: various means of protection, back-translations, embeddings, etc.
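
As a hedged, much-simplified OCaml sketch of the "universal embedding" ingredient: target terms are embedded at a single dynamic type, and boundary functions mediate between that untyped embedding and strongly typed source values:

    (* A universal type into which terms are embedded. *)
    type dyn =
      | DNum of int
      | DFun of (dyn -> dyn)

    exception Boundary_error

    (* Boundaries between typed values and the dynamic embedding. *)
    let inj_int (n : int) : dyn = DNum n
    let proj_int : dyn -> int =
      function DNum n -> n | _ -> raise Boundary_error

    let inj_fun (f : int -> int) : dyn =
      DFun (fun d -> inj_int (f (proj_int d)))
    let proj_fun (d : dyn) : int -> int =
      match d with
      | DFun f -> fun n -> proj_int (f (inj_int n))
      | _ -> raise Boundary_error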

Set-Theoretic Types for Polymorphic Variants

Set-Theoretic Types for Polymorphic Variants by Giuseppe Castagna, Tommaso Petrucciani, and Kim Nguyễn:

Polymorphic variants are a useful feature of the OCaml language whose current definition and implementation rely on kinding constraints to simulate a subtyping relation via unification. This yields an awkward formalization and results in a type system whose behaviour is in some cases unintuitive and/or unduly restrictive.

In this work, we present an alternative formalization of polymorphic variants, based on set-theoretic types and subtyping, that yields a cleaner and more streamlined system. Our formalization is more expressive than the current one (it types more programs while preserving type safety), it can internalize some meta-theoretic properties, and it removes some pathological cases of the current implementation resulting in a more intuitive and, thus, predictable type system. More generally, this work shows how to add full-fledged union types to functional languages of the ML family that usually rely on the Hindley-Milner type system. As an aside, our system also improves the theory of semantic subtyping, notably by proving completeness for the type reconstruction algorithm.

Looks like a nice result. They integrate union types and restricted intersection types for complete type inference, which prior work on CDuce could not do. The disadvantage is that it does not admit principal types, and so inference is non-deterministic (see section 5.3.2).
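
For readers who haven't used the OCaml feature in question, here is a small example of polymorphic variants; the paper replaces the kinding-constraint machinery behind such inferred types with genuine union types and subtyping:

    (* OCaml infers: to_int : [< `A | `B of int ] -> int *)
    let to_int = function
      | `A -> 0
      | `B n -> n

    let _ = (to_int `A, to_int (`B 3))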

No value restriction is needed for algebraic effects and handlers

No value restriction is needed for algebraic effects and handlers, by Ohad Kammar and Matija Pretnar:

We present a straightforward, sound Hindley-Milner polymorphic type system for algebraic effects and handlers in a call-by-value calculus, which allows type variable generalisation of arbitrary computations, not just values. This result is surprising. On the one hand, the soundness of unrestricted call-by-value Hindley-Milner polymorphism is known to fail in the presence of computational effects such as reference cells and continuations. On the other hand, many programming examples can be recast to use effect handlers instead of these effects. Analysing the expressive power of effect handlers with respect to state effects, we claim handlers cannot express reference cells, and show they can simulate dynamically scoped state.

Looks like a nice integration of algebraic effects with plain Hindley-Milner, but one which yields some unintuitive conclusions. At least I certainly found the possibility of supporting dynamically scoped state but not reference cells surprising!

It highlights the need for some future work to support true reference cells, namely a polymorphic type and effect system to generate fresh instances.
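
For context, here is the classic example behind the value restriction that the paper shows is unnecessary for effect handlers: in ML with reference cells, generalising the type of a non-value is unsound.

    (* Inside a function body, the value restriction is visible: [r] is
       not generalised, so its type is pinned by the first use. *)
    let demo () =
      let r = ref [] in              (* r : '_weak1 list ref *)
      r := [ 1 ];                    (* now r : int list ref *)
      (* print_string (List.hd !r)      <- rejected: r can no longer be
                                           used at string list ref *)
      List.hd !r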

Type Checking Modular Multiple Dispatch with Parametric Polymorphism and Multiple Inheritance

Type Checking Modular Multiple Dispatch with Parametric Polymorphism and Multiple Inheritance by Eric Allen, Justin Hilburn, Scott Kilpatrick, Victor Luchangco, Sukyoung Ryu, David Chase, Guy L. Steele Jr.:

In previous work, we presented rules for defining overloaded functions that ensure type safety under symmetric multiple dispatch in an object-oriented language with multiple inheritance, and we showed how to check these rules without requiring the entire type hierarchy to be known, thus supporting modularity and extensibility. In this work, we extend these rules to a language that supports parametric polymorphism on both classes and functions.

In a multiple-inheritance language in which any type may be extended by types in other modules, some overloaded functions that might seem valid are correctly rejected by our rules. We explain how these functions can be permitted in a language that additionally supports an exclusion relation among types, allowing programmers to declare “nominal exclusions” and also implicitly imposing exclusion among different instances of each polymorphic type. We give rules for computing the exclusion relation, deriving many type exclusions from declared and implicit ones.

We also show how to check our rules for ensuring the safety of overloaded functions. In particular, we reduce the problem of handling parametric polymorphism to one of determining subtyping relationships among universal and existential types. Our system has been implemented as part of the open-source Fortress compiler.

Fortress was briefly covered here a couple of times, as were multimethods and multiple dispatch, but this paper really generalizes and nicely summarizes previous work on statically typed modular multimethods, and does a good job of explaining the typing rules in an accessible way. I think the integration with parametric polymorphism is key to applying multimethods in other domains that may want modular multimethods but not multiple inheritance.

The formalization in Coq might also be of interest to some.

Another interesting point is Fortress' use of second-class intersection and union types to simplify type checking.
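
As a toy illustration of the exclusion relation -- far cruder than the paper's actual rules, and entirely my own simplification: two nominal types exclude each other if some pair of their supertypes is declared exclusive, and different instances of a polymorphic constructor exclude each other when their arguments do. In OCaml:

    type ty = Base of string | App of string * ty

    (* A tiny nominal hierarchy and one declared ("nominal") exclusion. *)
    let supers = [ "Int", ["Num"]; "Float", ["Num"]; "Num", []; "String", [] ]
    let declared = [ ("Num", "String") ]

    let rec ancestors t =
      t :: List.concat_map ancestors
             (try List.assoc t supers with Not_found -> [])

    let rec excludes a b =
      match a, b with
      | Base x, Base y ->
          List.exists (fun p ->
              List.exists (fun q ->
                  List.mem (p, q) declared || List.mem (q, p) declared)
                (ancestors y))
            (ancestors x)
      | App (c, s), App (d, t) -> c <> d || excludes s t
      | _ -> true   (* toy assumption: base and applied types exclude *)

    (* excludes (Base "Int") (Base "String")                        = true  *)
    (* excludes (App ("List", Base "Int"))
                (App ("List", Base "String"))                       = true  *)
    (* excludes (Base "Int") (Base "Float")                         = false *)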

Breaking Through the Normalization Barrier: A Self-Interpreter for F-omega

Breaking Through the Normalization Barrier: A Self-Interpreter for F-omega, by Matt Brown and Jens Palsberg:

According to conventional wisdom, a self-interpreter for a strongly normalizing λ-calculus is impossible. We call this the normalization barrier. The normalization barrier stems from a theorem in computability theory that says that a total universal function for the total computable functions is impossible. In this paper we break through the normalization barrier and define a self-interpreter for System Fω, a strongly normalizing λ-calculus. After a careful analysis of the classical theorem, we show that static type checking in Fω can exclude the proof’s diagonalization gadget, leaving open the possibility for a self-interpreter. Along with the self-interpreter, we program four other operations in Fω, including a continuation-passing style transformation. Our operations rely on a new approach to program representation that may be useful in theorem provers and compilers.

I haven't gone through the whole paper, but their claims are compelling. They have created self-interpreters for System F, System Fω and System Fω+, which are all strongly normalizing typed languages. Previously, the only instance of this for a typed language was Girard's System U, which is not strongly normalizing. The linchpin appears in this paragraph on page 2:

Our result breaks through the normalization barrier. The conventional wisdom underlying the normalization barrier makes an implicit assumption that all representations will behave like their counterpart in the computability theorem, and therefore the theorem must apply to them as well. This assumption excludes other notions of representation, about which the theorem says nothing. Thus, our result does not contradict the theorem, but shows that the theorem is less far-reaching than previously thought.

Pretty cool, if it isn't too complicated to use in any given language. It could let one move some previously non-type-safe runtime features into type-safe libraries.
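
This is not the paper's Fω self-interpreter, but as a small, hedged OCaml taste of the general direction: with a typed representation of terms (here a GADT), evaluation is total, tag-free, and type-preserving, so "running" representations is compatible with strong typing.

    type _ expr =
      | Lit : int -> int expr
      | Add : int expr * int expr -> int expr
      | Lam : ('a -> 'b expr) -> ('a -> 'b) expr
      | App : ('a -> 'b) expr * 'a expr -> 'b expr

    (* eval cannot fail at runtime: a well-typed representation
       evaluates to a value of the corresponding OCaml type. *)
    let rec eval : type a. a expr -> a = function
      | Lit n -> n
      | Add (x, y) -> eval x + eval y
      | Lam f -> fun v -> eval (f v)
      | App (f, x) -> (eval f) (eval x)

    (* eval (App (Lam (fun x -> Add (Lit x, Lit 1)), Lit 41)) = 42 *)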

Dependent Types for Low-Level Programming

Dependent Types for Low-Level Programming by Jeremy Condit, Matthew Harren, Zachary Anderson, David Gay, and George C. Necula:

In this paper, we describe the key principles of a dependent type system for low-level imperative languages. The major contributions of this work are (1) a sound type system that combines dependent types and mutation for variables and for heap-allocated structures in a more flexible way than before and (2) a technique for automatically inferring dependent types for local variables. We have applied these general principles to design Deputy, a dependent type system for C that allows the user to describe bounded pointers and tagged unions. Deputy has been used to annotate and check a number of real-world C programs.

A conceptually simple approach to verifying the safety of C programs, which proceeds in three phases: (1) infer which locals hold pointer bounds, (2) a flow-insensitive checking phase introduces runtime assertions using those locals, and (3) a flow-sensitive optimization pass removes the assertions it can prove always hold.

You're left with a program that ensures runtime safety with as few runtime checks as possible, and the resulting C program is compiled with gcc, which can perform its own optimizations.
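
Deputy itself annotates C, but a hedged OCaml rendering of the workflow may help: the bound travels with the pointer, phase 2 inserts a check at each access, and phase 3 may discharge checks it can prove redundant. The names here are illustrative.

    (* A value paired with its bound, like Deputy's count annotations. *)
    type 'a bounded = { data : 'a array; count : int }

    let get (b : 'a bounded) (i : int) : 'a =
      (* phase 2: a flow-insensitive pass inserts a runtime assertion *)
      assert (0 <= i && i < b.count);
      b.data.(i)

    let sum (b : int bounded) : int =
      let s = ref 0 in
      for i = 0 to b.count - 1 do
        (* phase 3: a flow-sensitive pass could prove this particular
           check always holds (0 <= i < count by the loop bounds) and
           remove it *)
        s := !s + get b i
      done;
      !s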

This work is from 2007, and the project grew into the Ivy language, a C dialect that is fully backwards compatible with C if you #include a small header file providing the extensions.

Its application to C probably won't get much uptake at this point, but I can see this being useful as a compiler plugin to verify unsafe Rust code.
