Type Systems as Macros

Type Systems as Macros, by Stephen Chang, Alex Knauth, Ben Greenman:

We present TURNSTILE, a metalanguage for creating typed embedded languages. To implement the type system, programmers write type checking rules resembling traditional judgment syntax. To implement the semantics, they incorporate elaborations into these rules. TURNSTILE critically depends on the idea of linguistic reuse. It exploits a macro system in a novel way to simultaneously type check and rewrite a surface program into a target language. Reusing a macro system also yields modular implementations whose rules may be mixed and matched to create other languages. Combined with typical compiler and runtime reuse, TURNSTILE produces performant typed embedded languages with little effort.

This looks pretty awesome considering it's not limited to simply typed languages, but extends all the way to System F and F-omega! Even better, they can reuse previous type systems to define new ones, thereby reducing the effort needed to implement more expressive type systems. All code and further details are available here, and here's a blog post where Ben Greenman further discusses the related "type tailoring"; of course, both of these are directly related to Active Libraries.

Taken to its extreme: why not have an assembler with a powerful macro system of this sort as your host language, on which every high-level language would be built? I'm not sure whether this approach would extend that far, but it's an interesting idea. You'd need a cpp-like, highly portable macro tool, and porting to a new platform would consist of writing architecture-specific macros for some core language, like System F.

This work may also conceptually dovetail with another thread discussing fexprs and compilation.


So, it's in Lisp, huh..

is this new?

This article is not original in its main thrust as described in the abstract. The use of judgments to fabricate new types, and of macros to design new typed languages, was anticipated by the open-science language Shen, which was released in 2011. You'll find a 30-minute video describing Shen on the home page, www.shenlanguage.org. Shen was first written in Lisp but now exists on many platforms (ports to Java, the JVM, JS, Clojure, C, Scheme, CL, Emacs Lisp, Python, Haskell, and Ruby).

Prior to that we had Qi (2005), which also used 'judgments' in pursuit of the same goal, and, if you really want to go back, SEQUEL, described in 'A Language for Implementing Arbitrary Logics' at the International Joint Conference on Artificial Intelligence (1993).
Both were done in Lisp.

Anyway, here are some leads if you want to follow them and judge for yourselves.

1. The Shen appeal video of 2013, which shows the scope of our work (about 5 minutes): http://shenlanguage.org/appeal.mp4

2. The Shen introductory video of 2017, which explains Shen without making any background assumptions (about 30 minutes).
See http://shenlanguage.org/. In that video you'll find a program using macros and judgments to form an ML-style
type system. The code is lifted from The Book of Shen (2015).

3. The OS Shen Manual: http://shenlanguage.org/learn-shen/index.html

4. Really old, but my invited 2008 talk on Qi, 'Lisp for the C21', paid for by sponsors: http://shenlanguage.org/l21.wmv

5. My 2015 essay 'In Defence of Open Science', where I discuss the closed peer-review process vs. open science in the
context of Wikipedia and the abuses of the traditional system: http://www.marktarver.com/openscience.html

6. Rather old, but a history page of the project from 1990 to 2014: http://shenlanguage.org/history.html

7. For the really keen, The Book of Shen (2015), 3rd edition, which discusses the whole technology and the algorithms behind it.

As a group we've been around for a long time; we have nearly 500 members. The Shen kernel went open source two years ago, and we've got a group of OS enthusiasts who are now doing really well improving and porting it. Go to the newsgroup if you want to learn more and participate in the project.

Mark Tarver

comparison with Shen

Hi, thanks for your interest.

Our paper presents a method for implementing typed DSLs, particularly type checkers for these DSLs, using only an off-the-shelf macro system, namely Racket's.

Our method is pragmatic since it is compatible with the way Racket programmers already implement (untyped) DSLs: as a series of macro definitions. We show that these DSL creators may add type checking to their language simply by extending their macro definitions with a protocol for communicating the necessary type information. As a result, type checking is performed as part of macro expansion, hence "type systems as macros". Finally, we add a syntactic layer (Turnstile) that enables defining these type-checking macros using a declarative syntax that roughly resembles conventional type rule specifications.
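To make the idea concrete, here is a minimal sketch, in Python rather than Racket, and with an invented toy language rather than the paper's actual API: each rule of the "expander" simultaneously type-checks a surface form and rewrites it into a type-erased target form, which is the essence of checking types during expansion.

```python
# Illustrative sketch only (Python, not Racket; invented toy language,
# not the paper's API): each rule both type-checks a surface form and
# rewrites it into a type-erased target form, so checking happens
# "during expansion".

def expand(expr, env):
    """Return (target_expr, type), or raise TypeError on a type error."""
    if isinstance(expr, int):                       # integer literal
        return expr, "Int"
    if isinstance(expr, str):                       # variable reference
        return expr, env[expr]
    op, *args = expr
    if op == "lam":                                 # ("lam", x, T, body)
        x, t_in, body = args
        body_tgt, t_out = expand(body, {**env, x: t_in})
        return ("lam", x, body_tgt), ("->", t_in, t_out)
    if op == "app":                                 # ("app", fn, arg)
        fn_tgt, (_, t_in, t_out) = expand(args[0], env)
        arg_tgt, t_arg = expand(args[1], env)
        if t_arg != t_in:
            raise TypeError(f"expected {t_in}, got {t_arg}")
        return ("app", fn_tgt, arg_tgt), t_out
    raise SyntaxError(f"unknown form: {op}")

# Expansion yields the rewritten program together with its type:
print(expand(("app", ("lam", "x", "Int", "x"), 42), {}))
# → (('app', ('lam', 'x', 'x'), 42), 'Int')
```

In Turnstile the analogous information flows through the macro expander itself, so each such rule is an ordinary macro that can be defined, exported, and mixed into other languages.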

In contrast, Shen appears to already come with a full-featured type system out of the box. Programmers then extend this type system via a special-purpose framework that allows defining new datatypes in a sequent-calculus-style syntax. Thus Shen seems to fall under the "extensible type systems" line of work?

Implementing typed DSLs with extensible type systems is typically more restrictive than our approach because the base type system constrains both the features that *may* be added and the features that *must* be included in any DSL. Shen's extension framework seems sufficiently expressive that the former may not be much of a problem, but I think the latter still might be an issue? In other words, can a DSL writer take away unwanted type system features? For example, can one reuse Shen's type system to create a simply-typed DSL where polymorphism is not allowed? How about defining a new function application rule for a DSL with a type-and-effect system? Or how about a DSL with completely different base types, e.g., Int for integers instead of Number?

Implementation-wise, neither Shen's base type system nor its extension rules appears to desugar into plain macro definitions. Instead, I think they correspond to rules in an underlying logic language? In short, I do not see "type systems as macros" in any of the linked examples, and thus I do not understand the similarity assertions. Would "type extensions as logic programs" be a more accurate description of Shen?


Isn't this basically showing that, in Racket, macro matching is essentially unification, which is the principal operation in most type systems?

Effectively the macros form a committed choice logic language.

Not unification

No, Racket's syntax matching is not a unification algorithm. Culpepper and Felleisen's ICFP 2010 paper has more details if you are interested.
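For readers unfamiliar with the distinction being drawn here: pattern matching is one-way (variables may occur only in the pattern), while unification solves for variables on both sides. A small illustration, in Python and with hypothetical code, not Racket's actual algorithm:

```python
# Hypothetical illustration (not Racket's algorithm): one-way pattern
# matching vs. two-way unification. Variables are strings starting
# with "?" and are permitted only in the *pattern*.

def match(pattern, term, subst=None):
    """One-way matching: return a substitution dict, or None on failure."""
    subst = dict(subst or {})
    if isinstance(pattern, str) and pattern.startswith("?"):
        if pattern in subst:                 # variable seen before: must agree
            return subst if subst[pattern] == term else None
        subst[pattern] = term                # bind pattern variable
        return subst
    if isinstance(pattern, tuple) and isinstance(term, tuple) \
            and len(pattern) == len(term):
        for p, t in zip(pattern, term):      # match componentwise
            subst = match(p, t, subst)
            if subst is None:
                return None
        return subst
    return subst if pattern == term else None

print(match(("f", "?x"), ("f", "a")))   # {'?x': 'a'} — variable in pattern
print(match(("f", "a"), ("f", "?y")))   # None — unification would give {'?y': 'a'}
```

The one-way behavior is the key point: matching alone does not give the two-way information flow that unification-based type inference relies on, which is consistent with the reply above.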


If you can construct a functioning type system, then matching must be equivalent to unification; otherwise, how can the type system formed be a consistent logic? Or am I reading too much into this paper? Have you simply built a unification algorithm as a meta-program? In other words, macros form a Turing-complete compile-time language, so they can implement any program, one of which is type inference/checking. You then use reflection to type the target program?

more than matching

Yes, we need more than matching. You're right that any sufficiently expressive metaprogramming framework can be used to implement a type checker. Our focus is more on the pragmatics. We want everyday programmers to be able to create typed DSLs if they want. Ours is just one possible approach, though preliminary feedback from users has been encouraging.