Type Theory
Breaking Through the Normalization Barrier: A Self-Interpreter for F-omega, by Matt Brown and Jens Palsberg:
According to conventional wisdom, a self-interpreter for a strongly normalizing λ-calculus is impossible. We call this the normalization barrier. The normalization barrier stems from a theorem in computability theory that says that a total universal function for the total computable functions is impossible. In this paper we break through the normalization barrier and define a self-interpreter for System Fω, a strongly normalizing λ-calculus. After a careful analysis of the classical theorem, we show that static type checking in Fω can exclude the proof's diagonalization gadget, leaving open the possibility for a self-interpreter. Along with the self-interpreter, we program four other operations in Fω, including a continuation-passing style transformation. Our operations rely on a new approach to program representation that may be useful in theorem provers and compilers.
I haven't gone through the whole paper, but their claims are compelling. They have created self-interpreters in System F, System Fω and System Fω+, which are all strongly normalizing typed languages. Previously, the only instance of this for a typed language was Girard's System U, which is not strongly normalizing. The lynchpin appears in this paragraph on page 2:
Our result breaks through the normalization barrier. The conventional wisdom underlying the normalization barrier makes an implicit assumption that all representations will behave like their counterpart in the computability theorem, and therefore the theorem must apply to them as well. This assumption excludes other notions of representation, about which the theorem says nothing. Thus, our result does not contradict the theorem, but shows that the theorem is less far-reaching than previously thought.
Pretty cool if this isn't too complicated in any given language. It could let one move some previously non-type-safe runtime features into type-safe libraries.
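To see in miniature how a typed representation can sidestep the diagonalization gadget, here is a toy Haskell sketch (illustrative only, not the paper's Fω encoding): a GADT representation of a small typed language whose evaluator is total, because the meta-level types exclude ill-typed terms outright.

```haskell
{-# LANGUAGE GADTs #-}

-- A typed (GADT) representation of a tiny object language. The
-- host types index the representation, so only well-typed object
-- terms can be constructed -- the classical proof's diagonalization
-- gadget is unrepresentable. (All names here are illustrative.)
data Exp a where
  Lit :: Int -> Exp Int
  Add :: Exp Int -> Exp Int -> Exp Int
  Var :: a -> Exp a                      -- embed a host value (HOAS helper)
  Lam :: (Exp a -> Exp b) -> Exp (a -> b)
  App :: Exp (a -> b) -> Exp a -> Exp b

-- A total, type-preserving interpreter: an Exp a evaluates to an a.
eval :: Exp a -> a
eval (Lit n)   = n
eval (Add x y) = eval x + eval y
eval (Var v)   = v
eval (Lam f)   = \x -> eval (f (Var x))
eval (App f a) = eval f (eval a)
```

For example, `eval (App (Lam (\x -> Add x (Lit 1))) (Lit 41))` reduces to `42` without any runtime type tags or possibility of getting stuck.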
Dependent Types for Low-Level Programming by Jeremy Condit, Matthew Harren, Zachary Anderson, David Gay, and George C. Necula:
In this paper, we describe the key principles of a dependent type system for low-level imperative languages. The major contributions of this work are (1) a sound type system that combines dependent types and mutation for variables and for heap-allocated structures in a more flexible way than before and (2) a technique for automatically inferring dependent types for local variables. We have applied these general principles to design Deputy, a dependent type system for C that allows the user to describe bounded pointers and tagged unions. Deputy has been used to annotate and check a number of real-world C programs.
A conceptually simple approach to verifying the safety of C programs, which proceeds in 3 phases: 1. infer locals that hold pointer bounds, 2. flow-insensitive checking introduces runtime assertions using these locals, 3. flow-sensitive optimization removes the assertions that it can prove always hold.
You're left with a program that ensures runtime safety with as few runtime checks as possible, and the resulting C program is compiled with gcc, which can perform its own optimizations.
This work is from 2007, and the project grew into the Ivy language, which is a C dialect that is fully backwards compatible with C if you #include a small header file providing the extensions.
Its application to C probably won't get much uptake at this point, but I can see this as a useful compiler plugin to verify unsafe Rust code.
Freer Monads, More Extensible Effects, by Oleg Kiselyov and Hiromi Ishii:
We present a rational reconstruction of extensible effects, the recently proposed alternative to monad transformers, as the confluence of efforts to make effectful computations compose. Free monads and then extensible effects emerge from the straightforward term representation of an effectful computation, as more and more boilerplate is abstracted away. The generalization process further leads to freer monads, constructed without the Functor constraint.
The continuation exposed in freer monads can then be represented as an efficient type-aligned data structure. The end result is the algorithmically efficient extensible effects library, which is not only more comprehensible but also faster than earlier implementations. As an illustration of the new library, we show three surprisingly simple applications: nondeterminism with committed choice (LogicT), catching IO exceptions in the presence of other effects, and the semi-automatic management of file handles and other resources through monadic regions.
We extensively use and promote the new sort of ‘laziness’, which underlies the left Kan extension: instead of performing an operation, keep its operands and pretend it is done.
This looks very promising, and includes some benchmarks comparing the heavily optimized and special-cased monad transformers against this new formulation of extensible effects using freer monads.
See also the reddit discussion.
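For flavour, here is a minimal sketch of the freer-monad construction (names and the toy state effect are ours, simplified from the paper; the real library replaces the bare function continuation with an efficient type-aligned sequence):

```haskell
{-# LANGUAGE GADTs #-}
import Control.Monad ((>=>))

-- The freer monad: a Monad for ANY effect signature f, with no
-- Functor constraint. Instead of mapping under f, we store the
-- continuation -- the "laziness" of the left Kan extension: keep
-- the operand and pretend the operation is already done.
data Freer f a where
  Pure   :: a -> Freer f a
  Impure :: f x -> (x -> Freer f a) -> Freer f a

instance Functor (Freer f) where
  fmap g (Pure a)      = Pure (g a)
  fmap g (Impure fx k) = Impure fx (fmap g . k)

instance Applicative (Freer f) where
  pure = Pure
  Pure g      <*> m = fmap g m
  Impure fx k <*> m = Impure fx ((<*> m) . k)

instance Monad (Freer f) where
  Pure a      >>= g = g a
  Impure fx k >>= g = Impure fx (k >=> g)  -- compose continuations

-- Inject one effect operation into the monad.
send :: f a -> Freer f a
send fx = Impure fx Pure

-- A toy state effect and its handler (illustrative names):
data St a where
  Get :: St Int
  Put :: Int -> St ()

runSt :: Int -> Freer St a -> (a, Int)
runSt s (Pure a)            = (a, s)
runSt s (Impure Get k)      = runSt s (k s)
runSt _ (Impure (Put s') k) = runSt s' (k ())

tick :: Freer St Int
tick = do { n <- send Get; send (Put (n + 1)); pure n }
```

Note that `St` is a plain GADT with no Functor instance, yet `Freer St` is a lawful monad, and `runSt 5 tick` evaluates to `(5, 6)`.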
Self-Representation in Girard's System U, by Matt Brown and Jens Palsberg:
In 1991, Pfenning and Lee studied whether System F could support a typed self-interpreter. They concluded that typed self-representation for System F "seems to be impossible", but were able to represent System F in Fω. Further, they found that the representation of Fω requires kind polymorphism, which is outside Fω. In 2009, Rendel, Ostermann and Hofer conjectured that the representation of kind-polymorphic terms would require another, higher form of polymorphism. Is this a case of infinite regress?
We show that it is not and present a typed self-representation for Girard's System U, the first for a λ-calculus with decidable type checking. System U extends System Fω with kind-polymorphic terms and types. We show that kind-polymorphic types (i.e. types that depend on kinds) are sufficient to "tie the knot" – they enable representations of kind-polymorphic terms without introducing another form of polymorphism. Our self-representation supports operations that iterate over a term, each of which can be applied to a representation of itself. We present three typed self-applicable operations: a self-interpreter that recovers a term from its representation, a predicate that tests the intensional structure of a term, and a typed continuation-passing-style (CPS) transformation – the first typed self-applicable CPS transformation. Our techniques could have applications from verifiably type-preserving metaprograms, to growable typed languages, to more efficient self-interpreters.
Typed self-representation has come up here on LtU in the past. I believe the best self-interpreter available prior to this work was a variant of Barry Jay's SF-calculus, covered in the paper Typed Self-Interpretation by Pattern Matching (and more fully developed in Structural Types for the Factorisation Calculus). These covered statically typed self-interpreters without resorting to undecidable type:type rules.
However, being combinator calculi, they're not very similar to most of our programming languages, and so self-interpretation was still an active problem. Enter Girard's System U, which features a more familiar type system with only kind * and kind-polymorphic types. However, System U is not strongly normalizing and is inconsistent as a logic. Whether self-interpretation can be achieved in a strongly normalizing language with decidable type checking is still an open problem.
John Corcoran, Second-order logic explained in plain English, in Logic, Meaning and Computation: Essays in Memory of Alonzo Church, ed. Anderson and Zelëny.
There is something a little bit Guy Steele-ish about trying to explain the fundamentals of second-order logic (SOL, the logic that Quine branded as set theory in sheep's clothing) and its model theory while avoiding any formalisation. This paper introduces the ideas of SOL by looking at logics with finite, countable and uncountable models, and then talks about FOL and SOL as complementary approaches to axiomatisation that are each deficient by itself. He ends with a plea for SOL as an essential tool, at least as a heuristic.
The Next Stage of Staging, by Jun Inoue, Oleg Kiselyov, Yukiyoshi Kameyama:
This position paper argues for type-level metaprogramming, wherein types and type declarations are generated in addition to program terms. Term-level metaprogramming, which allows manipulating expressions only, has been extensively studied in the form of staging, which ensures static type safety with a clean semantics with hygiene (lexical scoping). However, the corresponding development is absent for type manipulation. We propose extensions to staging to cover ML-style module generation and show the possibilities they open up for type specialization and overhead-free parametrization of data types equipped with operations. We outline the challenges our proposed extensions pose for semantics and type safety, hence offering a starting point for a long-term program in the next stage of staging research. The key observation is that type declarations do not obey scoping rules as variables do, and that in metaprogramming, types are naturally prone to escaping the lexical environment in which they were declared. This sets next-stage staging apart from dependent types, whose benefits and implementation mechanisms overlap with our proposal, but which does not deal with type-declaration generation. Furthermore, it leads to an interesting connection between staging and the logic of definitions, adding to the study's theoretical significance.
A position paper describing the next logical progression of staging to metaprogramming over types. Now with the true first-class modules of 1ML, perhaps there's a clearer way forward.
In this year's POPL, Bob Atkey made a splash by showing how to get from parametricity to conservation laws, via Noether's theorem:
Invariance is of paramount importance in programming languages and in physics. In programming languages, John Reynolds' theory of relational parametricity demonstrates that parametric polymorphic programs are invariant under change of data representation, a property that yields "free" theorems about programs just from their types. In physics, Emmy Noether showed that if the action of a physical system is invariant under change of coordinates, then the physical system has a conserved quantity: a quantity that remains constant for all time. Knowledge of conserved quantities can reveal deep properties of physical systems. For example, the conservation of energy is, by Noether's theorem, a consequence of a system's invariance under time-shifting.
In this paper, we link Reynolds’ relational parametricity with Noether’s theorem for deriving conserved quantities. We propose an extension of System Fω with new kinds, types and term constants for writing programs that describe classical mechanical systems in terms of their Lagrangians. We show, by constructing a relationally parametric model of our extension of Fω, that relational parametricity is enough to satisfy the hypotheses of Noether’s theorem, and so to derive conserved quantities for free, directly from the polymorphic types of Lagrangians expressed in our system.
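The programming-language half of the analogy can be seen in a few lines of Haskell. This is Wadler's classic "theorems for free" example, not the paper's Fω extension: any function of type `[a] -> [a]` cannot inspect its elements, so parametricity forces it to commute with `map`, just as Atkey's system forces time-shift-invariant Lagrangians to conserve energy.

```haskell
-- A polymorphic function: its type alone guarantees the free theorem
--     map g . f  ==  f . map g
-- for every g, because f can only rearrange, drop, or duplicate
-- elements -- never examine them.
f :: [a] -> [a]
f = reverse . take 2   -- any definition with this type satisfies the theorem

-- Check the free theorem on a sample input and element mapping.
check :: Bool
check = (map (+1) . f) [1, 2, 3, 4 :: Int] == (f . map (+1)) [1, 2, 3, 4]
```

Here both sides evaluate to `[3,2]`, so `check` is `True`; swapping in any other `f` of the same type preserves the equation.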
Earlier this week Microsoft Research Cambridge organised a Festschrift for Luca Cardelli. The preface from the book:
Luca Cardelli has made exceptional contributions to the world of programming languages and beyond. Throughout his career, he has reinvented himself every decade or so, while continuing to make true innovations. His achievements span many areas: software; language design, including experimental languages; programming language foundations; and the interaction of programming languages and biology. These achievements form the basis of his lasting scientific leadership and his wide impact.
...
Luca is always asking "what is new", and is always looking to the future. Therefore, we have asked authors to produce short pieces that would indicate where they are today and where they are going. Some of the resulting pieces are short scientific papers, or abridged versions of longer papers; others are less technical, with thoughts on the past and ideas for the future. We hope that they will all interest Luca.
Hopefully the videos will be posted soon.
Philip Wadler posts his exchange with William Howard on the history of the Curry-Howard correspondence: Howard on Curry-Howard.
Type soundness and freedom for Mezzo,
Thibaut Balabonski, François Pottier, and Jonathan Protzenko,
2014
Full paper
Presentation slides
The programming language Mezzo is equipped with a rich type system that controls aliasing and access to mutable memory. We incorporate shared-memory concurrency into Mezzo and present a modular formalization of its core type system, in the form of a concurrent λ-calculus, which we extend with references and locks. We prove that well-typed programs do not go wrong and are data-race free. Our definitions and proofs are machine-checked.
The Mezzo programming language has been mentioned recently on LtU. The article above is however not so much about the practice of Mezzo or justification of its design choices (for this, see Programming with Permissions in Mezzo, François Pottier and Jonathan Protzenko, 2013), but a presentation of its soundness proof.
I think this paper is complementary to more practice-oriented ones, and remarkable for at least two reasons:
 It is remarkably simple for a complete soundness (and race-freedom) proof of a programming language with higher-order functions, mutable state, strong update, linear types, and dynamic borrowing. This is one more confirmation of the simplifying effect of mechanized theorem proving.
 It is the first soundness proof of a programming language that is strongly inspired by Separation Logic. (Concurrent) Separation Logic has been a revolution in the field of programming logics, but it had until now not been part of a full-fledged language design, and this example is bound to be followed by many others. I expect the structure of this proof to be reused many times in the future.
