Self-Representation in Girard’s System U

Self-Representation in Girard’s System U, by Matt Brown and Jens Palsberg:

In 1991, Pfenning and Lee studied whether System F could support a typed self-interpreter. They concluded that typed self-representation for System F “seems to be impossible”, but were able to represent System F in Fω. Further, they found that the representation of Fω requires kind polymorphism, which is outside Fω. In 2009, Rendel, Ostermann and Hofer conjectured that the representation of kind-polymorphic terms would require another, higher form of polymorphism. Is this a case of infinite regress?

We show that it is not and present a typed self-representation for Girard’s System U, the first for a λ-calculus with decidable type checking. System U extends System Fω with kind polymorphic terms and types. We show that kind polymorphic types (i.e. types that depend on kinds) are sufficient to “tie the knot” – they enable representations of kind polymorphic terms without introducing another form of polymorphism. Our self-representation supports operations that iterate over a term, each of which can be applied to a representation of itself. We present three typed self-applicable operations: a self-interpreter that recovers a term from its representation, a predicate that tests the intensional structure of a term, and a typed continuation-passing-style (CPS) transformation – the first typed self-applicable CPS transformation. Our techniques could have applications from verifiably type-preserving metaprograms, to growable typed languages, to more efficient self-interpreters.

Typed self-representation has come up here on LtU in the past. I believe the best self-interpreter available prior to this work was a variant of Barry Jay's SF-calculus, covered in the paper Typed Self-Interpretation by Pattern Matching (and more fully developed in Structural Types for the Factorisation Calculus). Both papers give statically typed self-interpreters without resorting to undecidable type:type rules.

However, being combinator calculi, they're not very similar to most of our programming languages, and so self-interpretation was still an active problem. Enter Girard's System U, which features a more familiar type system with only kind * and kind-polymorphic types. However, System U is not strongly normalizing and is inconsistent as a logic. Whether self-interpretation can be achieved in a strongly normalizing language with decidable type checking is still an open problem.
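For readers who haven't met the idea before, here is a minimal, deliberately untyped Python sketch of what a representation plus self-interpreter looks like. The paper's achievement is doing this for System U while keeping the representation statically typed; the constructors and `unquote` below are illustrative names only, not anything from the paper.

```python
# A minimal, deliberately untyped sketch of representation + interpretation.
# Terms are quoted as plain data; `unquote` (a self-interpreter in spirit)
# recovers the value a quoted term denotes. Nothing here is from the paper.

def Var(name):       return ("var", name)
def Lam(name, body): return ("lam", name, body)
def App(fun, arg):   return ("app", fun, arg)

def unquote(term, env=None):
    """Recover a value from its quoted representation."""
    env = env or {}
    tag = term[0]
    if tag == "var":
        return env[term[1]]
    if tag == "lam":
        _, name, body = term
        return lambda v: unquote(body, {**env, name: v})
    if tag == "app":
        _, fun, arg = term
        return unquote(fun, env)(unquote(arg, env))
    raise ValueError(f"unknown tag: {tag}")

identity = unquote(Lam("x", Var("x")))  # quoted identity, then recovered
```

The hard part, which this sketch entirely dodges, is giving `unquote` a type like `Exp T -> T` inside the represented language itself.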


Possibilities

> Whether self-interpretation can be achieved in a strongly normalizing language with decidable type checking is still an open problem.

Isn't the answer trivially "no," by Curry-Howard and Gödel's incompleteness theorems?

Anyway, the big deal here is:

"Our techniques could have applications... to growable typed languages..."

Curry-Howard tells us types are logical propositions. Mathematicians from Bayes to Laplace to Keynes to Cox to Jaynes have told us probability is an extension of logic, or (as I prefer) that logical truth/falsehood are limiting cases of probability. Alternatively, probability/logic are measures of information, in the algorithmic information theory sense. So a "growable typed language" seems like it would have obvious benefits to machine learning and probabilistic programming generally. In other words, it's not inherently a big deal not to be strongly normalizing or to have decidable type checking by default; that just means the system doesn't yet have enough information. But when the system acquires new information, it can synthesize new types and their relations, "growing" the typed language.

It's hard to overstate how significant this is.

Isn't the answer trivially

Isn't the answer trivially "no," by Curry-Howard and Gödel's incompleteness theorems?

Here's one way to avoid the prohibition of Gödel's incompleteness theorem. If you look at the way Gödel's theorem is usually informally stated, it's that if a sufficiently powerful logic can prove its own consistency, then it's inconsistent. But that's technically wrong. It's not enough that the logic be as powerful as arithmetic -- the consistency proof must also reside within, or be within the grasp of, that strong fragment of arithmetic. If the arithmetic fragment "understands" the workings of the consistency proof, then you're in trouble with Gödel.

Similarly with self-evaluation: what you can do is limit the ways that you're allowed to use the self-evaluator. It's the same idea as above, under Curry-Howard. I've sketched out this approach in an older LtU post.

Whether self-interpretation can be achieved in a strongly normalizing language with decidable type checking is still an open problem.

Hmm. I would consider the system I've sketched out above to answer this question in the affirmative. It probably isn't in the form the answer the authors were looking for, but I think it satisfies all of the stated conditions.

For me, the goal is not to get around the deep limitations on deductive systems demonstrated by Gödel. I think these techniques are mostly interesting because we want to be able to use our theorem provers to check and run other implementations of themselves.

It's hard to overstate how significant this is.

And yet I think you may have managed it! :)

Similarly with self

Similarly with self-evaluation: what you can do is limit the ways that you're allowed to use the self-evaluator. It's the same idea as above, under Curry-Howard. I've sketched out this approach in an older LtU post.

Indeed, I linked to a comment in your thread. However, I must remain skeptical until I see an actual implementation!

Another possibility for achieving consistency and a form of completeness is an approach like Gentzen's consistency proof of arithmetic.

Implementation

You probably need a soundness proof for the logic, too. I'm not going to spend any more time on this as a toy problem, but I have built this trick into my language's logic, for which I'll hopefully post something soon.

Eval is a message (not a procedure)

Eval is a message (not a procedure) using the following interface definition:

Interface Expression◅aType▻ with eval[_:Environment]↦ aType

PS. The above cannot be expressed in Java :-(
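For comparison, here is a loose, dynamically typed Python analogue of eval-as-a-message (all names illustrative, and not ActorScript; Python's dynamic typing sidesteps the `aType` parametrization entirely): each expression object responds to an eval message carrying an environment, rather than being passed to a single global eval procedure.

```python
# Loose sketch of "eval is a message": evaluation is dispatched to each
# expression object rather than performed by one global procedure.
# All class names are illustrative; this is not ActorScript.

class Variable:
    def __init__(self, name):
        self.name = name
    def eval(self, env):
        return env[self.name]

class Plus:
    def __init__(self, left, right):
        self.left, self.right = left, right
    def eval(self, env):
        # each subexpression receives its own eval message
        return self.left.eval(env) + self.right.eval(env)

expr = Plus(Variable("x"), Variable("y"))
```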

If you look at the way

If you look at the way Gödel's theorem is usually informally stated, it's that if a sufficiently powerful logic can prove its own consistency, then it's inconsistent. But that's technically wrong. It's not enough that the logic be as powerful as arithmetic -- the consistency proof must also reside within, or be within the grasp of, that strong fragment of arithmetic. If the arithmetic fragment "understands" the workings of the consistency proof, then you're in trouble with Gödel.

For Gödel's Second Theorem, the logic has to internalize the reasoning about whether the proposition is provable. For the First Theorem, it suffices that the meta-mathematician observe that reasoning. But the reasoning involved is simple.

Numbers: irrelevant to inferential incompleteness

Numbers are basically irrelevant to inferential incompleteness.

The key aspects of the Church/Turing proof of inferential incompleteness are:
* that the theory be closed, i.e., that its theorems are computationally enumerable
* that the theory be computationally undecidable, i.e., that the decision problem for theoremhood is unsolvable

Self-interpretation for Agda

Typed Syntactic Meta-programming (ICFP'13) uses exactly the workaround "add eval as a builtin" (which I think is what you describe). They also sketch how to apply (variants of) Gödel's proof to show this is (probably) necessary, though nobody has filled in the details (as of today).

The problem with "eval as a builtin" is that you often need to write arbitrary folds on terms, not just evaluation — you'd often want other semantics, including term transformations and general metaprogramming. However, that paper shows some evidence that powerful metaprogramming is indeed possible in that system.
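To make the "arbitrary folds" point concrete, here is a small Python sketch (illustrative names, not from either paper): one generic fold over a term representation, instantiated twice, once as a size measure and once as a printer. An evaluator would be just one more instance, which is exactly what "eval as a builtin" cannot give you.

```python
# One generic fold over a tiny term datatype; evaluation, size, printing,
# CPS transformation, etc. are all instances. Names are illustrative.

def fold(term, on_var, on_lam, on_app):
    tag = term[0]
    if tag == "var":
        return on_var(term[1])
    if tag == "lam":
        return on_lam(term[1], fold(term[2], on_var, on_lam, on_app))
    if tag == "app":
        return on_app(fold(term[1], on_var, on_lam, on_app),
                      fold(term[2], on_var, on_lam, on_app))
    raise ValueError(tag)

def size(term):   # one semantics: count the nodes
    return fold(term,
                on_var=lambda _name: 1,
                on_lam=lambda _name, body: 1 + body,
                on_app=lambda fun, arg: 1 + fun + arg)

def show(term):   # a second, unrelated semantics from the same fold
    return fold(term,
                on_var=lambda name: name,
                on_lam=lambda name, body: f"(\\{name}. {body})",
                on_app=lambda fun, arg: f"({fun} {arg})")

term = ("app", ("lam", "x", ("var", "x")), ("var", "y"))
```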

Not "add eval as a builtin"

The approach I described in the linked post is actually pretty flexible and is definitely not "add eval as a builtin."

The situation I describe in the linked post is one where the logic is able to construct a model for any of its terms, but not able to quantify over all of them in one proposition. So to call the total eval, you need a proof that the term X you're evaluating isn't divergent, and this proof requires N levels of nested universes. If you tried to eval (eval [X]), where the brackets indicate another level of quotation/encoding, you'd need N+1 levels of universes to prove that correct. However, there are actually arbitrarily many levels (the logic knows there are N levels and can prove N>M for any M, but can't prove forall M. N>M). Thus every provably total term can be encoded and evaluated with eval, but terms that recurse on eval to arbitrary depth cannot be proven to terminate.
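The stratification described above can be caricatured in a few lines of Python (everything here is illustrative; in particular, a depth check merely stands in for a genuine termination proof at some universe level): each quotation adds a level, and an evaluator with budget N refuses terms quoted more than N deep, so eval (eval [X]) always needs a bigger budget than eval [X].

```python
# Caricature of level-indexed eval: quotation depth stands in for the
# universe level needed to prove the evaluation terminates. Purely
# illustrative; a real system would discharge an actual proof here.

def quote(term):
    return ("quote", term)

def quotation_depth(term):
    depth = 0
    while isinstance(term, tuple) and term and term[0] == "quote":
        depth += 1
        term = term[1]
    return depth

def eval_at(level, term):
    """Fully unquote `term`, if its depth fits within the level budget."""
    if quotation_depth(term) > level:
        raise ValueError("not provably terminating at this universe level")
    while isinstance(term, tuple) and term and term[0] == "quote":
        term = term[1]
    return term

once = quote(42)     # eval [42]       : needs level >= 1
twice = quote(once)  # eval (eval [42]): needs level >= 2
```

Every fixed nesting depth succeeds at some finite level, but no single level suffices for all depths, which mirrors "can prove N>M for any M, but can't prove forall M. N>M".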

As I mentioned, I'm incorporating this into my logic, which is, IMHO, cool for other reasons. I only decided to bolt this idea onto it because it looked so easy to do and it lets my logic serve as its own meta-theory. I hopefully will have finished the write-up and soundness proof in the next month or two. I'd love feedback :).

Thanks for the link, though. I'll definitely check out that paper.

Edit: Hmmmm... after skimming the paper you linked to, I don't think this is that similar. It looks to me like they are just adding a meta-programming facility on top of Agda, but that facility only creates ordinary Agda functions, not functions that are themselves capable of this new meta-programming. So it looks like a meta-programming layer, but not a meta-circular system. (Please correct me if I'm wrong!)

Looks nice

but not able to quantify over all of them in one proposition.

I somewhat agree. It wouldn't surprise me if the manner in which people have thought about quantification turns out to be wrong.

Isn't the answer trivially

Isn't the answer trivially "no," by Curry-Howard and Gödel's incompleteness theorems?

No, because the type system could be inconsistent as a logic, like System U is, just in a way that's strongly normalizing.

Or there could perhaps be an interesting type system no more powerful than Presburger arithmetic, thus achieving consistency and completeness.

We may be missing a conjunct

> No, because the type system could be inconsistent as a logic, like System U is, just in a way that's strongly normalizing.

Wouldn't the type system's being inconsistent as a logic imply that type (proof) checking would diverge, i.e. not be decidable? They could allow that, taming it the way C++ does (--template-instantiation-depth) or not taming it the way Scala doesn't (compiler crashes). But I'd have thought they'd have called that out; or rather, anyone claiming to have come up with self-representation in a strongly normalizing language with decidable type checking would.

Looks like the opposite to me

I imagine that for an inconsistent logic I could trivially write a terminating, strongly normalizing prover that just answers "true" or "false", depending on your mood.

I don't have the feeling this paper is being discussed enough. I don't get the ramifications of what they did. I am not even sure what a self-interpreter for a logic entails.

Types are not logical predicates.

Types are not logical predicates.

For example, first-order logic has logical predicates but cannot express the categorical Peano/Dedekind induction axiom for the natural numbers.

Is that a complete argument?

What does the ability to express something in first-order logic have to do with whether something is a type or not?

The problem isn't that Hewitt makes wild claims

The problem is that no-one on LtU is informed enough to easily debunk the claims made.

Just a thing I noticed.

The problem is that no-one

The problem is that no-one on LtU is informed enough to easily debunk the claims made.

Not so. One does get the impression he wants to think it's the case, and wants others to think it's the case. But the problem is more that his writings usually don't pin anything down clearly enough to provide a target for specific criticism. You can make bold assertions, rub people's noses in your Credentials, and invoke others with even more impressive credentials who are safely dead and so unlikely to object to being invoked (best of all is if you can quote something an impressive authority said that's ambiguous, obscure, or both), make gestures in some general direction, and leave experts with nothing to criticize except your lack of clarity, which you can often deflect simply by accusing them of having not read your work carefully enough. From time to time some point of his has provided a clear target, and then he doesn't fare well, as with that incident a while back where we traced his claim that actors is more powerful than Turing machines to a Quote From Authority, and pointed out that the passage he was quoting doesn't mean what he was claiming it meant.

I can't help feeling he's winning by succeeding in getting us to talk about him, though. A troll wants to get a reaction, and whether he means to be or not, his behavior is troll-ish. (There should be a name for someone who trolls in practice regardless of whether they mean to.) Which is part of why I now do my best never to respond directly to anything he says, no matter how outrageous or inflammatory.

Actors are more powerful than nondeterministic Turing Machines

John,

You have failed to hit your "clear target" :-(
For decades, numerous authors have published articles showing that Actors are more powerful than nondeterministic Turing Machines because Actors implement unbounded nondeterminism.

By now, one might think that everyone had caught on.

Regards,
Carl

PS. You cannot win substantive arguments by simply calling me bad names :-(

Poe's law

You are a troll iff you act as a troll. It doesn't matter what motivates your behavior: only that it has trollish effect.

If I had to guess at his motivation, I think you could describe Hewitt as an "honest troll", in the sense that he seems motivated solely by pushing his logical framework. The unfortunate way in which he goes about this seems natural, not affected; I'm sure any offence, or derailing, or long and pointless threads, are unintended consequences...

You can do this too in ActorScript!

And it will be exponentially faster!

(Sorry for that. I've been too long on LtU. Gonna take a walk and grab a coffee.)

The following is not a comment


The preceding was not a comment


Does ActorScript already have a typed self-interpreter?

Does ActorScript already have a typed self-interpreter?

I couldn't be sure whether ActorScript satisfies all of the conditions.

I haven't found the type

I haven't found the type semantics in your paper. There seems to be an operational semantics there, but no type analysis.

Type semantics of ActorScript are in the meta system

The type semantics of ActorScript are in the meta system.

Yah. That's what the other

Yah. That's what the other guy said too.

Meta system is on pages 51-92 of HAL ActorScript article

The meta system is on pages 51-92 of the ActorScript article on HAL.