Universal Type System

Hi Folks

I was just having a discussion with a friend, and this question might not seem logical, but: just like any code structure can be represented by closures and any control flow can be modelled by continuations, is there any type system in which all type systems can be represented? Does this concept even make sense?

Not yet...

is there any type system in which all type systems can be represented?

In the context you ask the question, the answer is, no, there is no widely-accepted "grand unified theory" for all type systems generated by some simple, uniform, computational mechanism.

(Yes, I have hedged this carefully ;-) )

OK, I'll Bite

Why aren't Uniform Pure Type Systems precisely that?

What do you mean by "is"? ;-)

Why aren't Uniform Pure Type Systems precisely that?

Note my saying "widely-accepted": to wit, the page you reference doesn't even have working links and a quick Google search doesn't provide much more reference for that term.

A second proviso I would add is that there may be "type systems" of interest that don't fit that (or some other) candidate framework.

Beyond that, I'll reserve my personal opinion about your candidate until I can find some more details on the work you referred to.

If you want to make the case, go ahead. ;-)

No, I have a real answer (re universal types)

I think the poster is asking a question that can be usefully answered this way:

There are many universal data types. Perhaps the simplest is an idealized "cons pair" defined this way:

1. "nil" is a cons pair
2. If A and B are cons pairs (perhaps identical) then (A . B) is a cons pair.

A notable extension to this would admit mutation -- a way to modify a cons pair by changing one of the two "parts". I won't try to define that axiomatically off the top of my head.

It's not hard to see that you could make up rules to represent other kinds of data as cons pairs. You can encode numbers, lists (of course), trees (of course), type tags, procedures, etc. You can encode anything.
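
For instance, here is a minimal sketch in OCaml of that universe and two of those encodings. The type and function names are mine, purely for illustration:

    type cons = Nil | Pair of cons * cons

    (* Encode the natural number n as n right-nested pairs:
       0 = nil, n+1 = (nil . n). *)
    let rec of_int n = if n <= 0 then Nil else Pair (Nil, of_int (n - 1))

    let rec to_int = function
      | Nil -> 0
      | Pair (_, rest) -> 1 + to_int rest

    (* Lists encode directly as (x . rest) chains terminated by nil,
       so the two-element list [1; 2] becomes: *)
    let example = Pair (of_int 1, Pair (of_int 2, Nil))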

Moreover, I claim that if you have a realistic model of how computer memory performs (how long reads and writes take), then, formally, a cons-pair-only representation of data is asymptotically just as time-efficient and space-efficient as anything else you can think of.

Cons-pairs are interesting because they are so simple. I'm not sure what "simplest" means in the context of a universal data structure but I'm sure cons-pairs are pretty hard to beat for simplicity.

You could make up other data structures that are also universal. Cons pairs are simple in that they only have 2 parts so, for example, you could use arrays:

1. nil is an array
2. if A1, A2, ..., An are arrays, then [A1, A2, ..., An] is an array

Those are interesting too. They give you a naive performance model that is often more accurate "locally" (in small computations) but that is obviously less accurate asymptotically.
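
For comparison, a minimal sketch of the array universe in the same style (again, the names are mine):

    type arr = Arr of arr list   (* nil is Arr [] *)

    (* The three-element array [nil, nil, [nil]]: *)
    let example = Arr [Arr []; Arr []; Arr [Arr []]]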

-t

Incompleteness

Type system design usually involves keeping a delicate balance between expressiveness and decidability. It is impossible to have a system that is expressive enough to subsume all possible others while still staying decidable. Basically, this is a variation of Gödel's incompleteness result.

How about a type system prototyping system...

is there any type system in which all type systems can be represented?

Yes -- any programming language sufficient to implement a type system in, because we would be implementing the meta-level of all type systems: the type system axioms.

. . .

Preferably, use a language like Lua or Lisp/Scheme that allows you to do what Thomas has mentioned. With that, start with the axiomatic functionality of all type systems:

  • Type Inference
  • Type Checking
  • Type Coercion

Then for each, bootstrap a function within the language you are using:

  • ValidTypesOf(D) - given structured data D, returns the set of all plausible types for D based on D's structure and state. This is the type-inference function.
  • IsOfType(D, T) - type-checks D for compatibility with T; returns a truth value. The type-checking function.
  • AsType(D, T) - returns data D coerced to type T, where T is a type within the set returned by ValidTypesOf(D). This is the type-coercion function.

I suppose we are saying that the set of valid types T of any data D is a function of D's internal data and structure.

Types simply become value patterns and structural patterns found in data, with the type of data being inferred from the data's structure and/or state. In other words, types are "structured-data patterns".
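
As a concrete illustration of those three functions, here is a toy OCaml sketch in which types are exactly such structural patterns. The data universe, the type universe, and all the names are hypothetical, chosen only to mirror the bullets above:

    type datum = Int of int | Str of string | Pair of datum * datum

    type ty = TAny | TInt | TStr | TPair of ty * ty

    (* ValidTypesOf: enumerate every pattern the datum's structure satisfies. *)
    let rec valid_types_of = function
      | Int _ -> [TAny; TInt]
      | Str _ -> [TAny; TStr]
      | Pair (a, b) ->
          TAny ::
          List.concat_map
            (fun ta -> List.map (fun tb -> TPair (ta, tb)) (valid_types_of b))
            (valid_types_of a)

    (* IsOfType: membership in the inferred set. *)
    let is_of_type d t = List.mem t (valid_types_of d)

    (* AsType: here only the trivial coercion, guarded by the check above. *)
    let as_type d t = if is_of_type d t then Some d else None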

. . .

Need complete determinism? Just use a tag within each data structure that stores the default, intentional type of given structured data.

Once More, With Feeling

Please have a look at Kanren and one of its examples, an implementation of the Hindley-Milner type inference relation (so it can serve either as a type inferencer or type checker).

See also TinkerType: A Language for Playing with Formal Systems. At one point the code for TinkerType was available, but I'll be darned if I can find it now.

I would also look at Ott: Effective Tool Support for the Working Semanticist.

Finally, any of the good theorem-proving-eh-functional-logical-programming-languages will let you develop extremely powerful type systems and play with them. I like Coq; other folks seem to like Twelf a lot. I also tend to think that Pure Type Systems in Rewriting Logic gets overlooked, for reasons that I can't fathom -- I'd be pursuing it personally if Coq didn't have the Coq'Art book and I weren't such a newbie to the entire domain.

Interesting

I forgot that logic programming allows one to reuse bidirectional relations, which one could then define between data's type and data's value and structure, resulting in all the functionality of a type system for less coding. Kanren's use of the Hindley-Milner relation makes a point along those lines.

TinkerType availability

I'm looking for a tool to help me with this stuff (both doing and learning-how-to-do), so I did some googling. Source code is linked on Pierce's bibliography under the citation for the paper. Looks completely and totally unmaintained....

Good Enough

What maintenance would you recommend for the tool that was used to construct the various typecheckers found in TAPL, and probably to check the work in ATTAPL?

Well ...

... for someone just picking it up: the only documentation aside from the journal papers is a 631-byte readme that links to a dead website; there are no example type systems at all (they are purported to be on said dead site), nor documentation of the syntax of inputs. It has no obvious license, which is not important for self-education and tinkering, but pretty important if you wanted to redistribute an output type system as part of your own stuff.

It's probably a good tool for the experienced practitioner, particularly those who wrote it; for me (someone who's new to the concepts looking to learn, eventually to build systems for my own use), it will require some code spelunking if I decide to take it up.

The Undead website

I was able to get most of the pieces from archive.org's latest copy of that (un)dead website.

Working on it...

Just a little shameless self-promotion here: I'm currently working on a different approach to the same kind of problem, i.e. experimenting with type systems in an existing language. My tool, JStify, will be a toolbox designed to play with type systems (and other kinds of code-writing policy enforcement) for JavaScript 2, written in OCaml.

Out of the box, it will come with nothing but the ability to parse JavaScript 2, a general-purpose annotation mechanism (called "marginalia"), and name resolution. I hope that we will be able to progressively add more and more refined type checks, starting with JavaScript 2's static type system.

So, while this tool is not a universal type checker -- indeed, it allows writing pretty stupid, unsound analyses -- it is meant as a tool for experimenting with type systems, testing them before actually proving them.

Conclusion: bootstrapped type system in a Turing-complete language

I was going to say this before...before I realized that this encompasses everything!

Assumption: all data processed by the type system is stored via a single universal data structure, whether that be lists, hash tables, or bits.

  • Because the language is Turing-complete, the type system can be made Turing-complete.
  • A dependent type system is analogous to coding functions that return constructed data, where the programmer can completely design the parameters to the function as parameters to a dependent type (see the sketch just after this list).
  • Any conceivable type system can therefore be implemented this way.
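
Here is a hedged OCaml sketch of the second bullet, with every name hypothetical: the "dependent" type of vectors of a given length is literally the constructed data returned by an ordinary function, and checking against it is just another library routine.

    type ty = TInt | TVec of int * ty

    (* The "dependent type" vec n elem is just constructed data. *)
    let vec n elem = TVec (n, elem)

    type datum = Int of int | Vec of datum list

    let rec is_of_type d t =
      match d, t with
      | Int _, TInt -> true
      | Vec xs, TVec (n, elem) ->
          List.length xs = n && List.for_all (fun x -> is_of_type x elem) xs
      | _, _ -> false

    (* is_of_type (Vec [Int 1; Int 2]) (vec 2 TInt) evaluates to true. *)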

By bootstrapping a type system, we completely expose its semantics for definition, thereby making this a universal type system -- a language in which all type systems can be described/implemented.

If Coq and other such proof assistants are Turing-complete, then they obviously fit this conclusion.

Here's your grand unified theory, AFAIK.

Turing-complete type systems

Note that Turing-complete type systems (while good and useful) don't guarantee the programmer the ability to extend the language's type system naturally with new features or constructs.

While they enable a programmer to write a typechecker for a more powerful type system, and run it at compile-time in the source language, you would need the source compiler to expose some meta-representation of the program at compile-time, with some sort of compositional semantics, in order to actually extend the base type system usefully. This can be done, but isn't automatic. For example, in C++, templates make the type system Turing-complete, but the language lacks the ability to, e.g., define templates taking template abstractions as parameters, so exposing certain abstractions requires the user to manually create metadata containers to describe types, rather than just using them directly.

And, even if the compiler did expose a meta-representation of the language to its Turing-complete typechecker, naive introspection of that representation would break various theoretical properties of the unextended language -- such as subsumption and parametricity.

This isn't an argument against Turing-complete type systems, but rather a claim that they aren't the total solution for type system extension.

My idea is not language extension...

The idea is that any type system can be implemented in any Turing-complete language simply as a library of coded routines -- that is what I mean by "bootstrapped". Sorry for the ambiguity. I have not made an attempt to hide a type system in the language, simply to implement one, regardless of what language uses it. The point is that this will satisfy the criteria of being "universal" in the sense that all type systems can be implemented this way. My point is simply a no-brainer, I suppose, but it is a starting point for realizing an ideal.

Universal Type System

The set of typing rules in a type system is equivalent to the set of axioms in a logic. Therefore, there is a direct translation of Gödel's incompleteness theorem, saying: Any sufficiently powerful type system is either unsound (all types are equivalent), or incomplete (there exist programs whose correctness can't be proven or disproven within the type system).

So, we can't hope to construct one sound type system that contains all other type systems.

However, we can certainly construct a type system that can represent a great number of existing popular type systems. Much of the practical work in type theory consists of showing that one type system contains another in this way, and this can be directly applied in representing types from different programming languages within a single type system.

So, one can make great progress in unifying type systems, though it's not possible to "finish" the task.

Fortunately, incompleteness generally doesn't manifest itself when typechecking programs a person would ordinarily write. Incompleteness arises from tricky arguments involving infinities and non-inductive recursion, which require the introduction of new axioms to prove or refute.

The logic analogy shows that this shouldn't be surprising: 2500 years passed from the time philosophers began stating recognizable mathematical axioms and proving theorems, to the time mathematicians realized that all such axiom systems were incomplete.

The halting problem for types

The set of typing rules in a type system is equivalent to the set of axioms in a logic. Therefore, there is a direct translation of Gödel's incompleteness theorem, saying: Any sufficiently powerful type system is either unsound (all types are equivalent), or incomplete (there exist programs whose correctness can't be proven or disproven within the type system).

Would the halting problem be another way to look at it? A sufficiently complicated type system would be Turing-complete (i.e., a type becomes a chunk of code which says whether or not a value belongs to the type); at that point, you can have type checks which do not halt.
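
For example (a sketch with invented names): once a type is an arbitrary membership predicate, nothing stops the membership test itself from diverging.

    (* A "type" as an arbitrary membership predicate over values. *)
    type 'a ty = 'a -> bool

    let even : int ty = fun n -> n mod 2 = 0

    (* A pathological type: its membership test never halts on 0. *)
    let rec bad : int ty = fun n -> if n = 0 then bad n else true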

So, one can make great progress in unifying type systems, though it's not possible to "finish" the task.

Job security. :-)

Qi typing

Not sure how it fits into the discussion of Universal Type Systems, but Qi typing is Turing-complete. Most language designers think that's not a good thing, but the Qi group contends it makes for the most powerful type system possible.

Incompleteness vs The Halting Problem

That's a great question. Does anybody know the answer?

Gödel showed that given any axiom system as powerful as ZF, there exist statements which can neither be proven nor refuted.

Turing showed that given any computer as powerful as a Turing Machine, there exist programs whose termination can neither be proven nor refuted.

I think the answer is: There exist type systems as powerful as ZF which are not Turing-complete, so Gödel's theorem gives us insight into systems where Turing's theorem is inapplicable. We can't construct a model of a Turing machine in ZF, because the axiom of comprehension is insufficiently powerful to represent general recursion. But on a Turing machine, we can write a prover for ZF that terminates on any statement that is provable or refutable -- though on independent statements it might not halt.

ZF is too big

Does anybody know the answer?

We talked about this quite a bit in this thread.

ZF is more powerful than a Turing-complete (i.e. effective/computable) system, since the set of all total functions from N to itself is a valid entity in ZF, thanks to the Powerset Axiom plus the Axiom of Infinity, whereas only countably many such functions can be represented computably.

Since a Turing-complete system encodes arithmetic, it is subject to Gödel in the direct sense, but we can also take Gödel as a more general statement about systems with enough power to make certain kinds of self-reference, and the Halting Problem can be seen as an example of this larger principle as well.

ZF Turing Complete?

I don't think it's fair to say that ZF is more powerful than a Turing complete system simply because it has axioms that guarantee the existence of objects that _it_ considers large sets. We could consistently extend it with an axiom guaranteeing purple unicorns, after all ;). Regardless of what it claims (under some interpretations), ZF is a first order theory with a countable model. However...

Tim wrote:

We can't construct a model of a Turing machine in ZF, because the axiom of comprehension is insufficiently powerful to represent general recursion.

Is this right? What is a model of a Turing machine here [*]? It seems to me that the transition rules for Turing machines can be encoded in simple ZF arithmetic. (I haven't worked through the details - this is just intuition.) Then we can encode the question of whether the machine terminates as "does there exist an N for which there is a sequence of N valid transitions ending with [program]?"
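
In (my own, hedged) notation, that encoding of the question would be something like:

    \[
      \mathrm{Halts}(P) \iff \exists n\,\exists s_0,\dots,s_n.\;
        s_0 = \mathrm{init}(P) \;\wedge\;
        (\forall i < n.\ \mathrm{step}(s_i, s_{i+1})) \;\wedge\;
        \mathrm{halted}(s_n)
    \]

where init, step, and halted are the machine's initial-configuration, transition, and halting-configuration predicates, each plausibly encodable in arithmetic.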

You can then demonstrate the incompleteness of ZF by writing Tim's Turing program that enumerates ZF proofs looking for a proof of termination of some given program. If ZF were complete then we would always either find a proof of "[program] terminates" or "[program] doesn't terminate" and so we could solve the halting problem. But we can't, so it isn't.

[*] It's true that there is no way to model only proper Turing machines in ZF. The ability to assert the termination of a function that decrements its argument until it reaches zero is enough to pin down the naturals, so we're necessarily going to have to admit some non-standard models in a first order theory. One might hope, though, to produce an encoding for which all and only the terminating programs can be proved to terminate (leaving some non-terminating programs that cannot be proved not to terminate). This I suspect can be done.

It's hard out there for a Computationalist

I don't think it's fair to say that ZF is more powerful than a Turing complete system

Let me provide a different "proof" that ZF is beyond Turing-completeness. In ZF, you can iterate completely over an infinity (at the very least N); in a Turing-complete system any such iteration is non-terminating, and can at best be finitely approximated.

Explicit recursion is only necessary when you need to finitely represent infinities: if you already have manifest infinities, you are already more powerful than a computational system, even if you explicitly eliminate all recursion. (Which ZF doesn't really do, but that is another story...)

ZF is a first order theory with a countable model.

For some interpretations of both "first order" and "model", this is patently false. What are you thinking of here? (I have some idea, but since I want to stick to computational interpretations rather than pure-mathematical ones, best to get them in the open.)

IANAL, but...

(that's logician)

In ZF, you can iterate completely over an infinity (at the very least N); in a Turing-complete system any such iteration is non-terminating, and can at best be finitely approximated.

I have the same objection to this that I had before. I don't think it's fair to attribute computational power to a logic on the basis of what the logic is supposed to be asserting under some interpretation.

The Wikipedia page on ZF links to what I mean by first order. In the first order model I had in mind, each "ZF set" under discussion is interpreted as an element of some set, and containment is a relation on this set, etc. But there is no requirement that there be any deeper structure to the model - the "ZF sets" don't themselves have to be interpreted as "real sets" at the meta-level, nor does the containment relation need to correspond to containment at the meta level. It's certainly possible to insist on a stronger model, but in some sense I think that's cheating - you're not fairly assessing the power of the logical system. Consider a logical system with no inference rules and a single axiom "exists x. I(x)" that we interpret to mean "there exists an element x which is an infinite set." How powerful do you consider this system?

One way it would make sense to ask about the strength of ZF as a computation scheme is to ask what ZF can compute. For example we can define that a function f :: N -> N is computable in ZF if for every n there exists an m such that "f(n) = m" is provable. It's clear that in this sense a Turing machine is at least as powerful as ZF, since it could always just simulate ZF to define functions. Whether ZF is as powerful as a Turing machine here isn't super obvious to me, but I would again suspect it to be. I'd think that it should be possible in ZF to prove that Turing machines under the encoding of my previous post always produce a unique answer when they terminate, and thus define functions.
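
Formalizing that definition (my notation, not standard):

    \[
      f \text{ is ZF-computable} \;:\Longleftrightarrow\;
      \forall n \in \mathbb{N}.\ \exists m \in \mathbb{N}.\;
      \mathrm{ZF} \vdash \text{``} f(n) = m \text{''}
    \]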

(In retrospect it would probably be easier to define computability in terms of relations on N instead of functions N -> N).

ZF as a type system

Thinking about it a little more I think the situation might be this:

- ZF is strong enough to prove that programs terminate on a particular input if they do. Consideration of this fact in light of the halting problem leads to a proof of incompleteness as above.

- But to consider ZF as a type system, we're really more interested in the programs that ZF can prove to terminate on all inputs. There is another incompleteness result to prove here - that there are programs that terminate on all inputs (and thus ZF can prove termination on any particular input) but for which ZF cannot prove termination on all inputs.

It seems like a way to prove this would be to construct T(P,n), a program that terminates if and only if program P (not requiring input) has not terminated on or before n steps. For a program that doesn't terminate, ZF should be able to prove that T(P,n) terminates for any particular n by working through the steps. But if you can prove T(P,n) terminates for all n, then you have proven P does not terminate. Choose P to be one of the programs that does not terminate but cannot be proven not to terminate, and I think you have your example.
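
A sketch of that construction in OCaml, where runs_within is an assumed stub for a step-bounded interpreter (clearly computable, but not written out here):

    type program = string   (* any concrete encoding of programs would do *)

    (* Assumed stub: true iff p halts within n steps of a step-by-step run. *)
    let runs_within (_p : program) (_n : int) : bool =
      failwith "step-bounded interpreter goes here"

    let rec loop_forever () = loop_forever ()

    (* t p n terminates iff p has NOT halted on or before n steps. *)
    let t p n = if runs_within p n then loop_forever () else ()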

But it does seem that all of this can be made to fall out of the Halting Problem result.

A little Wikipedia is a dangerous thing... ;-)

One way it would make sense to ask about the strength of ZF as a computation scheme is to ask what ZF can compute

Asking what ZF can compute (for a very general notion of compute) is essentially the same as asking what entities can be constructed from its axioms.

I think what you don't realize is that the thing you are calling ZF is not ZF: you seem to be referring to a computable or finitary fragment of ZF, which is not the same thing as ZF. (For example, it necessarily would not be closed over the usual axioms.)

This is the kind of situation where mixing pure math and computer science without being aware of these distinctions gets people in trouble.

Just wait until you cross paths with a big Wikipedia

Note: My background in this area is from a math POV, and I only skimmed the Wikipedia page to verify it would provide a reference that supported my view. In math circles, I don't think the assertion "ZF is a first order theory" is controversial, so I wanted to provide a reference.

Asking what ZF can compute (for a very general notion of compute) is essentially the same as asking what entities can be constructed from its axioms.

After I used the phrase "what ZF can compute" I proceeded to informally define what I meant by that. I'm not aware of the formal definition of what it means for a logic to compute anything.

I think what you don't realize is that the thing you are calling ZF is not ZF: you seem to be referring to a computable or finitary fragment of ZF, which is not the same thing as ZF. (For example, it necessarily would not be closed over the usual axioms.)

Hmm, I may be abusing terms like 'compute', but I'm somewhat confident that I'm using ZF in a manner consistent with its meaning to most mathematicians. In my world view, ZF is a formal system, not a structure that satisfies the axioms (and talking about it being 'closed over axioms' doesn't really make sense).

Putting terminology aside, at the start this discussion concerned how powerful ZF would be as a type system. From that perspective, ZF asserting the existence of gigantic ordinals that support transfinite induction (ZFC*) is somewhat like it asserting the existence of God or purple unicorns, except to the extent those constructions help me figure out what my program will do [*]. For that reason, I think the approach to computability I was using is relevant to the conversation.

[*] I'm going blank coming up with an example where the use of infinite sets helps prove a theorem of Arithmetic that could not be proved without their use. Anyone have an example?

Goodstein's Theorem

Goodstein's theorem is independent of the axioms of Peano arithmetic but provable in ZFC; the proof relies on transfinite induction over the infinite ordinals below ε₀.

One point you have alluded to but haven't made explicit is that ZFC, for all its talk of infinite sets and the axiom of choice, has countable models. I think the elements of that model are just equivalence classes of predicates that denote unique sets (not in any kind of semantic sense that presupposes a universe of sets, but in the formal sense that P(x) /\ P(y) -> x=y is provable).

Interesting - thanks for the reference!

FYI I did mention that ZF has a countable model earlier in the thread :)

To you from failing hands we throw

In my world view, ZF is a formal system, not a structure that satisfies the axioms (and talking about it being 'closed over axioms' doesn't really make sense).

Perhaps we will have to agree to disagree, since for me "a structure that satisfies axioms" is pretty close to the definition of a formal system.

If you think ZF as a type system will help you understand computation, by all means see where it will take you.

Just know that a lot of work has been done coming up with other set theories and type systems exactly under the premise that ZF was too powerful for constructive and computational purposes.

I'm (dis) agreeable with that

In the terms I'm familiar with, the formal system is the axioms + inference rules, and there can be a number of structures that model it. Structure is another term of art here.

For the record, I don't think ZF is the ideal formal system for reasoning about programs, but I found the thread interesting so I jumped in :).

splay trees.. yeah, splay trees

Everyone is answering with stuff from "type theory". That seems slightly off the mark. He isn't asking for a universal theory of types. He's asking for a universal type.

Cons pairs are one (very good) universal data type, and you can see that by thinking about combinators and splay trees and physical machines all at the same time.

Consider the set of all finite cons-pair trees over the starter-set {nil}.

Let us identify the head forms of all such trees, to some depth, with some set of primitive combinators. We're already well on our way to computation because, in our definition of "head forms" ("what the tree looks like near the root"), we've already introduced variables. The name of a head form might look like "(X . (3 . *))", meaning a tree with the fixed value X as left child, and as the right child a subtree. That right-child subtree has a fixed 3 as left child, and any value at all as the right child. That "any value at all" is a variable name -- and there's yr lambda, right there.

Combinators in this system can arbitrarily rewrite the head form, placing the variable values wherever they like, but not "reaching inside" the subtrees bound to variables. A combinator can construct any fixed, finite number of new nodes during this rewrite (that is, the definition of the combinator must imply a fixed upper bound on the number of new nodes that will be constructed). The rearrangement of the head form can be computed by any function whose specification guarantees termination in at most some fixed number of steps. The classic S combinator is an example: "(((S . *x) . *y) . *z) -> ((*x . *z) . (*y . *z))".
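
A hedged OCaml sketch of that rule (I add an Atom constructor purely so combinator names are readable; the original universe needs only nil and pairs):

    type tree = Nil | Atom of string | Pair of tree * tree

    (* (((S . x) . y) . z)  ->  ((x . z) . (y . z)) *)
    let reduce_s = function
      | Pair (Pair (Pair (Atom "S", x), y), z) ->
          Some (Pair (Pair (x, z), Pair (y, z)))
      | _ -> None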

A natural cost model falls out of this: The reduction of any head form by evaluation of its combinator rule costs at least J steps where J is the size of the head form pattern naming the combinator. The rearrangement of the head form might raise or lower the depth of variable elements, copying them any number of times. The rule can construct any number of elements. In the example of the classic S combinator, above, the subtree *z is copied twice and the subtrees *x and *y each change depth by 1. If we take the size of the S combinator's name ("(((S . *x) . *y) . *z)") to be 7, then with two changes of depth and one copy, the S combinator has a cost of 10 steps.

This cost model is physically realistic. All physical representations of data are essentially tree-like (e.g., CPU registers at the root, then caches, then main memory, etc.). The cost of resolving the address of some piece of data (a "variable name" in our combinator system) is, indeed, log n in the number of values addressable by a name of similar length. Raising and lowering the depth of subtrees closely corresponds to moving data between the layers of a memory hierarchy and, as our cost model predicts, the cost of bringing in data from the nth order of magnitude of addressability is, again, log n.

Type theory is about something related but different. If you picked my cons-pair model of computation (combinators + cost) as foundational, then you could say that type theory focuses on the combinators part and tries to say as much as can be said about the divergence of computations based on the finitely computable (hence "static") properties of a tree.

But, "type" in the sense of "data structure" -- which is again I think the original question (keyword, "represented"): trees are a good answer.

-t

Perhaps tangential...

I don't think I agree with you about the original question. In fact, I think Tim Sweeney hits the nail right on the head in his comment above, and I really have nothing to add to that.

However, regarding the system you're presenting here: if I squint a little, this looks a lot like the type system Chris Diggins proposed for his Cat language. I find that interesting...

Now definitely tangential ...

The relationship with Cat makes sense because the Cat type-system can be easily understood in terms of the composition of functions on tail-polymorphic (i.e. row-polymorphic) lists.

Me too post

I'm with Tim Sweeney as well since the OP said "type system", a rather unambiguous term imho, and in this context "to be represented" could be read as "to be contained".

Type systems are computations.

Type systems are computations; but there is no universal computation that can prove the correctness of all other computations, and therefore there is no universal type system (I am not a mathematician, so I apologize if my comment is not precise enough).

Still, I think that Turing-complete type systems are more useful than decidable type systems, for the simple reason that there is a number of bugs that they can catch that decidable type systems cannot.

Now you have my interest...

Being a novice in formal type system theory, am I on the right track in guessing that these Turing-complete type systems would catch bugs like deadlocks in concurrency, infinite loops, circular-linked-list iteration, or stack overflows -- specifically, bugs that the halting problem prevents a decidable type system from checking, since that check might never complete? Is this right?

. . .

Type systems are computations

I see what you mean -- my comment earlier, by definition, reflected that thought: type systems are implemented in programming languages as computations, therefore they are, essentially, computations.

Not quite

No, the halting problem says that solving some of those problems is at best semi-decidable and therefore no decidable type system can catch them all while not eliminating valid programs. If the type system's decidable then given enough resources (and this'll be a finite amount) the checking problem will complete.

Intersection types

A full answer to the question should explain intersection types, whose inhabitation relation is semidecidable. While some attempt to marry these type systems with regular types has been carried out by folks interested in control flow analysis, I don't think we can be said to know much about type systems that generalise both HM and intersection types.

Thank you all!

Thanks, everybody. I concur with Andreas Rossberg's and Tim Sweeney's arguments.

A type system will need to balance expressiveness and decidability. Of course, it may forgo either of them, but it can't have both!