type theory about programming language?

I have a question for which I expect a good answer from Matt M, Andreas Rossberg, or Jules Jacobs, should any of these folks feel inclined to reply, because they clearly know a lot about type systems.

Is type theory math? How is discussing type theory also discussion of programming language? When analyzing types in a specific PL, I can see type theory informing analysis of that PL. But when talking about type systems in the abstract, apart from a PL, it just sounds like philosophy of algebra to me. I can't find the PL angle unless the idea is that all programming languages should be about math. What I'm trying to do is put type theory discussion in context. What is it good for? (I hope asking a simple brass-tacks question is not rude.)

I don't mind getting beat up as long as it's educational. :-) But I'm trying to express curiosity rather than a challenging attitude. Edit: I'm going from my personal reaction to type theory discussions, rather than anything I read. I hope to learn something, rather than push my impression that type theory is just math. So a reasoned contradiction would be interesting.

Yes

Type theory is math, but it is sometimes also applicable to programming languages.

*ducks*

You took my slur?

I reserve the term "mathematical philosophy" for category theory.

Type systems are there to detect bugs, optimize programs, and help with incremental compilation.

I have to admit I don't study more elaborate views.

Some of this

Some of this question of whether types are math came out in the discussion of my post Where do types come from?; you might find some parts of that of interest.

A few thoughts: * The

A few thoughts:

* The Curry-Howard Correspondence shows that programming languages and formal deduction systems (ie. mathematical logic) are the same thing (see the sketch after this list). This doesn't help us with large, complex, informally-defined languages like C++, since the formal logic they correspond to would likely be inconsistent and unusable. However, for simple languages like Lambda Calculus and its descendants we gain a new perspective on our code, eg. we can formally prove properties about it. Type Theory is a group of related formal systems which correspond to variations of Lambda Calculus.
* Type Theory is one way to formulate mathematical foundations, ie. it can be used as a "machine code" for Maths. The most widely-used foundation is Set Theory, but the programming language it corresponds to is incomputable, which makes it difficult to check proofs and compose them together. Type Theory is easy to compute, since it's just a form of Lambda Calculus. It is known as a "constructive" theory, since we can always construct (compute) the objects being studied.
* Type Theory turns out to be a very rich way of formulating maths (see Homotopy Type Theory, for example). The insights gained by doing maths this way can be translated directly into new programs and programming languages. For example, Dan Licata has used Type Theory to write a Darcs-like version control system http://www.reddit.com/r/haskell/comments/1vlc9s/darcs_as_a_higher_inductive_type/
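
To make the first point concrete, here is a minimal Haskell sketch (illustrative only, not from any particular library): read each type as a proposition, and a total term of that type as its proof.

proj1 :: (a, b) -> a -- proposition: A and B implies A
proj1 (x, _) = x -- proof: take the first component

compose :: (a -> b) -> (b -> c) -> (a -> c) -- proposition: implication chains
compose f g = g . f -- proof: function composition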

I think what I'm trying to say is that it's not so much a case of "the idea is that all programming languages should be about math", it's more like 'programming languages *are* math, whether we like it or not, so how can we use that to our advantage?'.

programming languages are math

programming languages *are* math, whether we like it or not, so how can we use that to our advantage?

I agree with this wholeheartedly. Which I suppose is interesting, given my distrust of typing. It's long seemed to me that the skills to be really good at constructing mathematical proofs, and the skills to be really good at programming, are the same. There are interesting differences in the flaws of proofs versus programs, though. The consequences of an error in a proof are, at least sometimes, more... analog. And whereas the dominant means of checking programs is to run them, the dominant means of checking proofs is to desk-check them (and I won't even start to explore all the facets of that, or I'd still be writing this next week).

If PL is math then I guess I

If PL is math then I guess I am math also. Note that neither of these statements make any sense. Math is just a tool, not an end in itself.

If PL is math then I guess I

If PL is math then I guess I am math also. Note that neither of these statements make any sense. Math is just a tool, not an end in itself.

It makes perfect sense if you're Max Tegmark!

Maths is absolute.

Whereas the laws of physics depend on the nature of the universe, the laws of mathematics are invariant. So maths is not just a tool; the philosophical nature of maths is far more profound.

As such, programming languages in the abstract have nothing to do with the hardware that executes them, but are in fact a branch of mathematics.

We don't live in math,

We don't live in math, instead we use math to describe the universe we live in. As an artificial construction, math may or may not describe that universe, which has many concepts that are not easily explained using math (like, say, computation..., unless your math has code).

Math is just a tool, not an

Math is just a tool, not an end in itself.

That depends on who you ask, a Pure Mathematician or an Applied Mathematician ;)

Good Point

I like this, but would point out the applied mathematician is always wrong. That is, any mathematical model of reality is by definition a model and inaccurate. The answers it gives may be close to reality, but are always wrong. The pure mathematician of course can be correct (by proof or by definition).

any mathematical model of

any mathematical model of reality is by definition a model and inaccurate

Interesting that you think so. Clearly any mathematical model of reality is by definition a model. I'm not sure why you'd figure a mathematical model is necessarily inaccurate. Or did you mean, by definition inaccurate?

Reality has veto power over

Reality has veto power over all of our models. If the two disagree, it is the model which is wrong. Since models are constructed by induction, they are necessarily incomplete; if all light emitted after 2015 is blue, reality isn't wrong: Maxwell's equations are.

Theoretically perfect models only exist in theory ;)

Mathematical entities, on the other hand, can be modelled exactly using deduction, since we get to define them.

Models are inaccurate

Quoting Einstein: "As far as the laws of mathematics refer to reality, they are not certain; and as far as they are certain, they do not refer to reality."

Any model can always be refined to be more accurate by incorporating more detail from reality. To accurately model reality would require knowing the unmeasurable quantum states of the whole universe. So I guess I mean by definition.

We cannot even really count. The assumption that there are 3 sheep is based on an abstract concept of sheepness that does not exist in reality. Each sheep is a unique spacetime event and counting them ignores their inherent uniqueness.

To accurately model reality

To accurately model reality would require knowing the unmeasurable quantum states of the whole universe.

I can accurately model a specific x86 processor, but that doesn't tell me the precise memory contents of my desktop as I post this message to LtU. I don't see why the latter would make my model less accurate.

Not Accurate

Cosmic rays, thermal runaway, latchup, signal timings, maximum clock rate, power consumption, and electromigration all say your model is inaccurate.

I think you're missing the

I think you're missing the thrust of the argument: just because we don't know the precise values of a model's input parameters, doesn't make the model inaccurate.

Input parameters

We're now about a step and a half away from Bell's Inequality.

You quote Einstein on

That quote addresses certainty, not accuracy. What one means by accurate seems the key to this. I'd think one might be uncertain about whether or not a model is accurate, ergo uncertain about whether it's inaccurate.

Uncertainty

You cannot be certain that an apparently accurate (note, not correct) model will still be accurate tomorrow. An example of this is component failure, resulting in an unpredicted rapid deviation in the accuracy of the model.

I would prefer the words accurate and inaccurate when referring to models and reality. I think only proofs (and definitions) can be "correct".

Other people may be less strict about these things, that's fine by me, and I have certainly been less strict about definitions when convenient.

You cannot be certain that

You cannot be certain that an apparently accurate (note, not correct) model will still be accurate tomorrow.

That seems a safe statement, supporting a claim that one can never be certain that a model is accurate. But afaics, nobody here is claiming that one can be certain a model is accurate. The claim in question is that all models are inaccurate, for which I see no support.

I would prefer the words accurate and inaccurate when referring to models and reality.

I don't mind if you use those words, but I wish I knew what you mean by them.

wrong about wrong

If a model did not abstract from reality it would not be a model, it would be the thing being modelled. The outputs a model gives are not "always wrong" unless you define "wrong" as "at all different from reality". But that is, of course, an overly strict definition of the equivalence relation between real phenomena and model outputs. The purpose and power of abstraction is to elide irrelevant details such that questions otherwise too hard or computationally intractable to answer can be answered. The key question with a model is "are its outputs sufficiently close to reality to be useful?" (where sufficiency depends on the problem at hand). If the answer to that question is "yes", then the outputs generated by the model are "right". An applied mathematician, like an engineer, is interested in "good enough" answers. A pure mathematician may be "right" all the time, but being right about something disconnected from reality is not particularly useful for real world applications.

Wrong can still be useful

I totally agree. But the answers can diverge drastically. For example, radiation in space electronics can flip a bit, and you will get a completely wrong answer or a program crash. A highly accurate model today could be totally wrong tomorrow, as with grune: a colour that looks green today could suddenly be blue tomorrow. You won't know how badly wrong your model is until reality proves it wrong (this argument was already made by Chris above).

Not even wrong

I'm not really sure what point you're trying to make now. If SEUs or other bit flips are of concern in the system you're building, then they should be included in the model. If they're not relevant, then they can be abstracted out (note that SEUs are perfectly possible on earth as well as in space, they're just sufficiently rare that most people don't bother to model them).

As for "grune": the problem of induction is, of course, a problem. So what? A previously useful model only becomes wrong if (a) some new fact invalidating the abstraction in the model comes to light, and (b) the invalidated model can no longer be used. New information doesn't necessarily make a model "wrong", it may simply modify the contexts in which it can be used. Newtonian mechanics was not made absolutely "wrong" as a result of relativistic phenomena, and is still extremely useful for most problems in dynamics. It just happens to not be useful at relativistic speeds--it is the "wrong" model for those contexts.

Grune

The nature of grune is that Newtonian mechanics could stop working tonight at midnight. This was always going to happen, and is a fundamental property of the universe (a step function in quantum dynamics). The current model is wrong, for more than 50% of all time (assuming the universe has longer to go than has passed already).

The relativistic refinements to Newtonian mechanics are not an example of grune, unless you are suggesting relativistic effects did not exist before Einstein postulated them.

There are more things in heaven and earth, Horatio...

Again, the problem of induction is a known and acknowledged problem. But it's also (in the case of something like grune) more of a theoretical or philosophical problem than anything of practical import. If Newtonian mechanics suddenly stops working, we'll have much bigger problems than worrying about modelling accuracy. Attempting to construct a model that can account for your grune condition (for all possible midnight transitions in physical laws) would result in a needlessly complicated model. Not to mention all of the other "possible futures" we might need to consider (the universe ends tomorrow, water suddenly becomes carbon dioxide, the sun spontaneously extinguishes, sheep become carnivorous, my father's ghost suddenly appears to tell me that he was murdered by my uncle, etc.). Recall that the point of a model is to suppress irrelevant detail. Unless you can convince me that the grune transition is something I really need to consider as a possible future then it strikes me as an irrelevant detail that can be safely left out of whatever model I'm working with. The resulting model is not "wrong", it is simply only useful in contexts in which your theoretical grune transition has not occurred. Confronted with new information, we can construct a new model. Until then, it's not worth worrying about all of the possible but improbable futures.

And no, I'm not suggesting that relativistic effects did not exist before Einstein postulated them. Einstein would not have postulated anything if relativistic effects not explainable by the Newtonian model had not been observed. My Newtonian-Relativistic example was not intended as an example of "grune", but rather as an example of the much more common situation in which new data invalidates a model for use in certain contexts (without invalidating the model for all contexts).

I feel this mischaracterizes the problems of induction

I feel this mischaracterizes the problems of induction. I don't want to take this off-topic for LtU, so I'll just say that Hume's problems of induction were addressed by Solomonoff Induction, which depends heavily on computer science and so might be of interest to those here. Goodman's "new riddles of induction" were addressed by Quine. The above argument doesn't really represent Goodman's grue/bleen problem faithfully, which wasn't about natural laws changing suddenly, but about how observations support arbitrarily weird predicates like grue/bleen just as much as "natural predicates" like green/blue, in which case, how can we justify the use of green/blue over grue/bleen? Quine addressed this with "natural kinds", that represent actual properties of objects, and only natural kinds can appear in scientific propositions.

any mathematical model of

any mathematical model of reality is by definition a model and inaccurate

Well, we could be accurate but imprecise. (cf. accuracy and precision, fermi estimation.)

What does it mean to be 'right'? If I say there is more than one grain of sand on the beach, is that wrong or untrue? I think we can develop true mathematical models, despite the challenges presented by reality.

Relevantly, this also applies to type systems and abstractions.

If PL is math then I guess I

If PL is math then I guess I am math also. Note that neither of these statements make any sense.

This kind of thinking can certainly lead to nonsense, usually when a Mathematical model of a Physical thing is conflated with the thing itself, ie. the map is not the territory and I am not math.

However, in this case there is actually a choice of which is the map and which is the territory. I guess you think of programming languages as Physical artifacts (ie. a phenomenon which emerges when circuits execute compilers); in which case math is a useful map for studying the phenomena's terrain.

However, it is also possible to think of programming languages as Mathematical entities, of which real, physical software on real, physical machines is just an approximate model. Here our programs are a map of some unattainable, Platonic territory.

Your claim is perfectly valid in the former sense, where programming is not math, but I was speaking in the latter, where programming is math :)

If a PL was just a

If a PL was just a mathematical entity, then we could construct one to solve our well defined mathematical problems; but neither is the case. Our problems are not well defined using math alone, and so neither are PLs.

Coq?

Isn't that exactly what Coq and Twelf do?

Those are more like theorem

Those are more like theorem proving tools than programming languages. Useful, but not universally so.

Vague Objection

More like? You write programs in Coq. The program is the proof of the theorem. "more like" is not really an objection. They are programming languages, and even Haskell works like this:

f :: a -> a -- hypothesis: there exists a function that has type a -> a
f = \x -> x -- proof by concrete lambda calculus example.

All Coq and Twelf do is add a more sophisticated type system, and some meta programming tools to auto-generate common programs from types.

Your original statement was "If a PL was just a mathematical entity, then we could construct one to solve our well defined mathematical problems;" Well Coq and Twelf are proof by example, how useful they are as general purpose languages is not relevant.

To say "programming

To say "programming languages is math" is a strong universal statement. If you said something weaker like "some programming languages are math" (like say theorem proving systems), then of course, you just have to construct an example to prove the existential.

Even Haskell is not math. It has the capability to do math for sure, and its design is greatly influenced by math, but Haskell is a full on programming language with many design and implementation choices that are not mathematically justified, and it doesn't really make sense to think like that.

Mathematicians work at a very high level of abstraction. When we hit the real world, however, math is useful but by no means pervasive.

Programming languages are not the real world

Programming languages are abstract mathematical entities, they are not the real world. A business logic object modelling a sales workflow is not the sales workflow itself, it is a mathematical model. It may be written in a clunky, ill-defined, inconsistent mathematical modelling language, but mathematical model it is none the less.

programming languages are

programming languages are used to write real programs that run in the real world. If you think computation is just math...well, that is very Stephen Wolfram of you, but in my mind, this is about as close to math as human behavior is.

Maths vs Engineering.

I agree with all your points, but disagree with your conclusion. Consider a stock control system: the program is a pure mathematical entity, with a finite number of states a given stock item can be in. Reality is continuous, and there are an infinite number of possible states the item can be in (left on the radiator by the warehouse manager's desk).

The program is a mathematical model of reality, and the study of making useful models of reality is engineering. Maths is pure and absolute, reality is not, and to get the two to work together takes a lot of compromising.

Reality is continuous, and

Reality is continuous, and there are an infinite number of possible states the item can be in

That's debatable, but I still agree that math does not necessarily differ from reality, and that programming is a very weak form of proof search.

but Haskell is a full on

but Haskell is a full on programming language with many design and implementation choices that cannot be mathematically justified

What choices are you referring to? What would constitute a mathematical justification for these choices in your mind?

If a PL was just a

If a PL was just a mathematical entity, then we could construct one to solve our well defined mathematical problems

I disagree with this argument. The Mandelbrot Set is just a mathematical entity, but we cannot construct it (in a physical sense).

I used the word "Platonic" to hint at this; ie. programming languages can be thought of as perfect, idealised and unconstructable objects, in the same way as perfect spheres or number lines.

Also note that I wasn't making any claim to truth. I was distinguishing between two different, yet equally valid perspectives which, like most philosophy, only differ in their definition of words.

Wittgenstein

I like the idea that philosophy only exists due to the confusion of meaning caused when the same word is used to mean different things in more than one language-game.

In PL design this is something to be avoided.

Philosophy is

Philosophy is the study of stuff we haven't got a clue about. When any actual progress is made on such stuff, it ceases to be philosophy. Which is why the top-tier science degree is still called "Doctor of Philosophy". As time goes on, the average intractability of questions in philosophy increases, and the frequency of shifts out of philosophy decreases. (I've heard it suggested AI is to CS as philosophy is to science.)

Perhaps PL is philosophy, and just aspires to be math.

The rest of programming

The rest of programming language research is about as much math as type systems for programming are math. Both have math components (e.g. dynamic & static semantics) and non-math components (e.g. usability).

Curry-Howard Correspondence is Overrated

It isn't that useful in deriving type systems for programming languages. It leads to logic systems infecting programming language type systems. For instance, I have yet to be convinced that rank-2 types are what one wants; it seems to boil down to programmer directed type inference.

I've said this before, but

I've said this before, but — Imho Curry-Howard is also overrated on foundational insight. There's a deep foundational contrast between logic and programming, in that whereas it's a big deal in logic that, if A is the set of sets not containing themselves, then A both does and does not contain itself; it's not a big deal in programming that (define A (lambda (x) (not (x x)))) (A A) recurses infinitely. Since I don't see Curry-Howard providing insight into this contrast, I take Curry-Howard to be superficial.
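
For comparison, a minimal Haskell sketch of how a typed language reacts to such a term (the newtype U is a standard illustrative trick, not anything specific to Scheme):

-- Plain self-application, selfApply x = x x, is rejected outright: the occurs
-- check fails because x would need both type a and type a -> b. Wrapping the
-- recursion in an explicitly recursive type recovers the term:
newtype U = U (U -> U)
selfApply :: U -> U
selfApply w@(U f) = f w
-- (selfApply (U selfApply)) then loops forever, but only after the recursion
-- was declared in the type.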

EVERY LC TERM IS A PROGRAM!

I agree. Lisp corresponds somewhat to untyped LC. And I cannot emphasize the above observation enough:

Every lambda calculus term, with a rewrite strategy, is a program.

The Curry-Howard Correspondence is a worthless observation in that context. For programming languages you want type systems which accept plausible programs and reject implausible ones. I don't have the feeling that that notion corresponds to types being instances of higher-order logic formulae.

(Why would one like a type system for LISP? It'll just need to accept everything. Seems a bit superfluous to me.)

Compare and Contrast

Your conclusion is a bit ironic. Curry-Howard provides striking insight that those two examples are similar. Without similarities, you wouldn't have a reference point by which to observe a meaningful contrast.

Curry-Howard provides

Curry-Howard provides striking insight that those two examples are similar.

Hmm. My perception that they're similar is based on my common sense, owing nothing to Curry-Howard. If Curry-Howard is able to set up a correspondence between them, I wouldn't call it insightful since I could already see they were similar.

Whatever

John wrote a meaningless program in Lisp, and I am not even sure what it has to do with CH. You can either A) give it a type, or B) not give it a type.

Now what?

it's a big deal in logic

it's a big deal in logic that, if A is the set of sets not containing themselves, then A both does and does not contain itself; it's not a big deal in programming that (define A (lambda (x) (not (x x)))) (A A) recurses infinitely

It's certainly a big deal if you're trying to write useful programs, as useful programs terminate (or are productive when coinductive).

One is a motive for

One is a motive for some of the greatest mathematical minds of the first several decades of the twentieth century to undertake major projects rethinking the very foundations of their subject. The other is a motive for somebody writing a program with that function A in it to be sure not to call A with a parameter it doesn't know how to handle.

Sounds like a weak argument

Sounds like a weak argument to me. For all you know, that programmer is writing nuclear launch software, or emergency quarantine protocols for a deadly pathogen lab. The ongoing survival of the human race sounds pretty important.

It's not an argument at all.

It's not an argument at all. It's an attempted clarification of what point I was making — and apparently not a terribly successful one, as it seems you're missing my point (and, as the party attempting to explain what I mean, that's on me). There's a fundamental difference in scope involved. The possibility of constructing an antinomy by use of certain axioms called into question the use of those axioms for anything at all. Most programmers do not perceive the possibility of defining a function that recurses infinitely as having such broad consequences, but rather as indicating either that that function should not be defined, or that it should not be used in certain ways.

Type Theory is easy to compute, since it's just a form of LC

Right.

Has used Type Theory to write a Darcs-like version

Yah. This falls in the "If no-one knows what one's talking about then one can always pretend" category. (Pun intended.)

Change of mind: Type theory is philosophy

Guess y'all have just proven that with the above thread.

In sooth

Yea, verily. We seem also to be rather nicely demonstrating that programming language designers are philosophers. Considering our central concern is devising ways to specify how to do enormously complicated things, perhaps the philosophical bent oughtn't be surprising.

Two undefined terms and a question

Is type theory math?

Yeah well. Nobody really understands what type theory is, apart from that it has application domains, and nobody really understands what math is, apart from that you can create nice LaTeX over it.

So, of course, the above question results in philosophy. Philosophy is defined as: I state some indiscriminate bullshit, you state some indiscriminate bullshit, and then the knives come out.

Mind you. I do like philosophy for the war-faring attitude of those who partake in the narcissistic chicanery. It's so primal.

So yeah. Of course a question over two undefined concepts ends in philosophy.

philosophy runs close to the surface

The first time I had a philosophy class in college (long ago) I was surprised by things people said about their world views when asked a question not about easily checked immediate facts. Now I expect folks to say surprising things when asked open questions. It's why I ask. :-) I know if there's an answer I'll be surprised.

This time the opinion surprising me most was "programming is math." I experience some of programming as applied math, but not more than about 25% of the total. Other parts seem to fall under categories best described another way. Parts involving emergent chaos are hard to fit in nicely defined buckets. Diagnostics and operations research don't seem much like math. Several parts involving cooperation with others can be described by terms from humanities and psychology. Aiming to be understood doesn't seem much like math.

When folks want models with nice mathematical properties, what I miss most is a formal treatment of error bars: a quantifiable characterization of wrongness in actual model behavior. (I took a class in fuzzy logic once, but it was miserably about what you can still prove anyway, as opposed to dealing with shades of gray in correctness.)

What is math?

This time the opinion surprising me most was "programming is math."

Rys, IIRC you're typically concerned with low-level programming languages, so I'm not surprised that you would be skeptical of this connection. And yet, I'd wager that most programs you write are designed around preconditions, postconditions and invariants derived from inductive and deductive properties of an abstract model of your solution. Sounds mathy to me, even if it mostly happens in your head.

Math is also a collaborative process, so those humanities/psych terms you mention apply to math as well. The math publication and conference process seems to be all about aiming to be understood, so that's not a property exclusive to programming either.

Furthermore, math is increasingly beginning to include automated tools to generate and check proofs, and so is becoming more like programming, just as programming is including more sophisticated analyses so as to generate and check program proofs of ever greater sophistication. Perhaps they'll never meet in the middle, but it's clear there's commonality, at least in spirit.

In closing, whether programming is math depends on what we consider to be math, a question for which there's philosophical disagreement.

Binary Logic

Don't forget a CPU is built from binary logic gates. The CPU is built from maths, and performs mathematical calculations.

One could view the computer as the projection of maths into the real world (using IO devices).

Programming

I don't think the act of "programming" is maths. I would describe it as an engineering discipline.

I think there is some confusion between programming-language = math, which is that a programmer is building a mathematical model, and the act of programming = math.

I think there is some

I think there is some confusion between programming-language = math, which is that a programmer is building a mathematical model, and the act of programming = math.

Is writing the paper for publication still math? Or was only the preliminary work building the formal model the math? It seems "math" contains significant informal aspects used to communicate with other mathematicians. I don't think shoehorning "math" into the strictly formal part really captures the mathematical process.

A program is simply another communication medium for conveying mathematical models (albeit a crude one in many programming languages).

Taking a broad view of what

Taking a broad view of what math is, I would agree. But this "programming is just math" statement is often used to criticize those techniques that are not rigorous but still effective. So I would just avoid making any claims about what is or what is not math, as its meaning is just too vacuous.

I do not actually know much

Unlike Andreas Rossberg and Matt M I do not actually know much about type systems, but it seems to me that type systems are not that different from other aspects of programming language design in terms of whether they are math or not. Is the syntax of a programming language math? You could argue that it is because you can specify it using context-free grammars, which is math. You can specify type systems using sequents, which is math. But both also have non-mathy aspects: you want a syntax to be readable by a human and you want a type system to be usable by a human.

Perhaps type systems are a little more mathy than other aspects of programming language design, but it's not the case that type systems are math and the rest is not math. It's a difference in degree.

p.s. I think if you asked a mathematician about the math in type systems and in programming languages research in general he would say that most of it is utterly trivial math. Type soundness proofs are some of the dullest and least illuminating proofs in existence. From a PLT perspective this is a good thing because it means that your type system probably isn't very hard to understand.

maybe types just seem more mathy to me

Thanks for giving it a shot. Each language has a semantic notion of type for values, however represented, and these vary in how much a programmer can ignore them. Taken as a group, a language's types form a system, which might be inconsistent and ugly, or well-organized and elegant. Comparing type systems in different languages might be comparative typology. Analyzing type systems formally (another meta level removed) gives you theory of types in the abstract, which might not have much to do with a specific language, especially if its types are informal and vaguely chaotic as a system.

Anyway, I find value in your answer, though I might see more of a continuum from concrete types, to more abstract type systems, to still more abstract type theory. Somewhere on the continuum it becomes a lot more like math, and simultaneously less like the concrete types in a specific language. When we cut things into a map vs territory distinction, there's math in models of maps, but less so in pragmatic management of production systems built from buggy parts with (at best) weird specifications of component tolerances. We might get better specs deriving from efforts involving math applied to the maps. But grasp of those specs, in minds of programmers using them, is still going to matter.

There's a marginal return value issue at play in more than one place. I wish there was some formal treatment of the margin too when math is brought to bear in model design.

I started to reply, but

I started to reply, but didn't think I had enough interesting to add. One thing I could say is you'll find highly technical discussions in many fields that seem at first glance to be making mountains out of molehills. Usually there are good reasons (and some bad ones) for that.

I'm not sure I agree with your characterization of what people do as studying type systems "in the abstract". Most of the discussion in the thread that (I assume) prompted your initial post here was about a couple of specific type systems. These type systems were discussed somewhat abstractly, I guess, and with a math notation, but that's just in an effort to focus on the relevant with a compact notation.

In general, I don't know of many people whose work I would describe as studying type systems in the abstract. Self-described "type theorists" probably study a number of specific type systems, probably including dependent type theories. The reason for people here to be comparing and contrasting various type systems should be obvious: as language designers, we need to figure out which properties we want in our type systems.

And BTW, Andreas knows much more about type systems than I do.

Same here

Same here. It is a rather broad question.

I think my short answer would be a comparison to architecture: if somebody is designing a building, you'd expect that he has a good understanding of statics, and can do the math involved. You don't need to understand all that if you are just a tenant, but you sure wouldn't move into a house designed and built by people who didn't.

Similarly with languages. A proper language designer should know about statics (i.e., types and semantics), and be able to do the math (i.e., proofs). As a programmer, you don't have to care about all the details, but you don't want to write sensitive software in languages designed and implemented by people who didn't.

So, type structure is to languages what statics is to construction. And type theory is to type systems what math is to statics.

That's how it should be, anyway.

I completely agree!

That'll show those pesky uninformed non-academic language designers. Good for you!

"Hang from a tree!", I dare say. Let's get rid of C, C++, Java, Php, Python, Lua, and all other nuisance simpleton languages. (Does C# make it into that list?)

Hm

Lua is a bad example, as it was developed by people with academic experience in language design -- and it's actually quite a decent language.

I'm less familiar with the history of some of the other languages you mention, but I'm quite confident that many mistakes could have been avoided in Java, PHP and Python, had the designers had a serious conversation with an experienced language designer at the time.
Note that the resulting languages would probably have been relatively close to what came out, only with a few less quirks (for example, in Java: dynamic type checking on array assignment; Python: weird scoping rules; PHP: about everything).

So what?

Dude. Everybody has academic experience in language design. It's another non-term. We, again, could have another non-discussion about it.

Language comparisons usually boil down to taste. Java excels at some problems and is a pain for other problems. Personally, I kind-of hate it (after being forced to apply it to the wrong problem domain.)

I don't think OCaml could have taught the original Java that much in the way of language design, though. Nice, simple, clear design? KISS?

Weird scoping rules

Python: weird scoping rules

I agree that Python's scoping rules are tricky and confusing and error-prone, but honestly... what are the alternatives for a dynamic language that doesn't want to burden programmers with variable declaration?

JavaScript is even more error prone with its default global scope. Lua requires declaration of (local) variables. CoffeeScript handles scope automatically and is quite close to ideal, but some people complain about not being able to shadow global variables. Julia fixes this by allowing variables to be explicitly declared as local, but programmers can still accidentally modify global variables if they are not careful.

PEP 3104

In short: PEP 3104, that introduced the nonlocal construction, should never ever have happened.

My personal opinion is that requiring declaration of variables is the right solution. I understand that people may have different opinions (eg. CoffeeScript indeed has a radically opposed stance), but inside any given conceptual system there may be glitches that need to be fixed. nonlocal is such a glitch.

My point was that it would have been found and shot dead as soon as you tried to formalize Python's scoping rules (which is not all that difficult), instead of painfully waiting for examples to be found in the wild, and having to choose a subpar solution to preserve compatibility.

PS: maybe some people have a romantic picture of "involving academic language designers in Python's design" as something that would result in some grand variant of Python with delimited continuations and dependent types and academic publications (kind of what happened to Java generics). I was rather thinking in the small: a formalization is good at fixing the little mistakes that are bound to become pain points as the language gets used more widely. If you're a hobbyist language designer and you get three months' worth of a good "academic language design" graduate student, you shouldn't expect the former grand changes, but could get very good value out of the latter small changes.

"Automatic scoping" is a dead end

CoffeeScript handles scope automatically and is quite close to ideal

I think it's safe to say that there is no such thing as "automatically handled scope" that actually works, and in particular, scales. JavaScript is one of the more recent examples that relearned this lesson. (And I'd argue that CoffeeScript is worse than JavaScript wrt scoping.)

No alternative

honestly... what are the alternatives for a dynamic language that doesn't want to burden programmers with variable declaration?

I agree with gasche and Andreas. Honestly... there is no alternative. Automatic scoping is a mistake and a blunder. It is a bad design decision, plain and simple, and it should be avoided.

affordances vs math

The analogy with statics is interesting. I want to focus on how much math occupies attention of typical users vs experts designing systems and buildings. (So this is a disposable aside: math for statics is simple, requiring just vector algebra. In contrast, math for types seems more complex, perhaps because it's less geometric?)

And type theory is to type systems what math is to statics.

Thanks, I wish we all used analogies more often. (I don't though; why is that? Maybe because it always blows up in my face, by undermining whatever I wanted to discuss, because all attention goes to the analogy instead of my point.)

When a person walks into a building, math typically plays no role at all. (Unless you're the sort who argues a dog solves differential equations when catching frisbees.) It's all about affordances offered by various cues, many of which are standardized, e.g. doors are seldom surprising when a standard door role is intended. I seldom think about statics unless the wind blows really hard, or unless everyone goes outside on the balcony at the same time. Some folks never, ever think about statics, and they can still use buildings. I suspect tract housing developers don't think about statics either, and just build to code.

In contrast, programmers deal with math a lot, even if it's the ordinary stuff we do without thinking much, as naasking noted above (that reasoning about why any model works is mathy). Types potentially interfere with affordances in grasping why, what, and how (when coding) unless you happen to be an expert in the type system at hand. I'm not sure a programmer can use a language without grasping what types mean and how they get managed. So doing the math is required if you're coding, but not when you walk into a building. So the analogy has something subtly off there, if the point is that no one does the math except when designing the system.

Thanks for responding to my question. Jules and Matt gave you a perfect Strong Muldoon setup. (Google "glory road strong muldoon" for text.) I was kind of hoping you would defer to yet another person with stronger type system knowledge, just to be funny.

Reduced surface area

Programmers that use type systems need less maths than designers of type system that use type theory. The reduction in complexity from the designer to the user is maybe less striking than with buildings (and it certainly remains to be improved with better, simpler designs), but it is still present.

I think there is some incompressible level of reasoning that programmers will need, and that should be accepted. I do not think, for example, that contravariance (the reversal of direction in the fact that Function<T> is less general than (a subtype of) Function<T'> when T is *more* general than T') can or should be hidden from the programmer, despite various attempts in existing language designs.
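
A minimal sketch of that reversal in Haskell terms (no subtyping here; Predicate is an illustrative wrapper, and contramap plays the role of the conversion between T and T'):

newtype Predicate a = Predicate { runPredicate :: a -> Bool }

-- A function b -> a turns a Predicate on a into a Predicate on b:
-- the arrow flips direction, which is exactly the reversal described above.
contramap :: (b -> a) -> Predicate a -> Predicate b
contramap f (Predicate p) = Predicate (p . f)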

Then it becomes a matter of education. To make some analogies, people accept learning to read and write, and that is not called "literature". I observed with students (in other fields than mathematics) that there is a quite subtle boundary in what "is math" or not: simple proportional reasoning is an okay everyday-life activity, but add a subtraction on top of it, or anything that requires writing down a formula, and you fall into the territory of repulsive "maths".

So I'd say responsibilities go both ways:
- designers of type system should work towards making them as simple as possible, and avoid "abstract leaks" from complex correctness arguments into exposed usage
- users of programming languages are expected to understand some fundamental concepts about program reasoning

I've observed that the knowledge level of communities can evolve with time. In functional programming circles, possibly due to the influence of proof assistants and advanced GHC features, there is a vocal subcommunity of users that desire advanced type-level features (eg. higher-kinded type constructors) that were considered "too advanced" for real-world usage a few decades ago.
For a concrete example: around 2000 the designers of the OCaml object system (which is based on row polymorphism) were careful to hide row variables from the user, using a slightly restricted system (one that could express all the object-oriented examples they were interested in at the time) that never needed to show a row variable to the user, on the basis that having type variables of different kinds would be too confusing. If that were to be redone today, I would guess that we would expect users to understand row variables just fine, and would keep the general system.

Only a small minority of

Only a small minority of programmers use functional programming languages productively (e.g. beyond studying them in school), so I think...what the average functional programmer "knows" has advanced while much of the rest of the world...has dropped Java as too complicated for something like Python. I think much of the reason that Scala has failed to take over the JVM world relates to the complexity of its type system.

I still think the right solution is to go backwards (prescribe rather than restrict), though the solution seems to be equivalent to an expensive semi-unification alias analysis.

Indeed

Certainly. I mentioned designers simplifying designs, and working to raise the bar of teaching to professional programmers, but there is also a fruitful area of progress to enable less-trained, or even completely untrained, people to program productively. And incrementally migrate applications from the "useful small script" to the "critical" ends of the program spectrum, while understanding and controlling the evolution of the required expertise.

Programmers that use type

Programmers that use type systems need less maths than designers of type system that use type theory.

This is probably a key reason I mistrust typing: I see all programming as language development, so  (a) I'm suspicious when language users aren't being asked to do something that language designers do, and  (b) I'm suspicious of language design that requires an advanced degree.

Amen. Transparency is still

Amen. Transparency is still important even if abstraction is desired, but this goes all around (types, libraries, runtimes....).

yes+no

if the abstraction is not leaky, and i really don't have to grok it beyond my Regular Joe uses, then OK! but in reality have we ever seen such a clean abstraction firewall between language developer, and language user? hah! no.

Desirability of Firewall

It's not even clear to me that such a firewall is desirable.

There are folks who aren't going to understand the implementation details with or without a firewall. A great deal of work is accomplished by those that operate in a perpetual gray area of understanding. Meanwhile, those who are interested in the details need access to internals in order to observe them and develop understanding.

I'd rather provide tools for understanding and manipulating a series of firewalls, where any individual programmer may operate on either side of any particular firewall for any particular subproblem he tackles.

Type soundness proofs are the dullest and least illuminating...

That nicely explains why most FP languages have typing holes?

Yes, type theory is math

Any time you are dealing with acausal entities that exist outside of space and time it is math (but I guess that is a philosophical point)

The basic framework I use for thinking about languages and types is: given the set of all possible programs in a given language, you want to divide them into good programs and bad programs (exactly what constitutes good and bad, and how many levels of good and bad there are, is open to discussion). A type system is a formal approach to classifying programs. A type system is sound if only good programs are well-typed, and it is complete if every good program is well-typed (or maybe at least typeable). So a type system that is both sound and complete precisely captures the desired notion of "good".
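
A minimal Haskell illustration of the gap between sound and complete (the standard rank-2 example, not tied to any particular system): the program below never misbehaves at run time, so it is "good", yet plain Hindley-Milner inference rejects it, because f would need two different types at once. The checker is sound but not complete; the annotation wins the program back.

{-# LANGUAGE RankNTypes #-}
-- pair f = (f True, f 'x') -- rejected by plain inference: f used at Bool and at Char
pair :: (forall a. a -> a) -> (Bool, Char) -- a rank-2 annotation recovers it
pair f = (f True, f 'x')

main :: IO ()
main = print (pair id) -- prints (True,'x')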

There isn't really a strong consensus on how to define any (much less all) of these concepts, and the answers (or nonexistence of answers) greatly depend on your definitions. In general people define "good" as having well-defined semantics, and type systems are generally much closer to being sound than they are to being complete. In my opinion the principal advantage of a type system is that "good" is generally defined in terms of the possible outcomes of program execution and is thus inherently dynamic, while a type system is a static judgment based on the structure of a program rather than its meaning.

philosophy of the programming aether

I see the value of type systems, and why we want them. I mainly have trouble with theory when we might expect it to puzzle folks in the part of the bell curve within a couple standard deviations of the middle. (Abstruse is not a complimentary word in my vocabulary.)

acausal entities that exist outside of space and time

I like that phrase a lot. :-) If we force-fit a couple more words, we can get an acronym pronounced aether-oost.

Many roads to programming language research

Programming is a rich field, and programming *research* reflects this diversity. There is interesting work to be done on programming languages with tools coming from social sciences, ergonomics, user interface design, linguistics (maybe?) and, yes, maths.

Making better programming languages is far from a solved problem, and I think everyone should be free to attack this issue from whichever angle they prefer. The angle *I* personally prefer is to find questions about programming problems that can be stated *precisely* and answer them *objectively*. Math is precisely the tool to do that.

Can all problems about programming languages be solved by finding which theorems to prove and proving them? Probably not. Would we miss interesting work if we (collectively) only funded this form of research? Certainly. But it should be self-evident today (with the Coq proof assistant awarded the ACM Software System Award) that this branch of work has allowed us to make tremendous progress in the field, and is as active as it was at the beginning of the *previous* century.

Type theory is a particularly fertile way to think about programming languages and prove general and beautiful things about them. It also happens to have deep ties to mathematical logic, which makes for nice collaboration. Collaboration also happens across subfields of programming language research: I'm interested in functional typed languages, but I've been using tools (focused proof search) developed by people who want to design the ultimate Prolog. We can talk the same language thanks to that common heritage of mathematical formalism.

Why aren't types just propositions?

I don't have a background in mathematics or logic, so I struggle to understand a lot of type theory; it seems to come from a world very alien to programming.

One thing I fail to understand is why we need the notion of 'type' as something fundamentally separate from 'logical proposition'.

In other words, how is 'int a = 32' something fundamentally different from the (Prolog) logical assertions:

variable(a).
integer(a).
integer(32).
has_value(a, 32).

?

(where 'a' is not a Prolog logic variable, but just a symbol in our knowledge base we're asserting something about)

In Prolog it seems like one can do a lot of computation perfectly well without the use of a type system as anything different from the actual first-order program code. If that's the case, shouldn't we be widening our thinking, and looking at computation not in terms of types, but in terms of logical propositions that we can make about the data in our programs at various points?

And if we can do that with types, shouldn't we ultimately be thinking about reducing everyting in our programs to logical assertions?

It seems like it would be a lot more useful to be able to assert arbitrary things about the components of our software like:

is_integer_between(x, 1, 10).

rather than just

is_integer(x).

And if we can do that, why can't we make and infer more interesting constraints like:

requires_library(my_program, frob_graphics_library).
requires_hardware_spec(my_program, RAM, 32, megabytes).
requires_realtime_guarantee(my_program, 10, milliseconds).

and so on? If the entire program were a bunch of logical assertions like this, it seems like this would let a [sufficiently smart compiler] do a lot of the work we now give to fallible humans.

Obviously there would be constraints on just how [sufficiently smart] a compiler/preprocessor/assertion-checker/hardware-fabricator/etc could become, but why aren't we heading further down this path?
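
For what it's worth, a minimal Haskell sketch of pushing one such proposition into a type (the names are illustrative, not from any library): the smart constructor is the only way to build the value, so every value of the type carries the proposition is_integer_between(x, 1, 10) with it.

newtype Between1And10 = Between1And10 Int deriving Show

mkBetween1And10 :: Int -> Maybe Between1And10
mkBetween1And10 x
  | 1 <= x && x <= 10 = Just (Between1And10 x) -- the proposition is checked once, here
  | otherwise         = Nothing                -- failed assertions never produce a value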

I've wondered

I've wondered about the difference between types and propositions myself. It seems to me types are a particular way of structuring propositions that (like all choices of structure) becomes perverse when carried to extremes. One ends up twisting things all out of recognition to shoehorn them into this form that's unnatural for them. That's one of my theories about where the concept of types goes wrong. (I'm also cynical, btw, about logic programming.) I've wondered about adding first-class theorems to Lisp, for various reasons.

They can be

This is more or less what I am doing. I am using a logic language (Prolog like) at the type level, and inferring the types (propositions) for the functional code. For a literal like '3' the inferred type would be the nullary proposition 'integer' (using lowercase for functors, and upper case for variables as in Prolog syntax). You can add arbitrary asserts like:

not_zero(X).

and you can define 'classes' like:

is_mylibrary(X) :- has_function(X, name(f), type(a -> b)), has_function(X, name(g), type(a -> a -> a)).

The language will not need any special syntax for type-classes etc. The compiler has a Prolog-like interpreter built into the type system (in fact the type inference can be expressed neatly inside this interpreter once renaming is taken into account - I am looking at alpha-prolog as a simpler alternative to lambda-prolog for this). However I have written my own interpreter which is based on fluents, so I could just add a fluent for renaming if that is simpler.

Soundness

I'm doing something similar (customizable typing rules with "compile-time prolog"). Do you do anything to check the validity of the inference rules the user enters? (Or are they somehow correct by construction?)

what could be checked

I am not sure what could be checked. Termination analysis will check that the goal history stack represents a semi-order that is decreasing... I am looking at the kind of checks Twelf uses. Twelf is lambda-prolog at the core of its type system (restricted to a decidable subset of second-order logic). I would rather keep things first order, which is why alpha Prolog seems attractive, as the main need for higher-order logic seems to be renaming.

What other sorts of checks would be possible (other than those implemented by Twelf)?

Depends how much it knows

If you tell it what the judgments mean, then you can prove inferences correct.

rank-0 intersection type system

Strange you should mention that. Look at the following equivalence (this is going beyond the simple definition of the algorithm and into the meta-language I am developing):

{x : a, x : a -> b} |- x x : b

x_x(B) :- x(A), x(A, B)

I might just use the logic notation for consistency... although the direction difference between ":-" and "|-" is interesting and I suppose therefore the following notations should also be considered:

{x : a, x : a -> b} -: x x : b

x_x(B) -| x(A), x(A, B)

Which is a bit like:

x_x(B) <- x(A), x(A, B)

Which I have seen somewhere (can't remember where).

So the type inference will produce a logic program, and this cannot fail, so a complete logic program will be generated even for badly typed programs. The logic program is then evaluated, and successful evaluation indicates it passes type checking. A failure would be a type checking failure, but we get the whole logic program to help understand the error and build the error message.

Something that just occurs to me is that the functor names could directly be AST fragments. There is obviously some relationship between the typing rules and the rules of the logic program, but I am not sure what it is for example:

A 3-node AST, (1) x, (2) x, (3) @, with the structure 3 (1 2), would have types:
1) x'(A)
2) x(A)
3) x@x(B) :- x'(A -> B), x(A)
where '->' is just an infix functor

Note: in order for this to have the desired meaning we have to rename each occurrence of x to be unique, but if this name is the AST node (ie. the address of the AST node in memory) it will be by definition.

One thing I fail to

One thing I fail to understand is why we need the notion of 'type' as something fundamentally separate from 'logical proposition'.

One reason is that types are typically decidable propositions. If you want full-blown propositions, then you can use something like Coq. But now you'll probably have to write proofs to ensure your undecidable propositions hold.

proofs are programs.

As types are theorems, the proof is a program. As most function types are really existentially quantified (hypothesis: there exists a function that takes an Int and returns a String), the proof is one example function built from the language primitives.
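
A minimal illustration, in the judgment notation used elsewhere in this thread (to_string is a hypothetical primitive):

|- \n. to_string(n) : integer -> string

Exhibiting any one such term is the proof that the type integer -> string is inhabited; a different implementation would be a different proof of the same 'theorem'.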

In my approach the inferred types are from a decidable subset, so without additional logic programs at the type level everything is decidable. We can also do termination analysis of the logic program, which, although not trivial, has been done.
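
To give a flavour of what such termination analysis checks, a sketch in plain Prolog (the predicate names are hypothetical):

% terminates: the list argument is structurally smaller on each recursive call
len(nil, zero).
len(cons(_, T), s(N)) :- len(T, N).

% no decreasing argument: an analysis in the Twelf style rejects or flags this
loop(X) :- loop(X).

The analysis looks for some argument position that strictly decreases along a well-founded order on every recursive call, much like the decreasing goal-history check mentioned above.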

Types usually are propositions

I don't know if you've encountered the Curry-Howard correspondence, but it's a viewpoint that allows you to consider types and propositions as the same thing. But this is not what you're asking! You're asking why types aren't generalized to parameterized propositions, i.e. why not view a type as a predicate on programs. I like this idea, too.

I think a big part of why this hasn't happened is the beauty of the CH correspondence, and in general type theory just has a nice mathematical theory. On the other hand, the approach of types as predicates didn't really have a corresponding simple theory, that I know of. It was known early on that set theory (just treating functions as set-theoretic functions) couldn't account for polymorphism (see Reynolds's "Polymorphism is not set-theoretic").

On the other hand, the

On the other hand, the approach of types as predicates didn't really have a corresponding simple theory, that I know of.

This sounds like the work on refinement types, something like Liquid Types.
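
For instance, refinement types pair a base type with a predicate over its values, so a type is literally a (decidable, machine-checkable) proposition about a program. Roughly, in Liquid-Types-style notation, a division operator might be given the type

div : int -> {v : int | v /= 0} -> int

where the second argument's type reads as 'an integer v such that v is non-zero', and the checker discharges such side conditions automatically.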

A difficulty with the

A difficulty with the Curry-Howard correspondence is that programming is a practical discipline. CH is an example of a mathematical "equivalence" that establishes, in practice, equivalence for some purposes by ignoring other purposes. Like the equivalence of the lambda calculus and Turing machines (the result underlying the Church-Turing thesis), which is indeed deeply insightful for some purposes, yet programming is much more practical in Lisp than in Brainfuck. <Refrains from an engineer-and-mathematician joke.>

A claim

In thirty years, practical benefits of Curry-Howard (CH) will be evident on both sides: mathematics and practical programming language research.

Jean-Louis Krivine's models of ZF are the main candidate for a strong contribution of CH to mathematics (mathematical logic): new results that come from a fundamentally "programming" point of view. Understanding a computational meaning of classical logic was already a major breakthrough, but possibly to a more limited audience.

On the side of programming languages, things are maybe a bit less clear today. I would expect clear benefits to come from using the relation between the work on proof search (e.g. focusing) and programming tools, and also the link between polarity in logic and evaluation strategies in programming. Many relevant citations would not be convincingly practical, but I personally like Paul Blain Levy's work on jumbo connectives: still theoretical, but it gives a tangible feeling of helping us understand the difference between, say, strict and lazy pairs, and how they compose to form realistic, programmer-friendly data structures.

CH?

To me, the question isn't whether CH is useful, but how it should be fit into programming languages. Certainly it's useful to recognize that searching for a proof of a given proposition and searching for a program with a given type signature are the same activity. Does that mean we should be working in a typed calculus where well-typed functions are the only values in the domain of discourse?

Proof search ~ pure term search

Even if you work in an impure programming language, it makes sense to invoke a "program search" process that you know will only look for pure programs. In practice I suspect this is what you actually want: you probably don't want the inferred code to start with a `print` statement.
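
As a small illustration in the judgment notation used elsewhere in the thread: asked to inhabit

|- ? : a -> a

a search over a pure, total fragment can only return (the equivalent of) \x. x, whereas the same search over an effectful language could just as well return a function that prints something before handing back its argument. Both candidates 'prove' the type, but only the pure one is usually what you meant.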

Answer Set Programming

I am looking at incorporating ASP at the type/logic level to allow automatic proof/code generation for NP problems. For example, the ASP formulation of the map colouring problem is very simple and intuitive (see the sketch below). This fits with the general language design of an imperative value-level language and a logic type-level meta-language: you specify the details of how to do things in the imperative language, and what you are trying to do in the logic language. If you just want to program at the value level, all types are inferred. If you just want to program at the logic level, the value code is generated (where possible, with modular extension points for different proof-search engines as future improvements). Finally, you can express what the code is trying to do at the type/logic level, provide an implementation at the value level, and have the type system prove it is correct (with the pessimistic assumption that unprovable means wrong). Ideally you could insert logging into the value-level code without changing the proof.
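
For reference, a standard clingo-style ASP encoding of graph 3-colouring, which is essentially the map-colouring formulation meant here (node/1 and edge/2 facts describe the particular instance):

colour(red; green; blue).

% each node is assigned exactly one colour
1 { assign(N, C) : colour(C) } 1 :- node(N).

% adjacent nodes must not share a colour
:- edge(N1, N2), assign(N1, C), assign(N2, C).

Each stable model the solver finds is a valid colouring, i.e. a constructive witness for an instance of an NP problem.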

Impure program search

There are actually a lot of use-cases for seeking impure programs, or for using impurity during the search - e.g. in robotics or a-life, to automatically construct and calibrate motion programs and sensors.

Unfortunately, CH doesn't seem to offer useful concept support for these use-cases. By nature, you can't tell whether a calibration program is 'correct' just by looking at its type, because said type must admit many possible calibrations. The searches end up being implemented in an ad-hoc manner. I think most effectful programs have similar aspects, where correctness depends heavily on context.

But I agree that there can be a lot of utility searching for pure subprograms.

Impurity wasn't my point

If effects are properly typed / monadically contained, impure programs can be inferred just the same as pure ones.

My point is rather that programs don't correspond to proofs -- typed programs correspond to proofs. And giving a program a type already requires a proof of sorts that the program has a certain shape. So what is the denotation of a program before it's been shown to correspond to the proof of any proposition? I think it's natural to be able to have such a denotation and to be able to ask the question "does this program have this type?" as a proposition.
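
A minimal sketch of that reading in plain Prolog, where untyped terms have a denotation of their own and typing is just a predicate over them (the term representation var/lam/app and the pairing X-T are hypothetical conventions):

% typing as an ordinary queryable relation over untyped terms
has_type(G, var(X), T) :- member(X-T, G).
has_type(G, lam(X, E), arrow(A, B)) :- has_type([X-A | G], E, B).
has_type(G, app(F, E), B) :- has_type(G, F, arrow(A, B)), has_type(G, E, A).

"Does this program have this type?" is then literally a proposition:

?- has_type([], lam(x, var(x)), T).
T = arrow(A, A).

(This naive version needs an occurs check to reject self-application, but the point stands: the term exists before, and independently of, any typing proof.)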

You're right that this isn't

I don't know if you've encountered the Curry-Howard correspondence, but it's a viewpoint that allows you to consider types and propositions as the same thing. But this is not what you're asking! You're asking why types aren't generalized to parameterized propositions

You're right that this isn't quite what I'm asking.

I've encountered mentions of the CH, yes, but I don't have a deep understanding of it. Or even a particularly good surface understanding.

What I think CH is saying (correct me if I'm confused) is that the type structure of a functional program is the same thing as a logical proposition, and the execution of that functional program is the same thing as a proof of the proposition which that type structure represents.

So there are two separate static computational artifacts in CH: one, a program, and two, the type structure of that program; and two separate ways of viewing the process of running that program: as 'evaluation of functions' and as 'proof of type propositions'.

This is possibly interesting, yes. But it still divides the program into two very separate (though dual) kinds of things: functions and types. This division, however, feels artificial, like a reinvention of Cartesian dualism. It's certainly not how my mind works! In my head, and in language, I don't divide the world into 'statements' and 'statements about statements'. I simply have 'statements' and they can be about anything. It feels like the inside of my mind is full of a unified sort of 'mental stuff', where any piece of stuff can point to any other piece of stuff. Likewise, in natural language information collections like an encyclopedia (for example Wikipedia), there's not an artificial distinction between 'concepts' and 'concepts about concepts'; there are just 'concepts', and if you can put a name to it, you can write an article about it.

In mathematics, for example, the people who invented the concept of 'types' didn't themselves have a type structure in their head that prevented them from making statements about types! Otherwise they could never have expressed that thought.

This feels to me like an important feature of human thought, and as we move into the Internet age of connected data and knowledge bases, it also feels like an important feature we need in our basic computer languages: the ability to represent propositions about any kind of humanly imaginable relationship between any kind of humanly imaginable concept, regardless of whether or not this causes paradoxes. Humans are talking about arbitrary concepts; humans are talking about arbitrary concepts online; therefore, humans will want to talk about arbitrary concepts online in machine-readable language.

In concrete terms, I'm interested not so much in writing functional programs with attached proofs, as in writing logic programs. In logic programming, there is only one computational artifact: 'the program', which is not a function, and effectively is its own proof. (Assuming that you use a suitably restricted and tractable proof mechanism, such as Horn clause resolution).

I don't understand how CH would apply to this situation at all; it doesn't seem to be saying much about logic programming, but only about functional programming. Functional programming may perhaps be a dual of logic programming, but CH doesn't seem (to me) to be talking about this.

The extension of CH for logic programming, it seems to me, would not be 'propositions about functions' (i.e. types), but 'propositions about other propositions'. Which seems, in Prolog, to simply reduce to 'propositions'. Not even second-order propositions; just ordinary first-order ones. (Maybe the Prolog meta-logical predicates break this picture; but systems like Golog and HiLog seem to be able to accomplish a lot of second-order-like stuff with ordinary first-order features; see the sketch below.)
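
A tiny example of that flavour in plain Prolog, using only the standard call/2 (the predicate name all_hold is hypothetical): a first-order term can stand for a predicate, so 'propositions about propositions' stay ordinary clauses:

all_hold(_, []).
all_hold(P, [X | Xs]) :- call(P, X), all_hold(P, Xs).

?- all_hold(integer, [1, 2, 3]).
true.

This is essentially the HiLog trick: what looks second-order is encoded with first-order terms plus a generic application predicate.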

Have I described logic programming correctly? It certainly feels, when I enter the Prolog world, as if a sudden clarity descends, and concepts which in other languages are separate (though dual) and require twin architectures and languages of 'functions' and 'types' suddenly become unified into one much simpler and more low-level structure: the proposition. And although that unification is not completely worked out, it feels like there's a lot more that can be done.