## Non-standard type theories for FP

I assume we all know HM-style type inference.

As I understand it, HM has been extended with quantifiers to allow multiple instantiations of the same type, or hiding of known types. I.e., the following good old example only type checks if 'id' has type 'forall a. a -> a':

let id = \x . x in
(id true, id 1)
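A runnable Haskell rendering of the example, as a minimal sketch (the names 'idPair' and 'myId' are mine): the let-binding generalizes the identity to 'forall a. a -> a', which is what lets it be used at two different types.

```haskell
-- Let-polymorphism: 'myId' is generalized to 'forall a. a -> a' by the
-- let, so it can be instantiated at Bool and at Int in the same body.
idPair :: (Bool, Int)
idPair = let myId = \x -> x in (myId True, myId (1 :: Int))

main :: IO ()
main = print idPair  -- prints (True,1)
```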


Great, problem solved, one would think.

My problem with the current definition of the quantifiers is the following. A bold statement: the types (1) 'forall a . a' and (2) 'forall a b . a -> b' should not exist! I find it strange for (1) that a term can be written which unifies with any given type; similarly for (2), where a result is produced which can get any type.

Let me define the predicate 'great!' as holding of types I personally like.

A type 'forall a . a -> a' is 'great!' since you know that the first 'a' will be bound by an argument, and a term can then be generated which holds the same type. A type '(forall a . a) -> (exists b . b)' is also 'great!' since it takes a known type, and produces a value out of a hidden type.

I am wondering if the current quantifiers were too much inspired by HM [or type theory], and are a false solution to the multiple instantiation problem. So I am looking for non-standard approaches which tackle this.


### Same for existentials

In a similar fashion: '(exists a. a) -> (forall a. a)' is terribly wrong since it binds to an abstracted type and generates a term of any type.

[I have been toying with types where I replaced the quantifier 'forall' with 'generalize' (with almost the same HM meaning ;-)) and where variables on binding locations introduce unknowns and on delivering locations introduce constants, but have not implemented that.]

[A person once told me about positive/negative occurrences of types in a type term, so I assume it is a known problem, but I have never found any references about it in literature.]

### Positive and negative subformulae

Talk of positive & negative subformulae of a formula is absolutely standard in logic; it is certainly in Prawitz's Natural Deduction (1965), and I guess it is in Gentzen. See, for instance, Schwichtenberg's recent lectures on Mathematical Logic.

Talk of positive/negative occurrences generally means to talk of formulae/terms in a derivation that are positive/negative subformulae of the conclusion, and may possibly also be negative/positive subformulae of the premises.

### False vs. malformed

There is nothing wrong with types like forall a.a, they are just not inhabited. In the Curry/Howard isomorphism (which is the heart of type theory) they correspond to formulas that are false. But being false is not the same as being malformed, and you should not confuse them.

Depending on your type system, there can be non-polymorphic types that are uninhabited, too. One example is the empty sum (a variant type with no constructors). Some languages even specifically add a void type (not to be confused with C's void, which is a big misnomer and rather a unit type).
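A Haskell sketch of this point; the names mirror 'Data.Void' from the base library ('Void', 'absurd'), but everything is defined locally here:

```haskell
{-# LANGUAGE EmptyDataDecls, EmptyCase #-}

-- An empty sum: a variant type with no constructors, hence no values
-- (short of bottom).
data Void

-- Ex falso quodlibet: given an (impossible) Void value, produce anything.
absurd :: Void -> a
absurd v = case v of {}

-- Uninhabited types are still useful as components: 'Either Void a' is
-- effectively just 'a', because the Left branch can never be taken.
unwrap :: Either Void a -> a
unwrap (Left v)  = absurd v
unwrap (Right x) = x
```

For example, `unwrap (Right 3)` evaluates to `3`.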

You might also be interested in Wadler's "Theorems for free", which shows that polymorphic types tell you a lot about their values, even when they exist.

Aside: quantifiers are not "modalities".

### You gave the answer I wanted to avoid

Again the argument: I am wondering whether type-theory [for PLs] is stuck with the wrong abstraction. CH is exactly the reason why we got these quantifiers. I want to have some out-of-the-box thinking here.

The fact that you can write down a type which has no meaning can be understood as an indication that you're working with the wrong type system [from a pure programming language point of view unhindered by any knowledge of logic].

[To elaborate on the former statement: the FP languages are the only languages I know where it is possible to give types which cannot be inhabited. Is that a good or a bad thing?]

[I replaced modality with quantifier everywhere]

### Impossibly meaningful

The fact that a type cannot have closed inhabitants does not deprive it of meaning, since it can have open inhabitants, and these can be subterms of useful closed terms.

The plain HM type system has types without closed inhabitants, such as bare type variables. This isn't a problem that only comes with quantifiers.

### Elaborate

Can you give an example for the first paragraph? I don't understand it.

Well, one can understand bare type variables in a term as implicitly quantified. I don't see the relevance.

[We are not communicating here. You are thinking lambda-terms and CH-isomorphism. I am thinking programming languages with types, where all types should denote meaningful programs. I admit it is a very foolish thought, to let go of all that type-theory. But still, maybe it works?]

### Types as programs

Can you give an example for the first paragraph

I'll illustrate the point in another answer.

Well, one can understand bare type variables in a term as implicitly quantified.

But then the implicit formula is of the form (forall a. a), which you want to forbid.

I am thinking programming languages with types, where all types should denote meaningful programs.

Prolog allows 'propositions as programs', which can be considered a model for 'types as programs'. Here the 'bad' types are unsatisfiable propositions, but they are perfectly meaningful: they are programs that return no answers.

But the examples you started out from suggested you were thinking along the lines of Haskell, where programs inhabit types, and types do not tell you what program you have, but only something about how it behaves.

### Non-meaningful is not Harmful

In any sufficiently expressive language, including a language for writing types, it is possible to create utterances that are well-formed but useless. This seems to be one of the prices of expressiveness. You use the term "meaningless", but this is incorrect. The types you give are meaningful. A better term might be "useless".

I think that a better formulation of the issue is to ask: "Given that any sufficiently expressive type system can generate useless types, can we at least ensure that these types cannot be instantiated, or that if they are, one cannot go wrong with them?"

So let's consider two of your examples. The type:

(exists a. a) -> (forall a. a)

shouldn't be instantiable in the absence of some sort of opening construct, and in the presence of that construct it isn't particularly problematic. The type:

(forall a b . a -> b)

isn't inherently a problem, since the return value type is bindable at the point of application, but offhand I cannot see how to write a concrete procedure that would generate this type.

### I understood that

I think that a better formulation of the issue is to ask: "Given that any sufficiently expressive type system can generate useless types, can we at least ensure that these types cannot be instantiated, or that if they are, one cannot go wrong with them?"

Actually, that is what I was trying to do. For me, the question is partly whether HM/type theory derived system are not too expressive. I.e. current problems arise because they allow too many 'useless' types.

### Then my suggestion is...

Try to identify a type compositing operation that enables the generation of a useless type. Remove it, and see what useful types are no longer expressible.

I think you are up against a fundamental issue in expressiveness here.

### Yeah

I think you are up against a fundamental issue in expressiveness here.

I partly agree here. Most of the attempts I tried model the abstract types of computations as '(LAMBDA a . F a)', i.e. trying to get rid of most of the unification issues [and uninhabited types]. The problem is mostly that I think you can base pretty OK type systems on that, say for C-like languages, but it lacks the expressiveness, and convenience [readability], of the types of ML/Haskell-like languages.

My solution for now is just to look the other way and ignore the problem.

### Puzzled

A bold statement: the types (1) 'forall a . a' and (2) 'forall a b . a -> b' should not exist!

Why ever not? What is wrong with there being types that have no closed inhabitants? In mathematics, '0=1' is a perfectly intelligible proposition whose consequences we are free to discuss; we are only in trouble if we find a proof of it.

Postscript: Perfectly redundant, in the light of Andreas' post above...

### Wouldn't you like a type system where illegal types do not exist

Again, remember that from a language point of view, the forall quantifier was only introduced to deal with types which should be instantiated multiple times. The CH-isomorphism inspired a meaning for it. Did they introduce the right meaning for it?

Say I replace 'forall/exists' with one quantifier 'general'.

Then 'general a. a' = 'exists a.a', 'general a b . a -> b' = '(forall a. a) -> (exists a. a)', etc. [The position in the term determines whether a variable has an existential/universal meaning.]

Now I don't know if that would work. But it might be worth a try, and I am wondering whether other people tried such an approach.

### Question

If I understand correctly, your general quantifier is something like syntactic sugar for putting forall quantifiers around its bound variables in negative position, and likewise exists quantifiers in positive position.

We haven't got type inference rules for your system, but suppose we have a closed, well-typed term t of type (general a b . a -> b).

Should the term \x -> (t x) be typable? What type will it have?
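The negative/positive-position reading guessed at here can be made concrete with a small syntactic polarity computation; a sketch where the 'Ty' type and function names are my own inventions, not anything from the thread:

```haskell
-- A tiny type syntax: variables and arrows.
data Ty = Var String | Arrow Ty Ty

-- 'polarities True t' tags every variable occurrence in t with True
-- (positive position) or False (negative position). The argument side
-- of an arrow flips polarity; the result side preserves it.
polarities :: Bool -> Ty -> [(String, Bool)]
polarities pos (Var v)     = [(v, pos)]
polarities pos (Arrow a b) = polarities (not pos) a ++ polarities pos b
```

On 'a -> b' this marks 'a' negative and 'b' positive, matching the guessed desugaring 'forall a. exists b. a -> b'.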

### Skipped a part

Sorry, missed a part of your post. See post below.

[Not syntactic sugar but the quantifier people are searching for, and misusing 'forall/exists' for. If it works of course, maybe there isn't a better solution.]

### Let's try \t \x . t x

[I seem to have a rather strange habit of answering beside the point. Please read the next post below. ;-) ]

'general a b. (a -> b) -> a -> b' I guess.

The trick would be in the instantiation rules. You know 't' will take any term and produce a term of unknown type, so it would have the same meaning as 'forall a. exists b. (a->b) -> a-> b'.

### I'm out at sea

I can't make sense of the derivation you provide for this. What I was interested in is what we would see in the type inference tree matching the subterm 'x', which looked to me as if it must have the forbidden type '(forall a. a)', showing a role for this type in your type theory, at least for open terms.

Since your general quantifier doesn't work the way I guessed (and maybe can't work the way you hoped, in light of the issue around prenex forms), I'm interested in what type the Peirce formula would correspond to under the general quantifier, i.e.: (general a b. ((a -> b) -> a) -> a)?

Postscript: Sorry for burying the discussion with so many posts, but the discussion itself is so fragmented. Next time I should gather my particular posts together.

### So was I, at a lost island ;-)

Sometimes you need sparring partners to get a vague notion right.

I'll restate my vague notion, then go into your question. Let's look at the following definition a simple programmer might write:

f: a -> b


Now under standard semantics, we add universal quantifiers and assume he meant:

f: forall a b . a -> b


Makes sense, even if it is uninhabited, we don't care, that's normal according to CH. When implementing a HM type checker, everything seems fine. The universal quantifier introduces unknowns and skolem constants at the right locations, and the inference algorithm seems very natural.

But what did our simple programmer intend? Probably he was thinking set-theory, had some vague Venn diagram in his head, and meant:

f: forall a . exists b . a -> b


I.e., he wanted to state a type which takes something out of set A, and returns something out of set B, but he wanted to hide the structure of set B.

Now, I find his interpretation of that type very natural. He just doesn't know how to quantify, and adding quantifiers is not his thing. Explaining why he should do that is of course possible, but messy; and he probably won't like it since it clutters his program with strange symbols.

Now I was wondering if he is not wrong, but, say, we are.

Is it possible we're blinded by CH and an elegant inference algorithm and forgot how to design type systems for programmers?

Is the assumption that it is normal to have uninhabited types in a language false? Why would that be natural? Did we make a wrong turn introducing 'forall/exists' into types? [Was the generalization of the HM type inferencer fixed 'wrongly'?] (Sorry for starting to sound like Sarah Jessica Parker ;-)

So, let's drop all we know and ask: What is the quantifier which would make types natural for our simple programmer, and also give an elegant algorithm?

One can try that. And we know we need a 'general' quantifier with a natural meaning and an elegant inference algorithm.

Can we type it such:

f: general a b . a -> b


I'm interested in what type the Peirce formula would correspond to under the general quantifier, i.e.: (general a b. ((a -> b) -> a) -> a)?

Honestly, I don't know. I guess one could default to '(forall a . exists b. ((a -> b) -> a) -> a)'

[Btw. I do see your point ;-), but still, even if the type system has uninhabited types, it might be better for our simple programmer.]

[A different perspective. Our simple programmer wrote down a type which was natural for him. And probably what is natural is the interpretation with 'forall/exists' quantifiers which is inhabited.]

### Needed to ride my bike...

to patch together some equations:

We haven't got type inference rules for your system, but suppose we have a closed, well-typed term t of type (general a b . a -> b). Should the term \x -> (t x) be typable? What type will it have?

Yes, it should be typable. So, v for variables, v? for unknowns, v! for skolem constants.

v0? := inst(general a b . a -> b) // type of t
v0? == v1? -> v2?                 // (t x) has type v2?
v3? := generalize(v1? -> v2?)     // (\x -> ...) has type v3?


Simplify...

v0? == a? -> b!
a? -> b! == v1? -> v2?
v3? := generalize(v1? -> v2?)


Simplify...

v0? == a? -> b!
v1? == a?
v2? == b!
v3? := generalize(a? -> b!)


Simplify...

v0? == a? -> b!
v1? == a?
v2? == b!
v3? == general a b . a -> b


So, type 'general a b. a -> b'.

[Note that instantiation produces unknowns for arguments and constants for results. Generalization, of course, does the opposite.]

### Binding

If I understand correctly, your general quantifier is something like syntactic sugar for putting forall quantifers around its bound variables in negative position, and likewise exists quantifers in positive position.

Don't know if it works, but I guess normally it would be equivalent to putting all the quantifiers on the top, and universally quantifying around arguments, and existentially quantifying around results. [I guess we said the same here.]

I'm not sure, but I guess there are examples where the quantifier should be placed inside a term to derive the correct type. [An expected result from impredicative systems.]

### Prenex forms for intuitionistic type theory

In general, you can't find prenex normal forms (i.e. logically equivalent formulae with all the quantifiers outermost) in intuitionistic logic; in fact they don't even "almost always" exist, because two properties of classical logic crucial to the existence of prenex forms fail, namely De Morgan duality of the quantifiers, and the ability to express implication by means of disjunction and negation.

I think this means you have to rethink what you want 'general' to do.

### History

Reynolds, a computer scientist, independently of Girard, a logician, discovered the polymorphic lambda calculus. It's not clear (to me) how widely appreciated the Curry-Howard correspondence was then, particularly in the computer science community. So I don't know if or how much Reynolds was inspired by CH though he clearly drew some inspiration from logic, but I highly suspect his thought process went along the lines: I need functions that vary based on type, how do we vary things in CS, we parameterize; type lambdas ho! Even without CH, the polymorphic lambda calculus is extremely natural in several ways.

Finally, if you did manage to formulate your ideas into a coherent system, it's extremely unlikely that there wouldn't be a logical interpretation of it via CH. This changes your claim roughly into: we are using the wrong logic.

### Unsure

Reynolds, a computer scientist, independently of Girard, a logician, discovered the polymorphic lambda calculus. It's not clear (to me) how widely appreciated the Curry-Howard correspondence was then, particularly in the computer science community. So I don't know if or how much Reynolds was inspired by CH though he clearly drew some inspiration from logic, but I highly suspect his thought process went along the lines: I need functions that vary based on type, how do we vary things in CS, we parameterize; type lambdas ho! Even without CH, the polymorphic lambda calculus is extremely natural in several ways.

I guess my thought was, how do we do things in HM, we unify, generalize and instantiate; only question is how. [And are we doing it the right way, at the moment]

[I was also a bit bogged down by examples of ?Haskell? programs where the position of the quantifier determines whether a type is universally or existentially quantified, and was wondering if there was an underlying general principle for that.]

Finally, if you did manage to formulate your ideas into a coherent system, it's extremely unlikely that there wouldn't be a logical interpretation of it via CH. This changes your claim roughly into: we are using the wrong logic.

Well, I guess that's already sure since that 'general' quantifier always has an interpretation in typed lambda calculus with 'forall/exists'. It then would just be a very restricted form usable for FP. (No idea if it would work, though)

[There was no claim, just a question ;-)]

### Speculation

I highly suspect his thought process went along the lines: I need functions that vary based on type, how do we vary things in CS, we parameterize; type lambdas ho! Even without CH, the polymorphic lambda calculus is extremely natural in several ways.

My guess is that Reynolds was thinking like a logician, trying to figure out why the lambda calculus couldn't accept a straightforward semantics in set theory. Isolating an intelligible typed counterpart to the lambda calculus that also couldn't have such a semantics provides a powerful answer to this question.

### If you look at his 1974 paper...

...the big motivation seems to be understanding what representation independence is. As far as I can tell, the main motivation for introducing polymorphism wasn't primarily to give a new abstraction mechanism (to write, for example, a polymorphic map function, though he was aware of those applications), but rather to figure out under what conditions the client of a module could be said to be independent of the module's representation type. So the client programs got modeled as a function polymorphic in the representation type -- this was an attempt to formalize/use Strachey's intuition that parametrically polymorphic functions could not "look at" values of the variable type.

This focus may explain why languages like CLU and Alphard supported type hiding (i.e., existentials) without otherwise supporting quantification.

John has mentioned to me that he actually thought that System F had a set-theoretic semantics until 1983, when he got so frustrated with his inability to give such a semantics that he tried to prove the opposite, hoping that the failed proof would give some insight. To his great surprise, that proof went through.

### Uninhabited types can be useful

Even in languages where "forall a. a" is completely forbidden (ie, we don't allow infinite loops or exceptions), it can still be used as a parameter to a type constructor. For example, "List (forall a. a)" is inhabited by the empty list.
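In GHC this can be written down directly, though the list type needs impredicative instantiation; a sketch assuming the ImpredicativeTypes extension (reliable since GHC 9.2's Quick Look algorithm):

```haskell
{-# LANGUAGE RankNTypes, ImpredicativeTypes #-}

-- 'forall a. a' has no (non-bottom) inhabitants, yet a list of it does:
-- the empty list.
emptyOnly :: [forall a. a]
emptyOnly = []

main :: IO ()
main = print (null emptyOnly)  -- prints True
```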

### I know

And '(forall a. a) -> (forall a. a)' is also a perfectly normal type for an identity function, although it is hard to find an argument for it.

Not the point, I was just wondering (loudly, I guess) if a system without those, or with other, quantifiers could in the end be more natural. (I am now thinking in the direction of an adapted Reynolds) [Although I have to admit I didn't state that really clearly, again ;-)]

I am not bashing CH, or all the work done in type theory. I am just posing questions.

### And '(forall a. a) ->

And '(forall a. a) -> (forall a. a)' is also a perfectly normal type for an identity function

That doesn't sound right to me. There is no guarantee that the returned type is the same as the accepted type, which is the point of an identity function.

### Yes there is - it's (forall

Yes there is - it's (forall a. a). The parens are significant.

### I don't get it. Where is the

I don't get it. Where is the connection between the first forall and the second forall? You imply the parens somehow establish this connection. To me, the above looks like 'forall a b. a → b'.

Due to scoping of the type variables, that seems a very different type from 'forall a . a → a'

### Cripes.

Cripes.

(forall a. a) -> (forall a. a) is an instance of the type forall a. a -> a (admittedly requiring a touch of impredicativity to actually instantiate it that way, but requiring none to simply declare it that way from the get-go). Just replace 'a' in the latter with '(forall a. a)'.

In response to naasking's claim that '(forall a. a) -> (forall a. a)' is equivalent to 'forall a b. a -> b':

Again, as Philippa said, the parentheses are important, and no one said anything about the two foralls being connected somehow. The '.' notation means: open parentheses and close them as far right as possible, just like it does in 'λx. x x'. (I can even point you at a paper that uses notation like '1 * . 3 + 5'.) So the latter is 'forall a b. (a -> b)'. Draw an AST for each:

Arrow (Forall "a" (Var "a")) (Forall "a" (Var "a"))

vs.

Forall "a" (Forall "b" (Arrow (Var "a") (Var "b")))

There is an equivalence that makes

(forall a. a) -> (forall b. b) -- alpha renaming for clarity

equivalent to

forall b. (forall a. a) -> b

but there is nothing that lets you pull that first forall out. Indeed, in classical logic, the formula is equivalent to

forall b. exists a. a -> b

which is rather different from 'forall b. forall a. a -> b'.

### Cripes?

(forall a. a) -> (forall a. a) is an instance of the type forall a. a -> a

Sure, no problem there.

no one said anything about the two foralls being connected somehow

Except when marco said

'(forall a. a) -> (forall a. a)' is also a perfectly normal type for an identity function

Perhaps I'm stupid enough to deserve a cripes, but how does that follow?

### To answer my own cripes

Okay, I think I get it. I misread what marco was saying. Marco was saying that

id :: forall a . a -> a

has many instantiations id[x], including id[Int], id[String] and

id[q] :: q -> q
where q = (forall a . a)

Not that (forall a . a) -> (forall a . a) is *THE* type of the polymorphic identity function. Sorry for the confusion. Carry on.
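In GHC syntax the point reads as follows; a sketch with invented names, needing RankNTypes for the second signature:

```haskell
{-# LANGUAGE RankNTypes #-}

-- The polymorphic identity function itself:
identity :: a -> a
identity x = x

-- One particular (impredicative-flavoured) instantiation: the identity
-- at q = (forall a. a). A fine type, but not THE type of identity.
idAtQ :: (forall a. a) -> (forall a. a)
idAtQ x = x

main :: IO ()
main = print (identity (42 :: Int))  -- prints 42
```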

### That's how I read marco's

That's how I read marco's statement as well. Thanks for clarifying.

### Argument by Pierce?

I, well, just don't agree. I think a type like that should be rejected.

### I agree

If you really need an empty type, use a data type without constructors.

### Prenex intuitionistic logic is decidable

Degtyarev & Voronkov (1996). Decidability Problems for the Prenex Fragment of Intuitionistic Logic. LICS '96, IEEE Press.

I think marco has some hard work ahead if he wants to explore his idea and faces a high risk of disappointment, but so it is with scholarship, and this result looks like good news.

### Ha!

And which marco is not going to do. ;-P (A) Too much work (B) I'm okay with what I know now.

Knowing that you may computationally derive whether a placement for quantifiers is valid is a nice to have, but there probably isn't a good 'general' quantifier since there (very probably) is no strategy which places 'forall/exists' quantifiers the right way for all cases.

[Oops, .. cut some bull ..]

### Note that our simple

Note that our simple programmer is not that different from a Haskell language designer who shifts the meaning of a forall quantifier depending on where it occurs in a program.

Woah there: if you compare the pre-GADT syntax with the post-GADT where declarations, you'll see that there's no such thing happening - rather, the pre-GADT syntax is a little irregular and ugly because it's an extension of a syntax that wasn't originally intended to include quantifiers. Haskell encodes existentials in terms of universals.

### I just realized that

And cut it away, sorry for that.

### Back to the original question - and let's drop it

I know it [probably] doesn't work out, I tried it years ago. I was just wondering whether other tried similar approaches.

I will stand by this though:

It is unnatural to be able to denote uninhabitable types, and that should be avoided from a programming language perspective.

(Although I do guess this has the same effect on most people as a German tractate (is that English?) from the start of the 20th century I read, which stated that Boolean logic doesn't do justice to human thought and should be forbidden.)

### Too much work

I don't quite agree when you say (later): It is unnatural to be able to denote uninhabitable types, and that should be avoided from a programming language perspective, but some people find uninhabitable types and absurd propositions counterintuitive, and I think there may be some value in trying to figure out a scheme that avoids them.

In fact it is easy: don't have parametric polymorphism! Because of the point about type variables, I think that if you have implicit forall quantification, you necessarily have uninhabitable types featuring somewhere in your PL's ontology. Maybe this conceptual issue is a reason why many programmers don't take to HM type theories. But it would be nice to have something in the way of coherent restrictions, something a bit like the ideas you sketched, that give much of the power of parametricity with very limited possibilities for uninhabitable types.

Needless to say, there is plenty of correspondence between this question, and old questions in logic. It has got me thinking...

### It's just a silly observation. A metaphor and maybe a solution?

As I stated before, there were a number of early 20th century philosophers which objected to the boolean formalization of human thinking.

One of their objections was on the meaning of "->". Normally, people will reject any statement which makes a conclusion out of an absurdity. Like,

"the moon is made out of cream cheese -> everybody is a millionaire"

is a statement normal people just plainly reject as false, where a logician would say that the statement is true.

They tried to find a better meaning for "->" and of course failed, since we now all know that any truth valued logic just has some restrictions.

(Still, if you want to know why it is hard to write a program which passes the Turing test, you just need to read some of those articles. I.e. closed world reasoning, reasoning by abduction, etc. all restrictions of formal logic make human traits difficult to formalize.)

Something else:

1. For logicians types are propositions and terms are proofs.

2. For programmers types are approximations of computations and terms are computations.

Now, I did some language design and somehow was confronted with the same problems. I.e., look at the following (I assume valid) examples

bot: (forall a . a)
bot = bot


id_bot: (forall a . a) -> (forall a. a)
id_bot = \x . x


Now, these are really normal programs from a logicians view. [I really hope I got them right. ;-)]

However, my intuition wants to reject them. The first on the basis that I don't understand a [result] type which will unify with everything; also, 'a' is never bound [to an argument], so I don't even want programmers to be able to denote a type like that. The second I want to reject since it can only take a non-computation and deliver a non-computation.

So, I wondered whether these types are not what we want, and like those logicians, I wondered: did we arrive at the right meaning for the connectives and quantifiers which does justice to the programmers meaning of types?

Now, I tried some approaches, and they didn't work out (like trying to give the forall[/->] a dependent product meaning). I couldn't get it to work on the easiest of types. For example,

map: (a -> b) -> [a] -> [b]


Now, maybe the following is a solution? The problem with that type [if I try non-standard approaches] is that 'b' unifies with everything, whereas we know it is the result of some type 'F a' dependent on 'a'.

Maybe functions like that should really be typed:

map: (a -> F a) -> [a] -> [F a]


So, instead of quantifying, variables may only occur on binding positions, and we make explicit which types are constructed out of other types, and unify on the type level with constructors. [In the end I guess 'F' can be understood as an explicit skolemization function][so you explicitly give a proof of the existence of 'b'][on the type level][I guess it doesn't help since 'b' can already be understood to be some 'F a']
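One hedged reading of this 'F a' idea is already expressible in plain Haskell: make the result constructor an ordinary higher-kinded type variable, so 'b' is visibly 'some F a' (the name 'mapF' is mine):

```haskell
-- 'f' is a type-constructor variable (kind * -> *): the result type of
-- the function argument is forced to be 'f a', i.e. built from 'a'.
mapF :: (a -> f a) -> [a] -> [f a]
mapF g = map g
```

For example, `mapF Just [1, 2]` gives `[Just 1, Just 2]`.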

[My approach is to solve the problem by explicitly supplying types which must approximate computations, therefore I guessed you need something which takes arguments, and returns known results.]

I think that might work... (But I do not know.) [It might not be too different from the 'forall/exists' placement trick I tried before.]

[Your article which states that the validity of prenex forms is decidable might help though. Dunno.] [Like, universally quantify types, reject all uninhabited types, and do the unification trick on the type level?]

[Whatever, seems I am stuck. Probably there isn't anything better.]

[After some tinkering, I think I got it to work. It also generalizes neatly to a somewhat cleaner HM style inferencer for a PL.]

The problem is that I want some kind of restricted polymorphism. I just don't know what kind? (Or maybe I just don't know and this problem was solved ages ago?)

[Btw. I added the logicians section above for completeness. I got from your post that you know of it.]

[And of course I want the most expressive, even 'impredicative', type system one can find ;-)]

[And I want a system where a programmer gives as little type information possible. Parametric polymorphism as I understand it (explicitly supply types) clutters a normal program too much.]

### Only implications

It occurred to me that if the type doesn't contain quantifiers, or only at the top level, then it might be almost equivalent to a 2-SAT instance, in which case a linear algorithm by Tarjan can be adapted to solve it.

### Bottom and unsafeCoerce: The Ultimate Harmful™

A bold statement: the types (1) 'forall a . a' and (2) 'forall a b . a -> b' should not exist!

I completely agree: those types never, ever occur in a well-formed (and sufficiently strictly typed) program. Thus, having them in your language only tempts you to think of ill-formed and insufficiently strictly typed solutions to your problems.

Getting rid of unsafeCoerce is quite easy, getting rid of bottom is hard.

Have a look at
http://www.cs.mdx.ac.uk/staffpages/dat/sblp1.pdf and
http://www.cs.kent.ac.uk/people/staff/dat/miranda/ctfp.pdf, they should give you some inspiration.

### Even in a total language,

Even in a total language, the empty type can be of use.

### My feeling is that your idea should work

The vague algorithm for (general vars. typeExpr) would be: for each type variable in vars, if typeExpr promises to output a value of it but has no way of obtaining one generically, make it an existential quantifier; otherwise make it a universal quantifier.

This turns (general a b. ((a -> b) -> a) -> a) into (forall a b. ((a -> b) -> a) -> a), because the type expression never promises to output a value of type b.

Making the algorithm precise might not be easy, but can anybody provide a type for which you cannot tell if an output value can be obtained generically from the inputs?
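The positive/negative-occurrence machinery mentioned earlier in the thread can at least be computed structurally; a minimal Haskell sketch over arrow types (Ty, occurs, and peirce are all names invented here). Note that reconciling this with the "promises to output" reading above is exactly the subtle part: under strict polarity, even 'b' in Peirce's type gets a positive occurrence — it is the implementor, not the caller, who must produce it:

```haskell
-- A tiny type language: variables and arrow types.
data Ty = Var String | Ty :-> Ty
infixr 5 :->

-- Standard polarity computation: an occurrence flips sign each
-- time it passes to the left of an arrow.
occurs :: Bool -> Ty -> [(String, Bool)]
occurs pos (Var v)   = [(v, pos)]
occurs pos (a :-> b) = occurs (not pos) a ++ occurs pos b

-- Peirce's type from the comment above: ((a -> b) -> a) -> a.
peirce :: Ty
peirce = ((Var "a" :-> Var "b") :-> Var "a") :-> Var "a"
```

Running `occurs True peirce` yields occurrences of 'a' at both polarities and a positive occurrence of 'b' — so "obtainable generically from the inputs" has to mean something finer than raw polarity.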

### I think I already got it to work

You just need to see that '->' has a left-to-right meaning (you can use 'stuff' from the left-hand side in the right-hand side), which is obvious since it denotes a function.

Any variable occurring in the lhs which is not in argument position is understood to be existentially quantified, and introduces, in an extended HM inferencer, a skolem function which states how it might be constructed out of variables in the known context. This still allows uninhabited types, I think.

You can also get rid of the automatic introduction of existential vars by disallowing that and introducing the skolem function in the language in types.

I guess in the HM inferencer I am trying to formalize rules like:

C |- M0 : ti -> f[t0..tn]    C |- M1 : ti
-----------------------------------------  t0..tn in C, ti in C
C |- M0 M1 : f[t0..tn]


Making the left-to-right order explicit.

This gives an HM inferencer which explicitly keeps track of used type variables through the skolem terms. It might actually not be that different from also keeping a set of monomorphic variables; they just now appear as arguments to the skolem function. A type is correct if a type term is derived without unknowns. [It might be that the latter restriction is all one needs in a standard HM inferencer.]

Another silly observation:
I am the one who is trying to keep a language as close to a [pure] intuitionistic view as possible. ML allowed the bottom type as a by-product of a simple HM inferencer, and fixed that by giving it a special meaning (thereby shifting to a non-standard interpretation), and I guess Haskell did the same (by allowing bot).

But it's been a long time since I did any intuitionistic logic, and most type schemes I read/understand from a pure PL perspective, so I might have gotten everything wrong. The proof is in the pudding, and I am still cooking. [And it might be that I end up with something completely equivalent to a normal HM inferencer] [or something completely unusable]

[Again: I want a language without bot and possibly with impredicative types. The latter may be possible by observing which skolem terms may apply in a certain context.]

### Are the only difficulties

Are the only difficulties with higher-rank, impredicative type systems purely due to type inference? Or is there some deeper meaning to the relative scarcity of such type systems that I should be seeing? I haven't found much discussion of this issue, probably because it isn't currently very relevant to typical programming tasks.

### Going higher-rank

There are several problems with higher-rank systems: principal types may fail to exist, proving the equivalence of types becomes a problem, and on certain type terms you may be able to encode intractable problems [in certain type systems].

Now, my best guess is that a lot of these problems are due to the fact that, under the normal semantics for types, they encode higher-order logic problems. So I personally hope that one way of lifting the impredicativity problem is switching to a type system where, in the end, higher-order logic problems cannot be encoded. I.e., by avoiding uninhabitable [among other] types you restrict yourself to a logic where it becomes easier to derive certain properties.

One way of doing that might be taking a hard look at types, and understanding that the type of a function 'f: X->Y' has a certain meaning [under standard semantics]: "f is a function which takes a member of any 'set' X to any other 'set' Y". Later, in the inferencer, you end up proving stuff about the equivalence of 'Y' w.r.t. other derived types. This is all very natural if you give types an intuitionistic higher-order logic meaning. That also has a lot to do with what the 'forall' quantifier and '->' mean/do in a type system. [Note that in intuitionistic logic, you end up proving the existence of Y by providing a function which constructs an element of that type.] ['forall' quantifies over all variables; maybe by getting closer to which variables might be considered 'formal' and which might be considered 'applied' applications, you can lift some restrictions.]

I am trying to design a type system which doesn't have a (direct) intuitionistic logic meaning. (I really regret having posed my question in terms of intuitionistic logic, since that is, in some sense, exactly what I am trying to avoid. [The moment I am doing intuitionistic logic, is probably the moment I took a wrong turn in getting the meaning of a set of symbols right])

If you understand that 'Y' is always some type which may be constructed out of 'X' [not the intuitionistic but the/a PL meaning of ->], and maybe by providing explicit descriptions of 'Y', you might shift to a type system, where in the end, you just end up proving the equivalence of 'known'/inhabited types. [And you end up encoding tractable problems.]

I also must state that in some way I don't care too much about impredicativity. I just want a type system which avoids certain problems of normal HM inferencers: not only the existence of bot (forall a. a), but also other types (forall a b. (a -> b) -> b) which encode not just 'irrelevant' types but problematic ones, since, under normal semantics, they can unify with normal types, leading to inconsistencies in the semantics of the language. And for the latter, you always end up paying a high price.

[Note that I just want a good type system which mimics normal HM/logic based type systems. Almost the same set of symbols, preferably a similar notation, but without some other problems.]

[Oh, an example of a difficult term:

hmpf f = (f 3, f true)


What is the type of f/hmpf?

In my system I would derive (forall a. a -> F a) -> (F int, F bool), which already is better than most other stuff I've read.
] [But I don't like it, since now I am using three symbols, a forall which quantifies wrongly, an 'F' which is implicitly quantified and seems to be placed wrongly, and an '->' which has some unknown meaning w.r.t. some 'a', some 'F', and some 'forall' symbol. Grmbl.]
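For comparison: GHC can check (though not infer) exactly this shape of type today, once the skolem 'F' is written as an ordinary higher-kinded type variable; a small sketch using RankNTypes:

```haskell
{-# LANGUAGE RankNTypes #-}

-- The derived type, with the skolem 'F' as an explicit
-- type-constructor variable f. GHC checks this rank-2 signature,
-- though it would never infer it from the bare definition.
hmpf :: (forall a. a -> f a) -> (f Int, f Bool)
hmpf g = (g 3, g True)

-- One concrete instantiation: f = [], g builds singleton lists.
example :: ([Int], [Bool])
example = hmpf (\x -> [x])
```

So the type itself is unproblematic in System F-omega terms; the open question in this thread is inferring it, and what the quantifier on 'F' should mean.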

### HMF, HML

I pursued it a bit further, but to no avail. I have the feeling I would end up with some of Daan Leijen's work, but, again, I am not sure. (He seems to pose some similar restrictions on types in some of his work.)

Ah well, thought I would give some references for completeness.

Flexible types: robust type inference for first-class polymorphism

HMF: Simple type inference for first-class polymorphism

[I guess I should have mentioned the latest Peyton Jones work also.]

### These were all discussed on

These were all discussed on LtU. I believe this post contains all of the relevant links.

### Skolem Types

As stated, I have a part of a system which derives for the following term

hmpf f = (f 3, f true)


the type

(forall a. a -> F a) -> (F int, F bool)


That is good, since it actually is a better principal type than most other theories derive.

Does anyone know a theory capable of deriving similar types?

### rank-2 ala Hugs?

To me that reads equivalently to what in System Fomega would be

forall F:*->*, a:*.(a -> F a) -> (F int, F bool)

? Not that full inference is possible for System F, let alone Fomega. If so, what you've got may be related to an inference algorithm for the rank-2 fragment of System F + higher kinds, which IIRC was implemented by Hugs.

### Great

Yeah, it looks like that; I just don't really have universal quantification, but some unreadable abstraction involving capital lambdas for modelling abstractions. I want to figure out whether I have an infeasible setting.

I also vaguely remember Morrow's types look the same. Guess I should read up on that too.

### Not rank-2

I gave up on the typing stuff (had a more pressing problem in my compiler). However...

No, it isn't rank-2. The F in rank-2 denotes a type constructor of known arity, whereas the derived F in my typing system denotes a 'constructed' type which may depend on one of the arguments. I.e. a term (F a b) unifies with types like '(a, b->b)', just 'a', or even just 'bool'.

I am not sure it is a significant difference, though.

### That's just an evaluated

That's just an evaluated type constructor, ala System Fw. Type-level lambdas and so forth.

### HOL

You might want to look at the higher-order logics at the basis of Isabelle/HOL and HOL Light, which (AFAIU) handle polymorphism without introducing uninhabited types. Though in any case, I don't think that precludes writing "impossible" types either.