Constant Values as Types

This is probably one of those naive "I should first finish the Pierce book" kind of questions, but here goes anyway:

Are there languages where a constant value, such as 42, can act as its own type?

So putting this in real-world terms, in the Cat language I can write the following function with a type annotation:

define the_answer : ()->(int) 
{ 
  42 
}

However, it seems that I am being too general by saying it returns an int in the type annotation because it always returns 42 (edit: which we could say is a subtype of int). So ... shouldn't the type annotation be:

define the_answer : ()->(42) 
{
  42
}

So the more general question is: what is good / bad about this?
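
For comparison, this is more or less what literal types give you in a language like TypeScript, where a constant such as 42 really is a singleton subtype of the number type. A rough sketch, purely as an illustration of the idea, not of Cat:

// The literal type 42 is a subtype of number containing exactly one value.
function theAnswer(): 42 {
  return 42;        // OK: the expression 42 has type 42
  // return 43;     // would be rejected: 43 is not assignable to type 42
}

const n: number = theAnswer();   // fine, since 42 is a subtype of number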

In a dependently typed language

You could have a type Exists(x:Int)(x == 42), consisting of an integer together with a proof that that integer equals 42. The proof in this case would just be one axiom, the canonical element of the type (42 == 42).
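
As a concrete sketch of that type, here is one way it looks in Lean 4 notation (just an illustration; any dependently typed language with subset/sigma types would do):

-- An Int packaged together with a proof that it equals 42.
abbrev Answer : Type := { x : Int // x = 42 }

-- Its (essentially unique) inhabitant: 42 paired with the reflexivity proof rfl : (42 : Int) = 42.
def theAnswer : Answer := ⟨42, rfl⟩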

Introducing subtyping to this sort of type system seems tricky to do without losing other desirable properties - or so I concluded from a quick read of 'Type theory and functional programming' - I'm sure one of the regulars here knows better :)

More thoughts -

I suspect that when you ask 'what is good / bad about this?', you're effectively asking 'what's good / bad about dependent types', and their pros and cons have, I think, been well-discussed in other threads here.

I say this because I suspect a full-on Turing-complete theorem-proving type system with some kind of dependent types would be necessary to handle this sort of thing. But I may be wrong. Perhaps a weaker type system could still infer these sorts of types in very limited circumstances?

Dependent Types

I spent the morning reading up on dependent types, thanks for having brought this to my attention.

So AFAIU C++ templates act as a limited form of dependent types within the context of a weak type system. For example, the work of Todd Veldhuizen ( http://osl.iu.edu/~tveldhui/papers/Template-Metaprograms/meta-art.html ) has apparently made good use of them.

singleton types

Dylan had these (and so I expect CLOS has them too -- I'm not a Lisper so I don't know), so you could define factorial with the following sort of multimethod:

define method factorial( n :: <integer> ) ...
define method factorial( n :: singleton(0) ) ...

There is even a bit of sugar for it in method definitions:
define method factorial( n == 0 ) ...

A dynamically-typed language obviously doesn't have to worry about any difficulties in type-checking this feature. I suspect (with no evidence whatsoever) that singleton types wouldn't cause the kind of undecidable-type-checking headache that more general kinds of dependent types do.
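
For what it's worth, literal types in overload signatures give a rough statically checked analogue of this. A sketch in TypeScript, purely as an illustration - the "dispatch" below is an ordinary runtime test rather than a multimethod:

function factorial(n: 0): 1;               // the singleton(0) case
function factorial(n: number): number;     // the general case
function factorial(n: number): number {
  return n === 0 ? 1 : n * factorial(n - 1);
}

const one = factorial(0);   // statically known to have type 1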

I think it would be undecidable

For example, to type-check a function

f : Int -> Singleton(0)

would require proving that f returns zero for every single input. For general f, this would be undecidable - see Rice's theorem.

For example, say we write the function f above so that it returns 1 if its argument is an even number greater than two that cannot be written as a sum of two primes, and 0 otherwise. If the type-checker accepts it, it looks like we've settled Goldbach's conjecture!

Really I'm wondering what the proposed use-case is for types like these. What kind of properties of programs would you hope to be able to check at compile-time with these types, that can't be checked using simpler type systems? I can only see them being inferred for trivially constant functions, the sort of thing which I expect pure functional compilers can optimize away already without needing to involve the type system. But quite likely I'm missing something.

Overambitious Type Systems

For example, say we write the function f above so that it returns 1 if its argument is an even number greater than two that cannot be written as a sum of two primes, and 0 otherwise. If the type-checker accepts it, it looks like we've settled Goldbach's conjecture!

I would approach this problem by having the type-checker reject any program which even suggests the possibility of returning 1.
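
In other words, a sound-but-incomplete checker: it rejects the definition structurally, without trying to prove the offending branch unreachable. A sketch of what that looks like with literal types (TypeScript, purely for illustration; isGoldbachCounterexample is a hypothetical helper):

declare function isGoldbachCounterexample(n: number): boolean;  // hypothetical

function f(n: number): 0 {
  // A conservative checker rejects this definition outright: the branch that
  // yields 1 means the expression is not assignable to the declared type 0.
  // No attempt is made to decide whether that branch can ever be taken.
  return isGoldbachCounterexample(n) ? 1 : 0;   // type error, by design
}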

I can only see them being inferred for trivially constant functions, the sort of thing which I expect pure functional compilers can optimize away already without needing to involve the type system.

If I can formalize this within the type system, then I can write more interesting metaprograms.

Of course you can't always infer that a function returns a singleton value, but in those cases you can simply infer that it returns an int. Inference is not the goal, however; I simply view it as a tool.

Right

Such a type system is generally undecidable, but it can typecheck all programs that typecheck in simpler decidable type systems, plus lots of additional programs written using more powerful and expressive constructs (but not all of them!)

I'm certainly an advocate of gaining more expressive type systems at the expense of decidability.

After all, what you really care about is the following: If your compiler accepts a program successfully, then the program is type-safe. If it's not type-safe, then you might get an error message, or a stack overflow, or a core dump, or whatever -- but in no case will you see the compiler accept a program that's not type-safe.

I see what you're saying

Yes, adding extra expressiveness sounds like a good idea, even if it's not decidable in the general case. I guess my issue is this (in a pure functional language, at least):

In what situation would it be useful to declare a singleton type in a function signature? If I already know when writing the program which exact value the function argument must always be, or which value the function will always return, then why bother having it as an argument or return value in the first place? It would make the type system more expressive; I just can't immediately see where that extra expressiveness would come in useful.

On the other hand, if the system can infer these types in some situations, then that could help the compiler optimize away constant functions. But like I said, I don't see the need to get the type system involved here, unless there's a practical use for the extra expressiveness. Which perhaps there is - I'm just curious if you had anything particular in mind.

Now I think about it

I can see these being useful in the context of an even-richer type system. Perhaps that's where you're coming from.

Singleton types

Singleton types are the simplest answer, so you could have the typing 42:singleton(42).

You might also want to have an uninhabited type. Call it "nothing".

You might also want to have the type of all singleton subtypes of integer. If you have the latter type, you can call it option(t) or ?t, and then you have the typings singleton(32):?int and nothing:?int. If you support types as first-class values in a programming language, then option types, singleton types, and the uninhabited type have a serendipitous relationship: you can use them as types, as values, and in combination for a proofs-as-programs style.
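
Parts of this map onto familiar constructs; a loose sketch (TypeScript, for illustration only - it cannot express the types-as-first-class-values part, so ?int is only approximated at the value level):

type FortyTwo = 42;      // a singleton subtype of number, i.e. 42 : singleton(42)
type Nothing  = never;   // the uninhabited type: no value inhabits it

// ?int itself has no direct counterpart here, since types are not first-class
// values; the closest ordinary analogue is an option over integers:
type OptionInt = { tag: "some"; value: number } | { tag: "none" };
const some32: OptionInt = { tag: "some", value: 32 };
const none:   OptionInt = { tag: "none" };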

This is the approach I'm taking in my programming language R&D work.

Thanks Tim, this makes a lot

Thanks Tim, I like this approach a lot. It helps me frame my ideas with more standard nomenclature.

In general what you describe is very much like what I was planning to do with Cat. I am designing the Cat type system to be dynamic by default (everything is a variant), with types as first-class values. If the language is compiled, then the compiler infers where values are constants (or a small set of possibilities) and precomputes as much code as possible.

This precomputation phase effectively performs static type-checking by pre-evaluating expressions involving types at compile time.

This way, the type system is as expressive as the programming language itself.
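
As a small illustration of the "infer where values are constants" step, this is the kind of information a literal-typed checker already tracks (TypeScript again, only as a stand-in for whatever Cat's precomputation phase ends up doing):

const theAnswer = () => 42 as const;          // inferred type: () => 42
type Answer = ReturnType<typeof theAnswer>;   // the type 42, not merely number

// A call site whose argument is a known constant can then be checked
// (and, in principle, precomputed) against that exact value:
function expectsTheAnswer(x: 42): void {}
expectsTheAnswer(theAnswer());                // OK: the result is known to be 42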