Minimally typed programs?

I wonder whether there is work on the idea of minimally typing a program. Assume that I have a program and a property I wish it to satisfy. Is there a way to derive the least amount of typing to apply to the program such that this property is satisfied?


One problem with that....

What you're asking is: what is "the" smallest set M of type information needed to prove some particular property P?

One problem you have with that idea is that for any single specific property P, there are probably dozens of different ways to "minimally type" a program to achieve a proof of it.

To see what I mean, consider how many different proofs of the same fact are possible, each taking a different subset of the full set of facts as axioms. For example, for some right triangle ABC whose hypotenuse is BC, you can prove the length of BC given any of the following sets of information:

1) The coordinates of B and C.
2) The angle at B and the length of AB.
3) The angle at C and the length of AB.
4) The angle at B and the length of CA.
5) The angle at C and the length of CA.
6) The length of AB and the length of CA.

So "the" minimal set of facts that allows you to derive the length of BC has two elements.... But you can't pick one of these as being more "minimal" than the others.
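To spot-check a few of these derivations numerically (the concrete 3-4-5 triangle and helper names below are mine), here is a sketch in TypeScript:

```typescript
// The 3-4-5 right triangle with the right angle at A = (0,0),
// B = (3,0), C = (0,4); the hypotenuse BC has length 5.

// (1) From the coordinates of B and C: the distance formula.
const fromCoords = (bx: number, by: number, cx: number, cy: number): number =>
  Math.hypot(cx - bx, cy - by);

// (2) From the angle at B and the length of AB: cos(B) = AB / BC.
const fromAngleB = (angleB: number, ab: number): number => ab / Math.cos(angleB);

// (6) From the two legs AB and CA: the Pythagorean theorem.
const fromLegs = (ab: number, ca: number): number => Math.hypot(ab, ca);

const angleB = Math.acos(3 / 5); // the angle at B in this particular triangle

console.log(fromCoords(3, 0, 0, 4)); // ≈ 5
console.log(fromAngleB(angleB, 3));  // ≈ 5
console.log(fromLegs(3, 4));         // ≈ 5
```

Each axiom set is genuinely minimal for its own derivation, yet no one of them is more "minimal" than the others.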

It's going to be the same way with typing programs. Whatever the smallest amount of type information needed to prove some property is, there are likely to be dozens of different combinations of information that you could use to do it.


Kolmogorov and Curry-Howard

It seems like one should be able to leverage the Curry-Howard Correspondence and the uncomputability of Kolmogorov complexity to show that you can not in general verify the minimality of a given typing for a program. Exercise left to the reader.

You could, perhaps.

I think you could do something like that... but the exercise you've left to the reader is an interesting research problem in itself, and would make an interesting paper to publish. Also, I don't expect that it would help the Original Poster.

Something else that interests me is that type specification spaces for a given program (i.e., what type information can be specified in a program) have many local minima with respect to any given property -- that is, sets of type information given which you can prove the property, having no proper subsets that support such proof. And of course some of these local minima will be sets of lower cardinality (better "minimality") than others.

I have a mathematical intuition that the distribution of cardinalities of distinct "local minima sets" in type specification spaces has its strongest mode at the lower bound -- i.e., that in any nontrivial program's type specification space, considered with respect to any particular property, more such local minima sets have the lowest cardinality than have any other cardinality. But right now I can think of no valid approach to proving it. Rigorously stating the proposition (and providing a rigorous definition of "nontrivial program" in this context) would be the first challenging step.



It's more a topic of formal semantics and verification than of type theory, but Dijkstra's work on weakest preconditions attacks this directly.

Connection to types is non-trivial

That was my first thought as well, to use weakest precondition. But that would probably require contract types or another (similarly "uncommon") type system.

What is type

I have been trying to figure out what I want to use types for in my language. Reading the book Programming Language Pragmatics (ISBN-10: 0123745144) has given me some insights into how other languages view types. The common denominator seems to be offering some safety belts to the programmer. Whether a type system should also improve code documentation and similar things seems to differ from language to language.

One idea I've had is to use a foundation of fundamental types, where I define fundamental as "totally incompatible", or at least not trivially compatible... (Funny how all those exceptions keep creeping up on you when you try to design a clean, consistent language.) Examples of such fundamental types would be functions, types (if generic type recursion is allowed), values, characters, and labels (not the "destination for goto" kind of label, but rather an entry in an enumeration, where enumerations are never assigned specific values, only an order).

On top of those types, I would put restrictions (e.g. valid values), attributes (e.g. unit name, whether values can be ordered) and ways of combining them. At this level, types would be compatible as long as casts are safe, i.e. no information is lost.

Then comes the interesting question: should two types be compatible merely because it is totally safe to convert between them? Suppose I declare two structures, each containing a name, an address and an age. One of them is called school, and the other student. Are they compatible? This problem actually comes from the above-mentioned book.

I thought maybe "unit name" could solve this. My intention with "unit name" is to be able to say "even though their types might seem compatible, the conversion is not trivial anyway". The idea comes from physics, where a value can have the unit "newton" or "gram", indicating that while the values by themselves are interchangeable, values with different units are not. I.e. 5 is not compatible with 5 if the first 5 indicates a force in newtons while the second indicates a weight in grams.
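For what it's worth, both halves of this problem are directly visible in a structurally typed language such as TypeScript, and the "unit name" idea can be approximated there with a phantom "brand" field (the type names below are mine):

```typescript
// Two structures with the same fields are interchangeable under
// structural typing -- which is exactly the school/student problem:
type School  = { name: string; address: string; age: number };
type Student = { name: string; address: string; age: number };

const mit: School = { name: "MIT", address: "Cambridge", age: 160 };
const who: Student = mit; // accepted: the types are structurally identical

// A "unit name" can be simulated with a brand: a phantom tag that makes
// otherwise-identical types nominally incompatible.
type Newton = number & { readonly __unit: "newton" };
type Gram   = number & { readonly __unit: "gram" };

const force  = 5 as Newton;
const weight = 5 as Gram;
// const oops: Newton = weight; // rejected: the unit names differ

console.log(who.name, Number(force) === Number(weight)); // MIT true
```

The brand exists only at compile time; at runtime both values are the plain number 5, so any conversion between units still has to be written as an explicit function.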

I'm still struggling with what I want the types to be there for...

But now I guess I've strayed away from the original topic just for the chance to explain my current headache, but maybe it gives some interesting things for thought anyway.

Type is intention

I think type systems generally give intentions to programs so that program-manipulating processes have something tangible to preserve or pursue. Consider what kinds of program-manipulating processes you expect to support. As long as you end up finding all the solutions you need, who cares if there's a type system?

As for me, I often worry about how a community of programmers could collaborate to develop interdependent modules, but if they encode their design intentions as explicit types at the module edges, it's possible to verify that the modules conform to each other's future-proof interfaces. And I'd eagerly use Curry-Howard to express mathematical proofs through a type system... if only I could choose a favorite logical framework.

Other features people look for in type systems include implementation inference (e.g. Haskell's type classes) and, of course, safety belts. For most of my goals these would just be luxuries, but the tables turn when I think about building drama managers as program-generating AIs.

Using interfaces to check the definitions of types.

Type definitions have some structure that I think we need to pay attention to. It's not enough for something to have the same "fields" or "methods" by name; it's also needful that those fields or methods encode "the same", or at least "compatible", things.

An "interface", or set of defined-as-compatible methods, needs to be associated with some codified notion of what that interface means.

So if you're implementing the addition interface on some type, then the interface code needs to make sure that whatever you're defining actually is commutative, associative, distributive, and has an identity element. Overloading "+" with some definition that fails the tests defined in the interface is, and should be treated as, an error.

This means that defining an interface means writing code that checks to ensure that the methods actually do have whatever behavior that interface purports to encapsulate, and implementing an interface means writing code that passes those checks.

Likewise, defining a set of compatible types means defining what interfaces such types must implement. If a university and a student both implement the "name" and "address" interfaces, then entities of either type are valid arguments to any routine whose typing requires only the "name" and "address" interfaces.
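A minimal sketch of what such checking interface code could look like, in TypeScript (the names and the sample-based law checks are mine; a real system might demand proofs rather than tests):

```typescript
// An "addition" interface implementation is only accepted if its operation
// passes commutativity, associativity, and identity checks on sample values.
type Addition<T> = { add: (a: T, b: T) => T; zero: T };

function checkAddition<T>(
  impl: Addition<T>,
  samples: T[],
  eq: (a: T, b: T) => boolean
): Addition<T> {
  for (const a of samples)
    for (const b of samples) {
      if (!eq(impl.add(a, b), impl.add(b, a))) throw new Error("not commutative");
      for (const c of samples)
        if (!eq(impl.add(impl.add(a, b), c), impl.add(a, impl.add(b, c))))
          throw new Error("not associative");
      if (!eq(impl.add(a, impl.zero), a)) throw new Error("no identity element");
    }
  return impl;
}

// Numeric addition passes the checks...
const numAdd = checkAddition(
  { add: (a: number, b: number) => a + b, zero: 0 },
  [0, 1, 2, 7],
  (a, b) => a === b
);

// ...but overloading "+" as subtraction is rejected as an error:
let rejected = false;
try {
  checkAddition({ add: (a: number, b: number) => a - b, zero: 0 }, [0, 1, 2], (a, b) => a === b);
} catch {
  rejected = true;
}
console.log(numAdd.zero, rejected); // 0 true
```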

Now suppose someone creates an "enroll" interface and adds it to the "student" type. The interface code checks that the methods for recording enrollments won't cause schedule conflicts. The methods add the student to a particular session of a particular subject taught by a particular teacher.

Later, someone else tries to implement an "enroll" method for a university. But it should get rejected as an error because its methods will cheerfully allow multiple classes to be taught in different locations at the same time. The problem is that the new method has different semantics with respect to the university from the semantics already established for enrolling with respect to the student. IOW, the new method is incompatible with the interface, and is trying to capture a different idea. So he needs to create a new interface such as AddToClassRollCall or something and write to that instead (actually the 'enroll' method requires one argument having the first interface and one having the second).

The idea here is that if the interface code makes basic checks on the validity or properties of the methods defined, then methods that mean different things but accidentally have the same names should not result in the bug-causing illusion that the same interface is being implemented.

An alternative formulation of the idea is that a type is defined as a set of interfaces (identified by semantics) not as a set of fields and methods (identified by names).


I like the idea that values

I like the idea that values are just a variant of expressions, and that program code consists only of expressions. This helps make language constructs compositional.

Since values are expressions, if values have a type then expressions have a type. This could be regarded as the type of the expression's return value, which would work for simple values as well as for more complex expressions.

Since I've done quite a good deal of hardware related programming, I'd like hardware stuff to be supported by the language as well. This leads me to wanting to describe things like alignment requirements, bit representation (UTF8, IEEE double, etc), valid physical/virtual address range, valid access sizes, etc.

The question is then how that stuff maps to expressions. Could an expression describe hardware I/O? Would the type then be applicable to the expression, even if it is a complex one? I'd think that if I could make the answer yes, I'd get a very useful language for hardware hacking. But the question is whether the type then really describes what is returned by the expression. My guess is that the hardware-related descriptions are more useful if they describe the behavior of the expression rather than its return value. Not sure about this though... Maybe it needs both.

With this view on types, type is not just intention. Actually, intention is just a small part of type. Maybe this is an indication that I put too much into types?

Just a suggestion

With this view on types, type is not just intention. Actually, intention is just a small part of type. Maybe this is an indication that I put too much into types?

Oops. You were already trying to decide from a blank slate, so I just meant to offer "type is intention" as a potential tiebreaker, something that might stifle your creativity enough to put your thoughts in line with existing technology. (And yet I'm inexperienced here myself....)

The definition I gave for "type" is very informal and breaks down in corner cases: In type inference, the expression is held constant while the type information is manipulated. That makes the expression "something tangible to preserve." But does it help to call it a type...?

My guess is that the hardware-related descriptions are more useful if they describe the behavior of the expression rather than its return value. Not sure about this though... Maybe it needs both.

If you look up "effect typing," you might find it corresponds pretty closely with some of your type system thoughts. Unfortunately, if you're trying to specify types that are polymorphic in bit width or which need explicit deallocation, you might find yourself wanting dependent types or linear types on top of that. This is a rabbit hole I'm trying to navigate right now, though without such a focus on binary encoding.
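As a taste of what effect typing is after (this is only a runtime simulation sketched in TypeScript; real effect systems track this statically, and all names here are mine):

```typescript
// An expression's type records its behavior (its effects), not just the
// type of its return value.
type Eff<E extends string, A> = { readonly effects: readonly E[]; readonly run: () => A };

// A stand-in for a hardware bus read: the "io.read" effect shows up in the type.
const readPort = (_addr: number): Eff<"io.read", number> =>
  ({ effects: ["io.read"], run: () => 0xff });

// Sequencing preserves the effect set: masking the value is pure, so the
// combined expression still carries exactly the read effect.
const map = <E extends string, A, B>(e: Eff<E, A>, f: (a: A) => B): Eff<E, B> =>
  ({ effects: e.effects, run: () => f(e.run()) });

const masked = map(readPort(0x40), (v) => v & 0x0f);
console.log(masked.effects, masked.run()); // [ 'io.read' ] 15
```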

Rust appears to be a close-to-the-metal language which already uses linear typing techniques to reason about stack-allocated values. It could be good to look at or build from.

Alternately, what you describe could be thought of as staged programming, where you have a program which generates another program. The hardware-related details might be part of the generator's result value or its result type, depending on when you need to know them by.

After googling on effect

After googling effect types for a couple of minutes, it seems like this is what I had included in my vision of types. I was thinking of types performing:
- Indication of intention, as others have mentioned before, used for enforcing sanity checks
- Indication whether any conversion of the stored value is needed
- Provide optimization hints

Effect types might be included in all this. Whether it is a good idea or not is another story...

Seems like types look simple at the abstract level, but get messy when you try to be practical about them.

Thanks for the hint on effect typing! Quite interesting reading.

Constraints and restrictions, as opposed to type.

I would say that any sort of hardware operation is necessarily an implementation of a more general operation constrained by the restrictions of the hardware. For example, I/O operations of a particular width are necessarily constrained implementations of a more general I/O operation (of any width).

The hardware restriction of bus width available results in a constraint on the implementation (a particular routine is only a valid implementation of I/O with respect to I/O operations of a particular bit size).

So in this case I/O is the "intent" part of the type, and a particular case of I/O compatible with some hardware is just a constrained implementation of that intent.

I think constraints and restrictions are ideas separate from but closely related to type. Historically we've conflated them because there really aren't any truly unconstrained implementations of the very basic types available and because we've been able to reason usefully about the restricted implementations of our types. But they aren't the same idea. And making much more progress in type theory, I think, will depend to some extent on breaking them apart and reasoning separately about them.

Constraints and restrictions as part of type?

The thing is that I see types as being constraints and restrictions used to describe an intent.

Imagine the following type system:
- Value - Integer, real or complex values
- Integer - A discrete value
- Real - Any value that is not complex
- Complex - Any value having a real part and an imaginary part

It is easy to see that "3" could be a value, an integer, a real or a complex. This means that casting this value to any of the other types is quite safe. "3.5" can be cast to real or complex safely, and with rounding can also be cast to integer. And so on.

The way I see it, integer, real and complex are more restricted versions of value; real is a more restricted version of complex; integer is a more restricted version of real and complex; and so on.

You could add more things into the mix, like "unsigned", valid value ranges, etc., and the fact that these are restrictions becomes even more apparent. All of this is typically seen as being part of type.
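This "type = restriction on a base value" view can be sketched directly (the predicates below are my own, and complex values are omitted since the base here is a plain number):

```typescript
// Each named type is just a predicate over a common value domain, and a
// cast is safe exactly when the target predicate accepts the value.
type Restriction = (v: number) => boolean;

const value: Restriction    = () => true;                 // anything goes
const real: Restriction     = (v) => Number.isFinite(v);  // no imaginary part modeled here
const integer: Restriction  = (v) => Number.isInteger(v);
const unsigned: Restriction = (v) => Number.isInteger(v) && v >= 0;

const safeCast = (v: number, target: Restriction): number => {
  if (!target(v)) throw new Error("unsafe cast: restriction violated");
  return v;
};

console.log(safeCast(3, integer), safeCast(3, unsigned)); // 3 3
console.log(safeCast(3.5, real));                          // 3.5
console.log(value(-7.25));                                 // true
let needsRounding = false;
try { safeCast(3.5, integer); } catch { needsRounding = true; }
console.log(needsRounding); // true -- 3.5 must be rounded before becoming an integer
```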

Given the above example, how do you separate constraints and restrictions from types?

Quite simply, actually.

In this example, the type of each thing you've named is "number." And I mean numbers as mathematicians know them, not as we usually see them in computer science: mostly irrational and having exact values without roundoffs or overflows.

"real" is a word we use to tell a system to use a set of defined operations on numbers which are restricted to the case of numbers having no imaginary part. "integer" likewise is a word we use to tell a system to use a set of operations on numbers restricted to the case of numbers having no imaginary or fractional part.

3.5 and 3, like all values, can be "cast" to any type whose restrictions those values satisfy.

The operations on these types are both operations on a restricted subset of numeric values, and operations constrained by the hardware limitations of the implementation -- producing accurate results only in a limited range in the case of integers, or producing results both of only limited accuracy and in a limited range in the case of reals.

The conventional view is to see "types" as being defined in terms of these restrictions and constraints. But usually we do so without thinking closely *about* these restrictions and constraints, and in particular about how each set of operations differs from operations on the base type of "numbers".

But I'm working on inter-language code translation, and a problem that I keep running into is that the restrictions and constraints on related or even same-named types are subtly different from language to language. This is particularly true of the semantics of expressions with variables of several different numeric types.

In order to accurately translate code without preserving (or introducing) unnecessary restrictions, I have to know exactly *which* restrictions and constraints are important to the program and which were just incidental and can be dropped. That's a much more fine-grained question than just knowing which set of restricted operations were chosen to implement it.

So, I wind up looking at "numbers" as the base from which all these different numeric types are derived, and systematically documenting each place where there is a difference in semantics between numbers and numbers-as-represented-by-type, in source and target languages, before translation starts. Having "numbers" as the base type just simplifies this process. It makes for an easier methodology than basing the comparison on some arbitrarily-chosen set of language types. Then I have to actually instrument and run the code to figure out which differences (in the source types) are genuinely important, as opposed to which would merely obfuscate and impede the resulting translation, and which differences (in the target types) are tolerable and consistent with the use of them made by the program.

For example, "unsigned" isn't available in lots of languages, and in translating it becomes very important to know why "unsigned" was used:
- merely because there is no need to consider negative values (its constraints are unviolated);
- because there *is* a need to consider numbers higher than the MAXINT of the corresponding signed type (to avoid a constraint on another type, which may or may not exist in the target language); or
- because the programmer is deliberately invoking operations that deviate from the behavior of operations on numbers (relying on the exact values of the results of underflow at zero or overflow at UMAXINT, or on the exact set of promotion/coercion/cast rules that apply in the source language when "unsigned" is added to or multiplied by "real").

One word. Three different possible intents, or more if you count combinations or subdivide the last into parts. In order to accurately translate the code respecting the programmer's intent, it becomes necessary to instrument and profile the code logging the value ranges used and cases of overflow and underflow (and cases where related types in the source and target languages would overflow or underflow or invoke different expression-evaluation rules producing different results, and whether the results produced at such points are actually used by the program).

The objective is to avoid a situation in which translated code is obfuscated, made less maintainable, and made less efficient by needlessly simulating restrictions on operations from the source system, while *not* breaking the translated code by introducing restrictions from the target system that will cause bugs.

So, yes, I find myself thinking pretty hard about the restrictions and constraints, and specifically about how each set of constrained operations differs from the ideal of operations on "numbers."

So I find myself writing code that documents variables, and sometimes individual operations, using "base" types like "number", plus positive statements of constraints pertaining to that particular variable or operation (min/max required subranges, a predicate / assertion that each value must pass, how much accuracy is required, whether oflow/uflow semantics are required, what they are if so, and above/below which specific values are they required if so, etc) rather than the shorthand of just type names.
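Such a positive-constraint record might look like this (the field names are my invention, sketching the methodology rather than any real tool):

```typescript
// A variable carries a base type plus positive statements of the
// constraints that actually matter to the program.
type NumberSpec = {
  base: "number";
  min?: number;                      // required subrange, if any
  max?: number;
  mustHold?: (v: number) => boolean; // per-value assertion
  overflowSemanticsRequired?: "wrap" | "saturate" | "none";
};

const loopCounter: NumberSpec = {
  base: "number",
  min: 0,
  max: 65535,
  mustHold: Number.isInteger,
  overflowSemanticsRequired: "none", // incidental: source "unsigned" wraparound is never relied on
};

const conforms = (spec: NumberSpec, v: number): boolean =>
  (spec.min === undefined || v >= spec.min) &&
  (spec.max === undefined || v <= spec.max) &&
  (spec.mustHold === undefined || spec.mustHold(v));

console.log(conforms(loopCounter, 42), conforms(loopCounter, -1)); // true false
```

The point of the record is that the translator can drop any target-language restriction that doesn't violate the spec, instead of faithfully simulating a source-language type.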

I agree that for my example,

For my example, having "number" as the basic foundation for all values would make it easier to view the type as being something that describes constraints and restrictions.

But if I may throw another thing into the mix: characters. Some languages, e.g. C, treat characters as values as well. This means you view the character both as a bit value and as whatever the character actually is. When using Unicode, things start to get complicated if you view them this way.

So the question is: should type include what the bit-wise representation looks like? Which code page/Unicode encoding/whatever was intended? (Maybe 7-bit ASCII?) This can be important sometimes, so it needs to be described somewhere. Is type the place? In this case it is not necessarily a constraint or restriction. E.g. UTF-8, UTF-16 and UTF-32 are coded differently, but in practice can be safely converted to each other without losing information. And they actually point to the same character map in the end.

The type of characters and strings is 'text'.

The 'intention' part of types has no representation semantics whatsoever unless the programmer is type punning in some way. If you treat the 'text' type as an abstraction the same way the 'number' type is an abstraction, then you don't and shouldn't need to know how it's represented in hardware.

Representations of text are constrained by the hardware and schemes for representing it on hardware (such as null-terminated strings or chained buffers or unicode encodings) and restricted by the character repertoire (unicode, ebcdic, or something else).

Type punning is when program semantics depends on the representations of entities. Code that type puns is abstraction-breaking, and therefore never stable in the long run. As a rather horrifying example, in a language that had only ANSI (ie, ASCII plus a codepage) text I/O, someone implemented a JPG to GIF translator that worked by reading and writing "characters." A later version of the language switched to unicode text, and of course the JPG-to-GIF translator exploded in a puff of logic. The need to fix this code, in environments where the particular encodings in use were not known, led to the long-overdue development of byte-oriented I/O primitives for dealing with bitstrings.

So, anyway; as a pure type, text is an abstraction. When and as a particular program's semantics are different depending on the particular representation of text being used (as it almost always is in C), it's breaking that abstraction. And yes, I treat it the same way in translation; documenting all the differences and constraints and cast semantics and expression semantics etc that make the 'text' in use in a given instance different from 'text' as a pure abstraction.

The idea that you can add text to a number is ridiculous. It doesn't mean anything. Nevertheless, in languages that treat characters (text that happens to be of a uniform short length) as having numeric values, it is possible and frequently meaningful. Conversely, if you can add text to a number, then the idea that you can't add text of different lengths to numbers is inconsistent. If text has a numeric meaning at one length, then why (thinking in terms of logic rather than in terms of representation) doesn't it have numeric meaning at other lengths?

IMO, the idea that your text is null-terminated or character-counted or represented as a chain of buffers or represented as a tree should never rise to the surface of anyone's mind except the language implementor's. The idea of character repertoires and encodings should never bear attention except in choosing how to do I/O at the system's boundaries.

Which basically amounts to "don't call something text when you want to do binary operations on it." If you want binary operations, you want bit-strings instead -- possibly with additional functions that map bit-strings to text values or map text values to bit-strings, but if so those additional functions should be explicit.

The place for those text-to-bitstring semantics, in fact, is right there at the edge of the language. In interfacing with the outside world, you have to convert text to bits and you have to convert bits to text. So when you open an I/O port, you should be picking routines to do those conversions. Particular conversion routines, of course, are where those restrictions and constraints should properly be expressed.
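A sketch of that boundary, with a toy UTF-8 codec standing in for real conversion routines (the Port shape and names are mine; this codec only handles code points below U+0800):

```typescript
// Text inside the program is an abstraction; explicit conversion routines
// are chosen when a "port" is opened at the edge of the language.
type Port = {
  encode: (text: string) => number[]; // bytes
  decode: (bytes: number[]) => string;
};

const utf8Port: Port = {
  encode: (text) => {
    const out: number[] = [];
    for (const ch of text) {
      const cp = ch.codePointAt(0)!;
      if (cp < 0x80) out.push(cp);                                 // one-byte form
      else out.push(0xc0 | (cp >> 6), 0x80 | (cp & 0x3f));          // two-byte form
    }
    return out;
  },
  decode: (bytes) => {
    let s = "";
    for (let i = 0; i < bytes.length; ) {
      const b = bytes[i];
      if (b < 0x80) { s += String.fromCodePoint(b); i += 1; }
      else { s += String.fromCodePoint(((b & 0x1f) << 6) | (bytes[i + 1] & 0x3f)); i += 2; }
    }
    return s;
  },
};

// The program manipulates text; only the port knows it was UTF-8.
const wire = utf8Port.encode("naïve");
console.log(wire.length);           // 6 -- 'ï' takes two bytes in UTF-8
console.log(utf8Port.decode(wire)); // naïve
```

Swapping in a UTF-16 or EBCDIC port would change the bytes on the wire without touching any code that works on text.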


I realize how much C has

I realize how much C has damaged my brain by mixing abstractions and implementations in a way that makes it often difficult to know which is which... Can't say I'm surprised though; I've felt it for a long time.

For the language I intend to design, I'm thinking in terms of lists, like most functional languages do, in the abstract sense. This means that a list might be fixed in length, might be multi-dimensional, its elements might be of different types, and so on. It should make it interesting to implement.

Thinking in those terms, there isn't actually any difference between character and text. A character would just be a text list of length one, with the size fixed.

Having a unit concept would actually tie quite well into types. The unit (as I define it in this context) is merely a name attached to the type, making it unique and incompatible. I.e. if I have two integer types, one with the unit Celsius and one with the unit Fahrenheit, they would actually be incompatible. The fact that they can be converted to each other is something to be implemented in a function.
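In TypeScript terms (the brand encoding is mine), that unit concept could look like:

```typescript
// Two number types differing only in unit name are incompatible;
// conversion between them is an ordinary, explicit function.
type Celsius    = number & { readonly __unit: "celsius" };
type Fahrenheit = number & { readonly __unit: "fahrenheit" };

const celsius = (n: number) => n as Celsius;
const toFahrenheit = (c: Celsius): Fahrenheit => (c * 9 / 5 + 32) as Fahrenheit;

const boiling = celsius(100);
// const wrong: Fahrenheit = boiling; // rejected: the unit names differ

console.log(Number(toFahrenheit(boiling))); // 212
```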

As for hardware-related behaviors, they are probably best kept at the edges of the language: either special language semantics or a way to call foreign functions. That way I can keep my own language clean, since those things are only needed on very specific occasions.

Strong type punning

The 'intention' part of types has no representation semantics whatsoever unless the programmer is type punning in some way. [...] Code that type puns is abstraction-breaking, and therefore never stable in the long run.

If the use of type punning is to provide performance guarantees (and I don't know what else it would be for), it can still be provided in a strongly typed way. It's not unnecessarily cumbersome to have a type dedicated to "Unicode 6.2 normalized UTF-8 text" if it has a useful selection of efficient operations, some of which relate it back to "byte string" and "text."

Progressive types

You might want to take a look at the Onward! 2012 paper on progressive types. It doesn't answer your specific question, but a somewhat similar research question: suppose I have a program and I want to prove, via a type system, that no runtime type errors in a set \Omega will occur; a progressive typechecker allows you to choose \Omega and ignore (at compile time) errors outside of it.


As modern type systems grow ever-richer, it can become increasingly onerous for programmers to satisfy them. However, some programs may not require the full power of the type system, while others may wish to obtain these rich guarantees incrementally. In particular, programmers may be willing to exploit the safety checks of the underlying run-time system as a substitute for some static guarantees. Progressive types give programmers this freedom, thus creating a gentler and more flexible environment for using powerful type checkers. In this paper we discuss the idea, motivate it with concrete, real-world scenarios, then show the development of a simple progressive type system and present its (progressive) soundness theorem.

My set \Omega might include, in addition to the usual errors, division by zero (which is then ruled out statically). Vice versa, I might want to check the security of some JavaScript programs: since JavaScript is not statically typed, I might then exclude the usual type errors so that they are not checked, but include some security property which is verified via a type system. All these cases are subsumed by suitable choices of \Omega.

After adding to \Omega the errors you want to rule out statically, your program might have some type errors, which you can fix by adding type annotations (or by fixing the actual errors). But you don't need to add type annotations for other errors. Hence, in a sense, you need to add just a minimum of type annotations: in fact, if there are multiple ways to annotate your program leading to different amounts of annotations, depending on your choices you might end up with one solution or another -- hence, in general, you only reach a local minimum. If you could automate this process, you could also run it repeatedly to increase the likelihood of getting the global minimum -- but it's extremely unclear how to do the automation, other than by observing runtime behavior (hint: Shriram Krishnamurthi did such automatic generation of type annotations in his work on JavaScript safety, if I remember correctly from the talk).
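A toy illustration of those multiple "equally minimal" annotation choices, in a gradually typed setting (TypeScript here; the example is mine): the same untyped function can be made to satisfy the checker by annotating in different places, and each choice is a local minimum on its own terms.

```typescript
// Choice A: annotate the parameter, ruling out the type error at the call boundary.
const halveA = (x: number) => x / 2;

// Choice B: leave the parameter open ("unknown") and rule out the same error
// with a guard inside the function instead of an annotation at the boundary.
const halveB = (x: unknown) => (typeof x === "number" ? x / 2 : NaN);

console.log(halveA(10), halveB(10), Number.isNaN(halveB("ten"))); // 5 5 true
```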

If you could automate this by type inference with principal types, maybe there would not be multiple solutions and you would always get the global minimum? But it feels like you're never going to have type inference in these scenarios, so this is just extremely wild speculation.


Thanks for the pointer. This is an interesting angle.