what a type system always proves vs. what it can be made to prove

It seems to me that PL theory focuses on type systems with respect to what they prove about all well-typed programs, and not with respect to what they can be made to prove about some well-typed programs.

Is my perception of this focus correct, and, if so, is this a good state of affairs?

I am open to the idea that the answer is "yes, that is PL theory's focus, and that is a good state of affairs since the rest is pragmatics, i.e. software engineering, not PL theory." But I guess I wouldn't have asked the question if I didn't have suspicions to the contrary.

Another way of asking it is, does PL theory treat type systems primarily as a way language designers help programmers, and only secondarily as a way language designers help programmers help themselves? Or does it treat neither role as primary? Or are these roles inseparable?

Comments?

I am really unsure I

I am really unsure I understand what you are trying to say. Perhaps if you were more concrete it would be easier to tell.

But if I understand you correctly, what you are asking about is the expressiveness of type systems. I think this is often discussed, even if this goal is hidden when you read theoretical papers that focus on proofs of type system properties.

You should search for "typeful programming", and check out languages with expressive type systems, from Haskell in the FP world to Ada in the imperative.

thanks

Thanks for your response. I will do some searches for "type system expressiveness", "typeful programming", etc.

The discussions of expressiveness I have run across before seemed to focus on "false negatives", i.e. to what extent does a type system reject programs that, given an untyped interpretation, would not actually go wrong at runtime in the way the type checker "feared" they would. But probably I am misremembering or there are other broader discussions of expressiveness that I need to find.

Rereading my original post, I see now that it is a bit vague. Perhaps it would be more concrete to say that I'm interested in the extent to which a type system helps me prove my program will not "go wrong" in application-specific ways (e.g. units checking in a physics program), as opposed to application-generic ways (e.g. null dereference). The point is that you can write a physics program using only the "double" type, or you can impose your own types on those doubles to make the type system prove stronger things about your program for you.
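To make the units example concrete, here is a minimal OCaml sketch of the idea (all type and function names here are invented for illustration): single-constructor wrapper types carry the same runtime representation as a plain float, but the checker now refuses to mix them.

```ocaml
(* Hypothetical sketch: distinct wrapper types make the type checker
   enforce a unit discipline that bare floats cannot express. *)
type meters  = Meters of float
type seconds = Seconds of float
type mps     = Mps of float          (* meters per second *)

let add_meters (Meters a) (Meters b) = Meters (a +. b)
let speed (Meters d) (Seconds t) = Mps (d /. t)

let trip = add_meters (Meters 3.0) (Meters 4.0)
(* add_meters (Meters 3.0) (Seconds 4.0)   <- rejected at compile time *)
```

The programmer imposes these types by choice; the language merely enables the discipline, which is exactly the "help programmers help themselves" role.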

Maybe this is helpful....

Suppose we have an untyped programming language, and then impose a type discipline on it. So, now we will reject certain programs as being ill-typed, for whatever reasons the type system designer had in mind. That's clear, but now ask, what else happens?

Well, a surprising fact is that the imposition of a type discipline changes the language's natural notion of program equivalence. Intuitively, two program modules are "contextually equivalent" when all client programs that could use them ("contexts" in PL jargon) have equivalent behavior when you switch between the module implementations.

Why does the notion of equivalence change? Because client contexts are themselves programs, there are fewer contexts in the typed world than in the untyped one, and so more module implementations become contextually equivalent. This is the underlying mechanism that makes type abstraction work: we use the type discipline to ensure that there are no client programs that can "peek under the covers".

As an example, consider the following ML program:

module type EVEN =
sig
   type even

   val inject  : int -> even option
   val outject : even -> int
end

module Even : EVEN =
struct
   type even = int

   let inject n = if n mod 2 = 0 then Some n else None
   let outject n = n
end

Because the type Even.even is held abstract, we know with certainty that whenever v is a value of type Even.even, then calls of the form (Even.outject v) mod 2 will always return 0.
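A small client sketch makes the claim checkable (the module is restated so the snippet stands alone; `check` is a hypothetical name): since `inject` is the only way to obtain an `even`, the invariant holds for every value a client ever sees.

```ocaml
module type EVEN = sig
  type even
  val inject  : int -> even option
  val outject : even -> int
end

module Even : EVEN = struct
  type even = int
  let inject n = if n mod 2 = 0 then Some n else None
  let outject n = n
end

(* A client can only obtain an [even] via [inject], so
   "outject v is even" holds for every value it ever sees. *)
let check n =
  match Even.inject n with
  | Some v -> Even.outject v mod 2 = 0   (* necessarily true *)
  | None   -> true                       (* odd inputs rejected up front *)
```

Note that without the `: EVEN` ascription, `even` would be visibly equal to `int` and a client could forge odd values, breaking the invariant.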

Is this the kind of thing you were asking about?

Exactly

First, your recollection is correct: static typing is necessarily conservative over execution because the entirety of the dynamic semantics of the program is not available at compile time except in a handful of extreme, trivial cases. This is the chief critique of static typing from dynamic typing aficionados that has any "bite," in my opinion, although in practice I find the observation trivial. If you have a Turing-complete dependent type system, the ε between what you can express at compile time and what you can't becomes infinitesimally small, and even short of that you can get vastly more helpful static guarantees out of, e.g. GADTs in Haskell or Scala, modules and phantom types in ML, etc. than most people are aware of.

It's precisely in this context—the context of relatively rich type systems like those of OCaml, Haskell, Scala, etc.—that we find the beginning of the answer to the latter part of your question. It's one thing to say "I have static typing with type inference, so I don't have to spell everything out but the compiler keeps me from making boneheaded mistakes," and here I think dynamic typing aficionados are essentially correct in saying "big deal; I get that much out of my unit tests," assuming good test coverage. An important line gets crossed, though, in making the transition to "typeful programming," where you, the programmer, express important program invariants using the type system, so that your program will not compile if you should happen to violate one or more of those invariants. Using types to implement proper units in numerical code is indeed an excellent example of this. This thread goes more deeply into the idea of sealing off important invariants in a module and having the type system then guarantee that these invariants hold throughout the program.
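As one illustration of the phantom-type technique mentioned above (a hypothetical sketch; the `Email` module and its operations are invented), a type parameter that never appears in the runtime representation can record whether a value has been validated, so that unvalidated data cannot reach functions that require the invariant:

```ocaml
(* Phantom parameter 'state records validation status at zero runtime cost. *)
type validated = Validated
type raw = Raw

module Email : sig
  type 'state t
  val of_string : string -> raw t
  val validate  : raw t -> validated t option
  val send      : validated t -> string    (* only accepts validated addresses *)
end = struct
  type 'state t = string
  let of_string s = s
  let validate s = if String.contains s '@' then Some s else None
  let send s = "sent to " ^ s
end
```

Passing a `raw t` to `send` is a compile-time error, so the "validate before use" invariant cannot be violated anywhere in the program.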

Domain specific type systems

I may be misunderstanding the questions, but it almost sounds like benckla is asking about research into what I'll call "domain specific type systems." I.e. most type systems that are researched give you some fairly general concepts for preventing a fairly broad range of problems and leave it up to a programmer to map their domain into the type system. But just as languages can be targeted to domains, are there DSLs with type systems optimized for fairly specific areas? An example might be a physics based DSL with a static type system that natively understands units so it can statically prevent adding centimeters to ergs without forcing the developer to figure out how to make it all hang together.

To answer the question broadly, I think most PLT research into type systems is naturally going to be more general in nature, but such research may often be motivated by specific problems like these. So expect that while there are going to be DSLs with type systems optimized for their domain, it's unlikely that you'll find much core research into the subject of designing such systems. That does seem to fall more on the SE side of the world than on the PLT side.

progress report

I just read Cardelli's "Typeful Programming." Undoubtedly the rich type system of the language it presents (Quest) is capable of expressing the kind of application-specific constraints I am interested in. And, when the introduction identifies typeful programming with "the widespread use of type information," I'm guessing/hoping that by "widespread" he means not just that which is imposed by the language but also that which is imposed by the programmer and only enabled by the language. Also, the interpretation of types as partial program specifications is encouraging.

But, to me, the paper doesn't really pursue this idea. It presents the language, which is very cool and thought provoking, but it only briefly touches on the typeful programming style that the language enables. When it does touch on this topic it is mainly with respect to the module, interface, and system constructs.

So, unless I'm really missing something, I have to say I disagree with its conclusion that it has "illustrated a style of programming based on the use of rich type systems." It has presented a language that enables that style, and perhaps given some hints as to what that style might be like, but it has not illustrated that style, unless there is some major reading between the lines expected about how the style flows naturally/inevitably from the language.

I realize that Cardelli is a huge player in the field and who am I to criticize him, etc. But I don't mean it that way. No doubt this is a great paper. It's just not about what I wanted it to be about, and not what I thought it was going to be about from reading the introduction and conclusion. I must say I find this to be the case disturbingly often. Either authors habitually claim more than they actually deliver, or I read abstracts and introductions through the rose-colored glasses of what I hope the paper will be about. Or some combination of the two.

Over-precise types boomerang?

Benjamin Pierce gave a talk (discussed here) that sounds like it relates closely to your distinction between imposed versus enabled by a language.

I am a bit baffled and I am

I am a bit baffled, and I am perhaps simply again not understanding what it is you are after. Do you have any experience programming in Haskell or, alternatively, Ada? If so, have you also read examples of good code in these languages, or style manuals or textbooks? It seems to me that one of the main things emphasized is how to use the expressive type systems (imperfect as they are) to encode domain- and program-specific invariants, in order to assist design, programming, debugging, reuse, etc. At least that's what I tried to show students when I taught software engineering with Ada.

Or am I missing the point entirely?

P.S

Here is an ancient riddle I posed students that is a short illustration of what I have in mind.

I think you get it

Thanks Ehud, I think you do "get" what I am talking about except for one little twist: my original question was not "where can I find discussions of typeful programming," it was more like "where can I find discussions of typeful programming *in PL theory*, or are such discussions only to be found in software engineering?"

Here's yet another way of putting my question: it seems to me that PL theory focuses on type systems with respect to where they draw the lines in the classic quadrants of any test (false positives, etc.). What about what a type system enables within the "true positive" quadrant? Is that just software engineering? Or am I just ignorant of its discussions in PL theory?

I'm not Ehud...

...but my $.02 here is that it's difficult to talk about this from a perspective that is simultaneously PLT-based (that is, giving a detailed description of language and type system design properties that lend themselves to typeful programming) and pragmatic (that is, making the connection to solving specific problems, e.g. statically disallowing write operations to read-only files, or statically disallowing any access at all to a file that isn't open [in the first place|anymore]). I continue to claim that the most accessible such writing is that of Oleg Kiselyov and Chung-chieh Shan on Lightweight static capabilities, which are all about selecting domain-specific (pragmatic, non-PLT-founded) invariants, hiding them behind an inviolable abstraction barrier (PLT/type theory/module system), and extending their guarantees to the rest of the code. It's partly to make a point about what can be done even with the type and module systems we have today, but also to demonstrate why you might want an even more powerful type system, e.g. with explicit support for dependent types. Well worth gaining mastery of, IMHO.
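The file-access example can be sketched in the lightweight-static-capabilities style (a hypothetical illustration, not Kiselyov and Shan's actual code; the in-memory `File` module below stands in for real I/O): a phantom parameter on the handle type records the access mode, so writing through a read-only handle is a type error rather than a runtime check.

```ocaml
(* Phantom capability types: never inspected at runtime. *)
type readonly  = Readonly
type readwrite = Readwrite

module File : sig
  type 'cap handle
  val open_ro : string -> readonly handle
  val open_rw : string -> readwrite handle
  val name    : 'cap handle -> string
  val read    : 'cap handle -> string              (* any handle can read *)
  val write   : readwrite handle -> string -> unit (* only rw handles write *)
end = struct
  (* An in-memory stand-in for a real file, to keep the sketch runnable. *)
  type 'cap handle = { name : string; mutable contents : string }
  let open_ro name = { name; contents = "" }
  let open_rw name = { name; contents = "" }
  let name h = h.name
  let read h = h.contents
  let write h s = h.contents <- s
end

let h = File.open_rw "scratch"
let () = File.write h "hello"
(* File.write (File.open_ro "notes") "x"   <- type error: readonly handle *)
```

The invariant lives entirely behind the module's abstraction barrier; clients get its guarantee without ever seeing the representation.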

OK then ;-) There are all

OK then ;-)

There are all kinds of projects that explicitly attempt to make type systems more expressive (Paul is our resident expert on those). I'd mention a more general issue: the goal you are after is part of the context and background of the research. It is why a type system (or a type system feature) is being designed, proposed, or analyzed. But most papers are not about the intuitive design process but rather about the provable properties of the resulting system. So when you read the papers you need to look for the motivation, come to seminars and conferences, or ask people in the community why the type systems are of interest for expressing application-level invariants. These elements are usually not discussed at length in papers (be on the lookout for the "related work" and "further research" sections, which often provide important clues...)