Dependent Types for JavaScript

Dependent Types for JavaScript, by Ravi Chugh, David Herman, Ranjit Jhala:

We present Dependent JavaScript (DJS), a statically-typed dialect of the imperative, object-oriented, dynamic language JavaScript. DJS supports the particularly challenging features such as run-time type-tests, higher-order functions, extensible objects, prototype inheritance, and arrays through a combination of nested refinement types, strong updates to the heap, and heap unrolling to precisely track prototype hierarchies. With our implementation of DJS, we demonstrate that the type system is expressive enough to reason about a variety of tricky idioms found in small examples drawn from several sources, including the popular book JavaScript: The Good Parts and the SunSpider benchmark suite.

Some good progress on inferring types for a very dynamic language. Explicit type declarations are placed in comments that start with "/*:".

/*: x∶Top → {ν ∣ite Num(x) Num(ν) Bool(ν)} */
function negate(x) {
    if (typeof x == "number") { return 0 - x; }
    else { return !x; }
}


Impressive breadth, but certainly unsound

We expect that System !D satisfies progress and preservation theorems, but we have not yet proven them. The process will likely be tedious but not require new proof techniques. Unlike System D, which introduced the problematic nesting of syntactic types inside uninterpreted formulas, System !D does not introduce any new mechanisms in the refinement logic.

In other words: the system is certainly unsound as it stands. On one hand, the system is already interesting in its own right, and it would certainly be horribly hard to make a soundness proof go through, so I understand why they have not done that. On the other, I find it a bit strange that there is no mention of trying to actually prove soundness in the Future Work section. I'm sure they will, and I feel bad already for the crazy PhD students who will sweat over this ;-)

Can you elaborate?

After spending 15 whole minutes with this work and the previous paper, I don't have much of an understanding of the details, but it does seem interesting.

I'm curious what aspect of that quote makes you think it's unsound? Do you think it's inconsistent?

I found this statement in the original paper:

The key insight in System D is that we can use uninterpreted functions to nest types inside refinements, thereby unlocking the door to expressive SMT-based reasoning for dynamic languages.

Is this key insight the one they're backing away from in the new paper?

Are the dynamic type tags baked into this logic? My intuition is that it would be cleaner to start with a pure refinement logic and then build tagged values and mutation on top of that, but I'm biased.

Simple: unsound unless proven otherwise

My rule of thumb is that at this level of complexity, all type systems are unsound unless proven otherwise. Think about it this way: when type system researchers added weak references (a rather simple form of mutable state) to the ML type system (a rather simple form of polymorphism), they screwed up. You cannot mix strong update (a very subtle form of mutable state) plus dependent types plus all the other stuff they have without screwing up something somewhere with probability 1.

Of course, that does not mean the system is worthless: I am ready to believe that the errors that make it unsound will probably be reasonably easy to correct and will not influence the final result very much (for example, hopefully the examples they claim work will still work after the system is fixed). But still, I wouldn't rely on the details of their work (as opposed to the big ideas) for any further formal work.

And the next bad news is that at this complexity level, pen-and-paper proofs are hard to trust (but still much better than no proof at all). What you ultimately want is a computer-assisted proof, and that again raises the level of complexity by an order of magnitude.

Unsoundness isn't a big

Unsoundness isn't a big deal; look at all the type systems coming online today that are unsound (Dart). I'm more worried about decidability and/or efficiency... actually, is it scalable and practical? The level of precision they are getting, and the fact that they aren't merging aggressively, make it feel like this won't end well.


Seriously? Using the mainstream's latest failed attempt at a type system as an excuse for giving up all basic standards?

I'd rather say that decidability, or lack thereof, is not a big deal. Complexity is more relevant.

I'd rather say that

I'd rather say that decidability, or lack thereof, is not a big deal. Complexity is more relevant.

I would agree with you.

But lack of soundness isn't a big deal; I'm interested in how the type system is useful, and it doesn't require soundness just to be useful. Type systems aren't just about checking type correctness anymore.

Funny statement

Type systems aren't just about checking type correctness anymore.

Now that's a funny statement. What else would they be doing?

Structured editing,

Structured editing, intellisense and suggestions, API documentation, typeful programming, content-based linking (via matching types) ...

If that's all...

If that's all you care about, then it can be achieved just as well with doc comments (*). The only thing that makes type annotations more valuable than structured comments in that regard -- and thus defines their purpose -- is that their correctness is actually verified, at least to some degree.

(*) Except for typeful programming, which in my book is entirely about checking type correctness (of types I defined).

Typeful programming, as I

Typeful programming, as I understand it, involves using types to specify the meaning (observable behavior) of the program. Type erasure would change the meaning of the program. Common examples involve using types for dispatch decisions or metaprogramming. As I use the phrase, typeful programming is conceptually orthogonal to safety. A language with typeful programming can still support coercions; the type system may be weak, unsound, undecidable, useless for correctness. But, in practice, it would be silly to not validate and achieve whatever level of safety the system might offer (eliminate and/or warn about the 'obviously wrong' things).

Do you use a different phrase for that style of programming? I don't want to argue definitions. I'm just clarifying what I meant by typeful programming.

It might be worth recognizing that in, say, a pluggable type system, types really would be a kind of doc comment. And types aren't the only kind of annotation we could validate (tests, local symbolic proofs, license compatibility, usage assertions, etc.).

(IMO, what makes types 'valuable' compared to alternative validation models is that types are compositional: you can use them at module interfaces without digging into implementation details. This greatly lowers barriers for configuration management, incremental development processes, and code reuse.)

Typeful programming

I don't know where you got that definition from, but the only meaning I'm aware of is the one given by Luca Cardelli when he coined the term in the 80s. That has zero to do with dispatch or metaprogramming; rather, it is the methodology of programming "to the types", and it absolutely assumes a sound type system.

What you describe is rather the opposite, so I wouldn't even use the word "types" there.

Typeful programming

I got my definition from reading Luca Cardelli -- IIRC the very same paper you linked. Several times he emphasizes the use of type information as part of the programming style and program specification. He clearly distinguishes this use from typechecking or verification. A programming style where types determine behavior makes much more use of type information, and more deeply in programming style and specification, than one where types are purely for checking, disentangled and erasable. The former is, in my understanding, vastly more typeful.

I see nothing in the paper absolutely assuming a strong, sound, or static type system. (There are several shoulds, but I don't consider those anywhere near absolute.)

Anyhow, like I said: I really don't care to argue terminology. Trying to convince me won't help you any, and vice versa. If you have some other word or phrase for the application of types that I earlier described as 'typeful programming', just mentally substitute it into the list answering "What else would they be doing?"


Well, I'm afraid any appropriate term I could come up with would not use the word 'type' at all, so wouldn't be an answer to my question. So in that sense, terminology is relevant here. ;)

As for Cardelli, my PDF reader tells me that "sound" appears 43 times in the article, and most of his comments probably wouldn't make any sense if soundness wasn't a self-evident requirement to him (see e.g. Section 9.3).

When answering "What else

When answering "What else would [types] be doing?", I don't see how having the word 'type' in the answer is relevant. Similarly, it seems irrelevant that 'doc comments' might do the job just as effectively. It only matters that people can use (and are using) types to do the job. I've not understood the point of any of your arguments so far.

The terminology I use for that distinction....

What you call typeful programming, I consider to be the use of "intentional typing" -- that is, where type declarations define or change the semantics of the program. "Intentional" because the programmer is using them to express "intent." Intentional Types are expressed with "type statements" or "type declarations", and have semantics.

Descriptive types don't have semantics (unless the program contains errors), and are expressed with "type annotations." They *describe* what's going on (which is the same as what would be going on if they weren't there) and can be checked for correctness, but do not change the semantics of a correct program. If the program is incorrect in a way that involves the type annotations describing something that is not actually what happens (or what can be proven to inevitably happen, if they are compile-time checks), they can be used to rapidly detect and isolate errors.

Descriptive typing and intentional typing can coexist in the same language. They are different constructs and used for different things.

Descriptive typing helps document the programmer's intent, can be used to prove soundness/type correctness, and describes the program's behavior, but has no semantics. It helps one hunt down bugs by stopping and reporting a problem whenever a provably wrong type statement is discovered during compilation (or in some languages only during runtime).
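In Haskell terms, the contrast might be sketched like this (my own illustration, not from the thread): the first two annotations are "intentional" in the above sense, since they select which Read instance runs; the last is purely descriptive.

```haskell
-- "Intentional" use: the annotation changes behavior, selecting which
-- Read instance parses the string. Erasing it would change the program.
parsedInt :: Int
parsedInt = read "42"

parsedDouble :: Double
parsedDouble = read "42"   -- same source text, different runtime result

-- "Descriptive" use: erasing this signature changes nothing; it only
-- documents and checks what the code already does.
double :: Int -> Int
double n = n + n
```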

Intentional and Descriptive Typing

That seems a justifiable distinction. I'll give it a try in future discussions.

Soundness vs parametricity

Whether types can or can not affect the (runtime) semantics of a program depends on whether the type system is parametric. That is orthogonal to soundness.

Not exactly parametricity?

For example, the following doesn't violate parametricity, but does result in type dependence:

type Foo : Nat -> *
get : Foo n -> Nat
get f = n

Why so?

Why do you think that this does not violate parametricity?

I was apparently confused

In a constructivist system, that example is:

type Foo: Nat -> *
get: forall n:Nat. (Foo n) -> Nat
get = \n. \f. n

So there 'get' is parametric, but also type erases easily enough. In a system with a non-constructive interpretation of forall, 'get' probably isn't even definable.

[Removed some nonsense]

Let's try this again...

OK, apologies if you spent time on my other nonsense post. It sounded fine last night...

I understand why you'd view parametricity as the condition allowing type erasure. For example, in System F it guarantees that a value in a quantified type doesn't "really" depend on the type parameter. Polymorphic types aren't intended to be general functions of a type. The type forall a. a -> a is intended to capture the polymorphism present in \x. x. The fact that we have to encode the identity function as \a.\x:a.x is just plumbing to help the type system out, and parametricity is the condition that ensures that this fictitious type function serves only as plumbing and can be erased.

But what if you're in a system in which there are actual functions of types in addition to quantified types? (For example, add if: Prop -> a -> a -> a.) Would that violate parametricity? I don't think it would. As long as we end up with the usual guarantees about the inhabitants of quantified types, then I think it's fair to say we have parametricity. But type erasure wouldn't be possible in such a system.

Type erasure

In the static typing community this distinction is known as "type erasure". If the dynamic semantics of your language (how it computes) can be defined by first erasing all the type information (getting a program in an untyped calculus), then using a dynamic semantics on this untyped language, then your semantics "has type erasure".

This is the case for the formal language System F (in a sense, type erasure is even central to why strong polymorphism really works well), but also for practical ML languages.

Of course, some features of programming languages are fundamentally at odds with having an erasure semantics. This is the case for much "type-directed" machinery, in particular type-directed (static) overloading resolution, its improved descendant, Haskell type classes, and their siblings, the type-directed implicits of Scala (in all these examples, typing information is used to infer code that has an effect at runtime).
Erasure is "even more" broken when you have general reflection facilities such as an ambient capability to perform runtime type checks or dispatch -- when your language implementation keeps full type information along with *all* values, you're in trouble. And flaws in the type system, such as Java's covariant mutable arrays, can force you to insert runtime type checks that have this same effect of breaking type erasure.

Finally, this is not a black-and-white situation: if you have a language in which types affect runtime behaviour, you can always translate (well-typed) programs in this language into a language that allows type erasure, by mechanically inserting the relevant type-derived information in the appropriate places. Such translations can be very insightful when they let you understand exactly "how much" you depend on type information (e.g. type class or implicit elaboration); in that case they help with language understanding, implementation, or tooling. They are downright boilerplate when they only amount to propagating full ambient type information (you're compiling the problem away).

TL;DR: this is often called "type erasure", and it is very useful, as a design pressure, to be conscious of exactly how much type erasure your language allows.

Type erasure vs parametricity

Let's be precise here: the property is called parametricity. Type erasure on the other hand is an implementation technique (or optimisation, if you will) that is enabled by parametricity.

Although to be even more pedantic, you can have type erasure even in a non-parametric language, by translating types to witness terms (see e.g. this paper, or just consider how the semantics of type classes is defined).
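As a hedged illustration of that witness-term translation (all names here are mine, not from the cited paper): a class dissolves into a record of operations that gets passed as an ordinary argument.

```haskell
-- Sketch: elaborating a hypothetical 'Show'-like class into an explicit
-- dictionary. ShowDict, greet, and showDictInt are illustrative names.
newtype ShowDict a = ShowDict { showIt :: a -> String }

-- Conceptual source:   greet x = "value: " ++ show x
-- After elaboration, the dictionary is an explicit witness term:
greet :: ShowDict a -> a -> String
greet d x = "value: " ++ showIt d x

-- The Int instance becomes a concrete witness value:
showDictInt :: ShowDict Int
showDictInt = ShowDict show
```

The program no longer mentions types at runtime at all; the type-derived information survives only as the ordinary value `showDictInt`, so the result erases cleanly.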

Is it possible to give a

If we don't erase types, is it then possible to give a simple direct dynamic semantics for type classes, or something with roughly the same functionality? Philosophically it's a bit disappointing to have to go completely outside the framework of lambda calculus for type classes, and instead formalize them as a kind of theorem-proving procedure that finds the right dictionary term to insert, completely separate from the rest of the language. On the one hand they seem similar to functions from types to values (the values being the dictionaries containing the operations associated with the type class), but on the other hand type class resolution is a compile-time process that goes outside the normal language, and thus may be difficult to cast directly into a dynamic semantics. If you have a value that depends on a type class like this:

foo : Bar a => b

and a context where it is used:

... foo ...

Then can we compute the appropriate dictionary for Bar a based on the runtime environment at this point?

Take an example like this:

square : Num a => a -> a
square x = x*x

-- or, without operator notation:

square x = mul x x

If we expand it to give an explicit type parameter:

square t x = mul x x

Then we could maybe do something like this:

square t x = (getNumDict t).mul x x

Where getNumDict t returns the right type class dictionary for type t. For example getNumDict Float returns a dictionary containing float addition, multiplication, etc.
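A minimal runnable sketch of this getNumDict idea, assuming a closed universe of runtime type representations (Ty, NumDict, and getNumDict are hypothetical names, not a real library):

```haskell
{-# LANGUAGE GADTs #-}

-- Runtime type representations for a closed universe of types.
data Ty a where
  TInt   :: Ty Int
  TFloat :: Ty Double

-- A "Num dictionary" holding the class operations (just mul here).
newtype NumDict a = NumDict { mulOp :: a -> a -> a }

-- Look the dictionary up from the runtime type representation.
getNumDict :: Ty a -> NumDict a
getNumDict TInt   = NumDict (*)
getNumDict TFloat = NumDict (*)

-- square with its type parameter made explicit, as in the comment above.
square :: Ty a -> a -> a
square t x = mulOp (getNumDict t) x x
```

Here the class constraint has become an ordinary runtime lookup on a type tag, which is exactly the move that runs into the scoping issue discussed next.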

It's certainly not that simple if you want locally scoped type classes, take the following example:

square : Num a => a -> a
square x = x*x

baz : Num a => a -> (a, Float)
baz a = (square a, square 4.0)

-- in another scope:

baz 4.0

Now the call to `square a` might need a different dictionary than `square 4.0`.

Is it possible to give a different direct semantics for type classes that does work?

What you mean by "we don't

What you mean by "we don't erase types" is that you start *synthesizing* runtime values that correspond to runtime representations of type information. So you have made your runtime semantics more complex (because it is type-directed; not that philosophically satisfying either), or you use a type-directed preprocessing pass to inject those dynamic type representations.

Type classes are also formulated as a type-directed preprocessing pass, which synthesizes runtime dictionaries rather than runtime type representations. I don't think it is any more or less philosophically satisfying; in fact, both approaches are essentially equi-expressive (you can construct dictionaries from runtime type information, but also define runtime type information as a specific mode of use of dictionaries, i.e. Typeable). One important difference in favor of the direct synthesis of dictionaries (or implicits in general) is that it allows you to reason *statically* about partial classes: if the type at hand doesn't have an Ord instance, you'll get a static error, instead of happily producing its runtime representation and waiting for getOrdDict to fail at runtime.

I'm not sure why you have a problem with the Prolog-style aspect of type-directed synthesis. It is rather convenient to use, and would need to be encoded in any semantics for type classes (either as a static process or as the runtime implementation of your getDict functions). It's not "inside the framework of lambda-calculus", but neither are algebraic datatypes and pattern matching, abstract types, or dynamic code loading, and they're all rather useful. Why should we rule out a neat and useful framework for type-directed code inference?

Right, I did not mean the

Right, I did not mean the above as a serious attempt, obviously it is fatally flawed.

The problem I have with the prolog style synthesis is that it looks like a type level duplication of pattern matching. Though one difference is that type classes are open pattern matching and normal pattern matching is closed. I was hoping that if you have something like this then you can unify them into one pattern matching construct.

Another problem, which I now realize is mostly a separate one, is that the compile-time nature of type classes doesn't seem to mix well with undecidable type checking where part of the type checking is deferred until run time. If you can't fully check types at compile time, you also can't solve the constraints that come from them to get a dictionary at compile time. So I'd like to do it at run time :)

BTW, I don't feel that making the semantics type-directed is problematic. Actually I think it has many appealing qualities. At run time you will need some kind of metadata to interpret the values that you have. In languages like OCaml and Haskell they use a uniform value representation. That means that the metadata is encoded in the tags of the values. This also means that the representation used at run time is not exact: you can represent values at run time that would not be valid according to the compile-time types. Some metadata is redundant. For example, if we have a list of integers, then at compile time we have the type "int list". But at run time each int in the list is tagged as an integer separately. Maybe it is a little vague what I mean; I'll try to explain it better.

If you wrote down the run time representation of OCaml values as an algebraic data type (though you'd need a dependent type family to really write it down), then there would be some values of that type that would not correspond to any real value you could get in an OCaml program. The type is too large. If you want an *exact* representation, then instead of using some custom tagging scheme, you just tag values with their compile time type. So if "xs : int list", then you just tag the value "xs" with the tag "int list". That means that inside the list each integer can be stored directly, without any tags, because from the tag of xs it is already clear that they are ints. This scheme works very nicely even with dependent types, and this also lets you use a non-uniform value representation with dependent types. For example you can represent dynamically sized arrays as a dependent pair "(n:Nat, Array n)", where "Array n" is the type of flat arrays of length n.

So essentially, instead of using such a uniform tagging scheme, the type of run time values becomes the dependent pair (t:Type, [[t]]), where Type is an algebraic data type giving all the types in the language, and [[t]] is the meaning of that type. So [[.]] : Type -> Set, where Set is the type of types of the metalanguage. For example, "type Type = Int | Arr Type Type" for a simple language with only integers and functions, with [[Int]] = int and [[Arr a b]] = [[a]] -> [[b]].
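A sketch of this (t:Type, [[t]]) encoding in Haskell, assuming GADTs (Ty, Tagged, and asInt are my illustrative names):

```haskell
{-# LANGUAGE GADTs #-}

-- Type codes: the algebraic data type of all types in the toy language.
-- The index 'a' is the meaning [[t]] of the code.
data Ty a where
  TInt :: Ty Int
  TArr :: Ty a -> Ty b -> Ty (a -> b)

-- The dependent pair (t:Type, [[t]]): a type code together with a value
-- of its meaning. The payload inside carries no further tags of its own.
data Tagged where
  Tagged :: Ty a -> a -> Tagged

-- A deferred runtime check: recover an Int only if the tag says so.
asInt :: Tagged -> Maybe Int
asInt (Tagged TInt n) = Just n
asInt _               = Nothing
```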

If you have these run time representations of types anyways, you can also do undecidable type checking by deferring the checks to run time. For example if you have some function f with dependent type x:A -> g x, and the type checker can't prove statically (e.g. by normalization) that the body of f indeed results in a value of type g x, then you just defer the check to run time. At run time the type of the value that will be returned from f will be computed anyway, and at that point you can just compare it with g x.

Note that so far, types don't really have any run time significance like type dispatch, except that they sometimes cause the program to terminate with a type error. Yet of course controlled type dispatch like with type classes is very nice. The trouble with this scheme is that type classes no longer work, because type checking is happening partly at run time. So if you have any ideas on how to fix that, I'd love to hear it.

I hope this makes some sense.

Not quite

At run time you will need some kind of metadata to interpret the values that you have. In languages like OCaml and Haskell they use a uniform value representation.

Actually, that (your first sentence) is not entirely true. You only need that info if you have generic primitives in the VM that need to perform representation analysis. GC is the primary example requiring it. In a parametric language, no language-level operation will actually rely on that information. In particular, it should be perfectly possible to implement, say, ML with neither a uniform representation nor other runtime info, if you do not need garbage collection.

Yes, you may need only a

Yes, you may need only a subset of the information for any given thing you want to do with it. For example, for GC you need to know where the pointers are, but even for something as simple as passing a value around you need to know how big it is. Of course, in e.g. OCaml on x64 every single value is 64 bits; that's one solution to the problem. In a way this relies on the fact that in a processor, everything is represented by bits in the end. So you can store a float and a pointer and an integer and a tuple all in the same kind of data. Still, even in this representation, if you don't have GC you have the issue that the run time representation is "too large".

Perhaps that doesn't matter, but in any case I think it's interesting to investigate different options. Has there been any work on this, besides MLton? (which does this but stages away the type metadata so that at run time it's essentially encoded in the program counter) How about in the context of dependent types?

In a parametric language, no

In a parametric language, no language-level operation will actually rely on that information. In particular, it should be perfectly possible to implement, say, ML with neither a uniform representation nor other runtime info [...]

If your language is expressive enough, even parametric programs will require some kind of runtime info and/or uniform representation. Consider ML + polymorphic recursion.
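The standard example is a nested datatype (my illustration): len calls itself at type Nest (a, a), so no finite set of monomorphic copies covers all the instantiations, and the implementation needs either runtime representation information or a uniform representation.

```haskell
-- Nested datatype: the tail stores pairs, then pairs of pairs, and so on.
data Nest a = Nil | Cons a (Nest (a, a))

-- Polymorphic recursion: the signature is mandatory, and each recursive
-- call is at a strictly larger element type.
len :: Nest a -> Int
len Nil         = 0
len (Cons _ xs) = 1 + 2 * len xs
```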

Dense arrays

If your language has dense arrays, then polymorphic functions that manipulate them need either some runtime type information or need to be specialized.

To implement polymorphic types, please choose at least one:

* Uniform representation for all values
* Reified type information passed to polymorphic instantiations
* Specialization, at least up to machine representation

If you leave out GC or use a conservative GC, this only saves the need to distinguish between reference and non-reference word-sized values.


I don't disagree. That's already true for, say, tuples. I was assuming an implementation with monomorphisation. Though, as naasking points out correctly, that is no longer possible with more expressive forms of polymorphism, so I should have qualified my statement more carefully.

With monomorphization the

With monomorphization the metadata is still there, it has just been baked into the program itself. Monomorphization = Reified type information + Partial evaluation / staging of that type information.

I wouldn't

I wouldn't still call that "metadata at run time", though -- but I suppose it comes down to perspective and philosophy, so I won't argue.

I don't understand exactly

I don't understand exactly what you mean by doing the type checks at runtime. I think I disagree with you on this (I hold ideological views that type erasure, on one hand, and static type-checking, on the other, are worthy design pressures).

I think that if a type check is hard to solve, it may be helped by some limited emission of dynamic checks (just like we sometimes use option types to use dynamic pattern-matching in lieu of fine-grained static reasoning on nullability), but the correctness of the whole business should still be, morally, explained statically.

(Note that this is another case where I'm talking about type(-checking)-directed code emission. I believe that's a very helpful thing to have in a language, as a form of program inference. And that's why I dislike your idea of relying fundamentally on runtime type information: I don't need to have it as a primitive.)

With dependent types the

With dependent types, the ultimate check comes down to checking whether one term is equal in meaning to another. Because of the halting problem it's not possible to do this check exactly, so usually it is approximated by normalizing the terms and then doing syntactic comparison. So sometimes this will reject programs that a more powerful check would have accepted (of course they would not type check in the sense of a particular type system that has chosen a particular check, but they would "morally" type check in the sense that such programs would never go wrong). What you can do, instead of rejecting terms that the compile-time checker can neither prove different nor prove equal, is accept such programs (with a warning) and do the check at run time. If you do this you cannot have type erasure, since at run time you will at least need to know about the types that couldn't be checked statically. Does that sound reasonable? If not, could you explain why it clashes with your ideological views, and what the underlying (practical or theoretical) reasons for those views are?

but the correctness of the

but the correctness of the whole business should still be, morally, explained statically.

I first read this as "statistically" and agreed. Programs are complex biological systems and should be treated stochastically. Alas, this view is not very popular yet.

I hope

I hope I already died a natural death before this view gets overly popular within any circles dealing with any piece of the software stack used for airplanes, weapon systems, nuclear power plants, or voting machines, to name just a few.

You pretend as if these

You pretend as if these fields were not already doing statistics for many of their systems. Proofs just don't go far enough; unreliability is a given, and we just need to manage it.

Static Statistics

Even if software is stochastic, we should be able to explain why it converges or stabilizes on a solution within error tolerances, be able to understand and manage unreliability, etc. - statically.

When correctness is empirical, it tends to be fragile to changes outside the model (implementation details, integration details) and to changes within the model (extensions, tweaks, probes). Robust correctness will always have a static justification, and that justification must be maintained across changes to the software.

Already happens, for what it's worth.

In Aeronautical engineering, the correctness of software for predicting turbulence over wing forms is already measured statistically rather than statically. Not just types, either, but actual numerical RESULTS. This is a byproduct of turbulence being fundamentally a chaotic process; prediction of statistical properties is the best prediction you can make, and different models and assumptions embodied in the code of different programs, yield different statistical approximations to ever-elusive reality.

Wanna hear something even scarier? We have pretty good models of subsonic flight and reasonably good models of supersonic flight, but the best models we have for making predictions of aerodynamic properties still break horribly at the subsonic/supersonic transition point. Nobody has the vaguest idea what a new wing form is going to do at the transition, until it gets tested with real hardware. The best we can do is make intuitive guesses that it will probably act something like other wing forms to the extent that it resembles them.

As mentioned above, they

As mentioned above, they already reason statistically -- not just about the airflow, but about software correctness as well. Regulation says that an airplane gets certified if the probability of a crash is estimated at less than 10^-9 per hour. There are also regulations for bad-but-not-a-crash situations: an aircraft can still get certified if the probability of those is less than 10^-3 or 10^-6 per hour, depending on the situation [1] [2]. A big problem in making an airplane not crash is not that the code does not match the specification, but that the specification does not match the real world. You cannot, after all, formalize "the plane will not crash" in Coq. But even proving that the code matches some specification with a realistic but simplified model of the world is virtually impossible, hence they don't do that. They write a simulator, run the control software against that simulator in many, many different situations, and see if the simulated plane stays in the air. The theorem "when this software is run against that simulator, the simulated aircraft will not crash" is impossibly hard to prove. In my experience this generalizes to most software. For the same effort you can get far more confidence out of randomized tests than out of formally proving that it works, mostly because the latter is just so damn hard that in a realistic amount of time you don't get very far.

Furthermore, accidents due to aircraft software have so far all been caused by wrong requirements, not by wrong implementation. [3]

So I for one am glad that real engineers use methods that work, and ignore PL researchers on this point ;-) Of course verification can and does play a part, but it's far, far from the only story.


Necessary vs sufficient

I never claimed that static verification is sufficient. But it is absolutely necessary. And airplane manufacturers certainly do tons of it. I take the fact that there have never been crashes due to wrong implementations as proof of its effectiveness.

How do you know that static

How do you know that static verification is responsible for most (or at least a substantial) amount of the confidence in aircraft code, and what kind of static verification? Do you have a citation? Everything I could find talks an awful lot about testing and code coverage and not much at all about formal methods. Even by "software verification" they seem to mean testing, not formal methods, and where they really do mean proving that's about eliminating things like out of bounds errors and null pointer dereferences, not about high level correctness.

The way I understood it,

The way I understood it, formal methods in avionics are a relatively new thing and haven't replaced much of the testing-based methodology yet. This was from talking to someone from Boeing a long time ago, but I would like to know the real story myself.

Whenever someone points to safety critical fields as places where formal verification is necessary, it seems like they are assuming a lot that hasn't (yet) panned out in practice. It takes a lot of work to go from here to there; in the meantime these systems have to be built anyways.

I don't know about this. Ada

I don't know about this. Ada has been around a long time, and SPARK has been around at least 10 years, perhaps much longer. There are several small but successful companies doing verification for critical systems written in these languages. Formal methods for avionics and space systems goes back a long way. My reading of the situation is that these communities have been isolated from the PL community for far too long, leading to ignorance on both sides. Ada fell off the PL curve but forged ahead on the verification curve; vice-versa, I think PLs forged ahead on expressiveness and performance but fell off the verification curve. I think pronouncing the death of formal verification from inside the PL community is quite premature.

Do you have some links to

Do you have some links to formal verification with SPARK being used in industry? Most of what I could find about SPARK talks about run-time contract checking, or some kind of static analysis that is far removed from functional correctness (like "no exceptions raised" analysis). Most talks about full formal verification in the future tense, let alone offer an analysis of how effective formal verification is at eliminating bugs compared to other techniques like code review and testing. The only thing I could find is Tokeneer, and there it's not clear to me to what extent it was actually formally specified and verified (besides being a very small project).

Real planes use Ada (and

Real planes use Ada (and SPARK)... Search for Boeing. The AdaIC website should have some information.

Certainly, I found plenty of

Certainly, I found plenty of uses of SPARK. Thanks for the pointer.

Some second-hand experience

I have a friend who worked at Tartan Labs (a spinoff of CMU) in the early to mid-nineties, and one of their main products was a state-of-the-art Ada compiler. Starting in 1987, the US DoD mandated the use of Ada for many critical systems, and yes, they were very interested in static verification methods. The mandate lasted until 1997, when the military became more interested in saving money using off-the-shelf software and tools. I don't know specific analyses supported by Tartan's compilers, but they were clearly interested in every kind of static checking feasible.

I have two family members in

I have two family members in the avionics software validation biz, often for smaller planes (Cessna, etc.). If formal methods see much use, it isn't the software they're getting paid to certify. (I would assume some sampling bias.)

The work is apparently frustrating. They like to rant about it on occasion. Sometimes the requirements are obviously wrong, but trying to tell this to the original developers rarely goes down easily, and these are the guys that are paying you. Find an error? The more frustrating developers want you to hold their hands and tell (or, better yet, show) them exactly how to fix it, though they're at least willing to pay. "I know more about their code than any of their developers" is a common complaint.

My impression is that a lot of the code is C or C++ or (in some specialized cases, I didn't ask for details) Python. The certifications sought are based around DO-178B, though last I asked DO-178C was percolating through the industry. Code coverage in testing is required, but weak. Where things get troublesome is handling state, especially in integration tests.

While it doesn't have much to do with avionics, code distribution that comes with safety AND performance validation has potential to become the killer app for verification. It can simultaneously lower overheads (no need for separate VMs), improve expressiveness (typesafe composition, mashups), and raise optimization ceilings (i.e. via staging or JIT link optimizations).

Validating safety or performance alone is not nearly as useful.

Thanks this is very

Thanks this is very informative. I was starting to seriously doubt my Google skills given this thread and the lack of things I could find about formal methods in avionics.

Search for "airbus"

I don't have any secret links to offer, but startpaging for "airbus software verification", for example, brings up this presentation as first result (plus plenty of others).


Plus-one on that. My understanding is that Airbus has been interested in formal methods earlier than Boeing, so it's possible that US folks have a grimmer view than us Europeans.

In France specifically, an important historic element is the failure of the first Ariane 5 rocket, which was due to a software problem. The government mandated an expert review that involved leading computer scientists of the time, and that led to a renewed interest (and associated funding) in formal methods helping to avoid this kind of trouble in the future. (The software fault, iirc, was an int64-to-int16 cast that overflowed. The relevant code came from the Ariane 4 controller code, and had been thoroughly reviewed for absence of overflows and tested on Ariane 4, but under assumptions that no longer held in the Ariane 5 setting, an issue that integration testing did not catch.)

One example of formal methods that is actually used by Airbus to verify controller code is the Astrée static analyzer, which rules out division by zero, overflows, etc.

The qualification "community" currently has little knowledge of formal methods and is, very understandably, quite conservative. Formal methods will not replace the testing and certification authority, but may help in applying the current recognized practices, or make them a bit less costly. One of the hopes for the verified CompCert C compiler, for example, is to let avionics folks review and qualify their C code directly, instead of having to review the produced assembly code. Orthogonally, the AdaCore folks are working on automatic test generation to satisfy existing code coverage criteria -- a use of formal language semantics, automated theorem proving tools, etc., in the service of current certification practices.

Don't get me wrong, I

Don't get me wrong, I welcome formal methods to make things safer. However, if we look at what's necessary and sufficient, formal verification is neither... rather, lots of testing via simulation still must be done. Formal verification is nice and can be targeted at stupid programming mistakes, but its role is auxiliary... and it is a rather new innovation at that. Absolute binary truths in a physical system are just too few and far between.

Btw, Boeing I think is a Coverity customer, which is hardly a formal approach.

Interesting! The part on

Interesting! The part on CAVEAT for verifying the properties of individual functions as a replacement for unit tests is definitely a formal method far beyond fully automatic static analysis like null pointer dereference analysis. From watching the section on CAVEAT in the associated talk, it seems that the most important thing stopping major use isn't a technical limitation, but regulation. That's unfortunate (though understandable given the domain). Did you find other systems, or other presentations of the same system? From that presentation it does become clear that aircraft manufacturers certainly do not "do tons of it", and the fact that there haven't been many critical implementation errors is not in a significant way because of these kinds of formal verification (at least not before that presentation). It does sound like that's the direction they want to go in, though.


Short answer: no. The difficulty here is not so much erasure but inference: the whole point of type classes is that boring details of operational behaviour are inferred. And because that inference uses non-local information, it is fundamentally impossible to do via simple runtime case distinctions, at least in the general case. You can consider that a bug or a feature, but it is a nice demonstration how typing is more expressive than dynamic checks, and type classes more powerful than conventional dynamic dispatch.

Although much of the value

Although much of the value derived from type systems ultimately relies on type soundness, that doesn't mean that we should ascribe all value to type soundness. A type system where you just have a type checker is much less valuable than one where you also get intellisense, a compiler that uses types for generating more efficient compiled code, type signatures in API documentation, programs whose meaning depends on types (e.g. type classes), and other things that a type system enables. So I'd say that "Type systems aren't just about checking type correctness anymore" is certainly correct. Whether it is a huge problem for a type system to be unsound depends on the degree of unsoundness. For example, Java's type system is still useful even though its array handling is unsound. Of course it would be more useful if it were sound.

I dunno...

Honestly, I consider "types" to be mostly about correctness. Intellisense, a compiler that uses types for generating more efficient compiled code, type signatures in API documentation, etc., are among the other things that descriptive types enable, but they are properly speaking UI issues rather than type issues.

Programs whose meaning depends on types (e.g., a program that would actually do different things if the type statements were changed) are slightly more dubious. Intellisense, type signatures, and code optimization depending on type declarations can all be extended to handle such intentional types, but when you start talking about actual program semantics, it's not clear how effectful "types" differ from other effectful programming. Ultimately the semantics of intentional types can be simulated by exactly three constructs: a function that can ask "is this value a member of this type?", a function that returns a member of a particular type, and the basic if/then or switch/case statement. The rest is syntactic sugar.

Depends who you ask

In our surveys, professional developers say that they rely on unit tests several times more than types for checking correctness. When asked what types are important for, they do give 40% agreement for being important for correctness... which is the same as their ranking of types as being important for documentation.

I think Sean is a little unfair in that, if you go to a compiler developer, types are quite important for optimization. However, the way they're used and designed seems disconnected in theory and practice from where the correctness community is looking. (Compare the linked work, which relies upon a theorem prover to reason about JavaScript, to Mozilla's local type inference paper where they use a fast and simple analysis for improved JITing.)

Safe optimisations necessitate soundness

If you want to exploit types for static optimisation, while also maintaining memory safety, then soundness of the underlying type semantics is a necessary prerequisite.

Yes, that's why I thought

Yes, that's why I thought Sean was unfair. Compiler writers are a small but important audience. There is still a disconnect, however.

Well, I wasn't really

Well, I wasn't really concerned with compiler writers. In my experience, the optimizations to be had by type information are very limited since type systems are so inexpressive anyways. I'm probably still being unfair here :) I really wish Gilad Bracha posted here, he's had some interesting things to say about this in the past concerning StrongTalk.

Types can't replace unit tests (now)

So far I have yet to see a type system that can be used to replace unit tests, except the one in Coq. And Coq can't be used for a wide class of programs.

It is unfair to point out that professional developers rely on unit tests more than types as professional developers have to rely on unit tests for correctness validation. They don't really have anything else.

And vice versa

And, not to forget, vice versa.

Types provide the programmer

Types provide the programmer with feedback, used to find errors, used for intellisense, used for navigation, used for search, etc. Trading off soundness for expressiveness (to reason about what sound type systems normally can't) or for verbosity (to require fewer annotations) is reasonable these days.

Actually, I don't think this argument is very controversial, it just instead moves quickly into the pedantic "what is a type system" where some (many?) argue that the definition must include soundness. From a perspective of PL that thinks more about the programmer than about the math, I think the definition is much more flexible.

See above

See my comment above.

Trading off soundness for expressiveness (to reason about what sound type systems normally can't)

You probably meant "and not reason about" those things. In any case, that's an orthogonal argument.

I meant to "reason about

I meant to "reason about something less than formally."

Reasoning that is useful is not always (or even often!) sound or decidable.


Some things that cannot be inferred soundly, can be known to the programmer and described in a type annotation, and that annotation can be used to produce optimized code, as an "axiom" for further type inference, and for runtime type checks for correctness (if it were something that could be checked at compile time it would, by definition, be something that can be soundly inferred).

Is this 'sound?' No, it isn't. Fundamentally, the programmer might be wrong and we might not be able to know that before runtime, so it is unsound. Is it 'useful?' Yes, it is. The compiler may take the type statement as permission to spend more effort than it would normally spend trying to prove something, thus having potential to move error detection from runtime to compile time. If it cannot prove something, it can insert a runtime check to make sure the type statement is in fact correct at runtime. Having executed that check at runtime everything else that happens after that can legitimately presume that the type statement was true (because the halt-on-failure hasn't happened) and therefore use it to produce optimized code, to do intelli-sense, as a basis for further type inference, etc.
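A minimal sketch of such a checked assertion (the function name and shape here are hypothetical, not taken from any particular system): the runtime check happens once, at the assertion, and everything dominated by it may legitimately assume the narrower type.

```typescript
// Hypothetical sketch: a "type assertion" the compiler could not prove,
// backed by a runtime check. If the check fails, we halt here rather
// than fail mysteriously somewhere downstream.
function assertNumber(x: unknown): number {
  if (typeof x !== "number") {
    throw new TypeError("type assertion failed: expected a number");
  }
  return x; // the result is statically a number from here on
}

const input: unknown = JSON.parse("41"); // compiler knows nothing useful here
const n = assertNumber(input);           // downstream needs no further checks
const answer = n + 1;
```

After the assertion, the optimizer, intellisense, and further inference can all treat `n` as a plain number, exactly because control could not have reached that point otherwise.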

Guarded optimisations

I think you misunderstand. For an optimisation with a runtime check to be safe you still need to find a suitable semantics for the language types with respect to the dynamically guarded execution, and you need to know that the typing rules are sound with respect to that semantics. Depending on the details, that can be highly non-trivial.

(if it were something that could be checked at compile time it would, by definition, be something that can be soundly inferred)

Not at all. There are plenty of type systems (actually, most) for which checking is trivial but inference undecidable. The polymorphic lambda calculus (a.k.a. System F) is one of the simplest examples.

did we just misunderstand each other?

You said:

...if it were something that could be checked at compile time it would, by definition, be something that can be soundly inferred...

Not at all. There are plenty of type systems (actually, most) for which checking is trivial but inference undecidable.

Did we say the same thing in different words?

Do you mean something different, or did you think I meant something different?

If you can soundly infer type information at compile time, then you don't have to do runtime checking for that type information. That is self-evident.

For example, if I have separate types for 64-bit integers and bignums, and an addition operator which, given 64-bit integers, may return either, and I'm adding two numbers read from input without value restrictions, that's a classic example of an instance where the return type cannot be soundly inferred at compile time. So a runtime check is required, either where the addition returns or where its results are used later. As you say, checking is trivial but inference is undecidable. That's exactly the situation I was talking about.

In the above scenario the addition operator is not closed on 64-bit integers. I don't consider that to be evidence of an unsound type system; just evidence that you can't soundly infer all types.


Hm, I'm confused what you were claiming above then, since you explicitly talked about static checking.

I'm not sure I get the point of your example either. In a typed language (without dependent types), either you have separate types for those two integers, and your addition operator is overloaded -- but then its operational semantics has to be such that it always returns a fixed output type for a fixed input type. Or you are actually talking about representations here, not language-level types, then the distinction is an implementation detail and completely transparent to the (language-level) type system (whose big int type happens to be realised with a heterogeneous representation internally). Naturally, every operation has to distinguish the two possible cases then.

Types vs representations (or more generally, types vs tags) is something that is conflated frequently, esp making most of the static vs dynamic "typing" debates a fundamental misunderstanding.

Wait... there's some assumption that you're making here.

Wait. "Has to" be such that it returns a fixed output type for a fixed input type? I do not think that means what you think it means, because there are *MANY* programming languages that have primitives that return boolean #false in some cases and numbers in other cases of the same argument types, depending only on the argument values.

"Has to" here can only mean that you're thinking of a specific subset or category of languages, and I hadn't previously been considering the discussion limited to some subset.

So, say, if I have a division operator which, when given some number as the dividend and zero as the divisor, returns #DivError (a member of the error type) and when given some number as the dividend and some nonzero number as the divisor returns a quotient (a member of the number type), that violates this principle, and a runtime typecheck will be required if I can't prove at compile time that the divisor will definitely be nonzero. This isn't simply a case of representation types; numbers and errorvalues have profoundly different semantics.

Or, more subtly, if it returns an exact number if it can prove that its answer is absolutely correct and an inexact number if binary rounding took place, and semantics on exact and inexact numbers diverge because you can use exact numbers as array indices and cannot so use inexact numbers; isn't that also a type distinction that goes beyond mere representation?

What do you claim is then true or untrue about a language with that kind of operator? What principle has been violated and what category of behavior, specifically, has been abandoned? What subset of languages is it now NOT a member of?

I've been observing only that for some languages you cannot soundly infer the output types at compile time without knowing facts more specific than just the types of the inputs. Therefore, your compiler will have to emit code that performs runtime typechecks, or your runtime will have to perform those typechecks as it interprets the code.

In the post above, I was talking about the case where you have this sort of type-ambiguous operator, and the programmer cuts one branch of possibilities off by simply asserting that the result is of a particular type. Lets say the programmer, knowing something the compiler cannot prove, simply asserts that the result is an exact number rather than an inexact number. Therefore, secure in the knowledge that you would not have reached code dominated by that assertion had the assertion been false, you don't need to do runtime typechecks (at least not for that particular case) downstream of that assertion.

That's why

That's why I said "or more generally, types vs tags". Everything you mention are not types in the sense of a (static) type system. They are only "types" in the sense of "dynamic typing", which people like me rather don't call types, because from the perspective of type theory they aren't.

Concretely, in a typed (i.e., statically typed) language your division example would require assigning a return type whose domain includes the error value. There cannot be any "type-ambiguous operator". Otherwise the type system wouldn't be sound (i.e., would be inconsistent). And in the strict technical sense, a type system that isn't sound isn't a type system.

As for the programmer "asserting" something: unless he can (and has the obligation to) actually prove it to the type system, allowing a hole like that would produce an unsound type system and, omitting the runtime checks, an unsafe language. You certainly could not ever allow something like that in JavaScript.

Okay, I see what you mean. I just don't care much about it.

Oh. So you're restricting the discussion to systems that *never* perform a runtime typecheck and always use single continuations, like, say, old-style Ada or Pascal or Algol or something.

I think any type theory that limits itself to only such languages is too restricted to deal with programming in general; it's far too specialized to be able to say anything very interesting.

Contrary to your assertion about "tags", it is not the case that failure to conform to this ideal requires representations that include type tags or similar. Polymorphic returns can be handled by typed continuations. IOW, instead of a single return address in the call frame, the call frame gives a different continuation address for each possible return type. Any routine polymorphic in return type simply calls the continuation that deals with the type it's returning. So the "type check" is performed exactly once, at the point where type information possibly diverges, and doesn't require storing type information anywhere or performing any redundant checks.
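A toy sketch of the idea at the source level (names hypothetical; a real implementation would put the alternative continuation addresses in the call frame, as described above): the callee takes one continuation per possible result type and invokes whichever matches what it produced, so the case split happens exactly once and no tag is stored or re-checked.

```typescript
// Typed-continuation sketch: one callback per possible result type,
// instead of a single return value carrying a runtime tag.
function parseInteger(
  s: string,
  onInt: (n: number) => void,    // continuation for the integer result
  onError: (msg: string) => void // continuation for the error result
): void {
  const n = Number(s);
  if (Number.isInteger(n)) {
    onInt(n); // the "type check" happens here, exactly once
  } else {
    onError("not an integer: " + s);
  }
}

// Each caller supplies code for both result types up front:
parseInteger("42", n => console.log(n * 2), msg => console.log(msg));
```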

Your objective in excluding any situation that causes the program to report a type error (or, worse, tries to add an integer to a boolean or something like that) is then a matter of proving that no code which reports a type error is 'live code' in the call graph. So instead of the usual warning about 'dead code' or 'instructions cannot be reached' you'd want a warning that such code is still 'live' or 'instruction can be reached'.

I don't regard type assertions as being privileged over any other kind of assertion. If I allow the programmer to make an assertion that, say, a string starts with 'a', I have no reason to treat an assertion that something is a string and not an integer any differently. In either case, if the programmer is wrong then a runtime error results and that is exactly the behavior that the programmer asked the system to provide.

If I require proof that no type assertion can ever fail, then why shouldn't I be requiring proof that no assertion ever fails?

Anyway, suppose I require this no-runtime-failure semantics; I exclude the assert library, which excludes type assertions. I require the program to deal with every kind of return value it can get from every routine. Maybe I even restrict all operations so that they can only ever provide one return type. The program will now pass static type checking. So what? If there is live code in the program that simply commands 'halt' in some circumstances, resulting in a "normal" program exit in a case where some different information might not have done so, has anything been gained?

Type assertions vs. ordinary assertions

Ordinary assertions, where a boolean expression is evaluated at run-time, are weaker in principle than type assertions, which can assert things that cannot be checked at run-time. Consider the type assertion f: Nat -> Nat, which says that f(n) is a Nat for any n:Nat. On the other hand, in non-dependent type systems, you're also limited in what you can assert with the type system, so ordinary assertions are also stronger in another dimension when compared to commonly used type systems.

But generally, I think you're right that combining these two mechanisms and making ordinary assertions amenable to static verification via type-level mechanisms is a good idea. I think this will be the direction things head.

FYI, my guess is that despite the title of your newest post, you and Andreas are still not on the same page. I recommend revisiting your understanding of how your Int64/BigNum example would be handled in a static language (it seems like you're misunderstanding something, but that could just be my misunderstanding). If you have a Numerical type that can either be Int64 or BigNum, then in a static language you'd probably model that with a discriminated union (a "tagged" type). In some type systems you could have Int64 and BigNum as subtypes of this tagged union type, but code that doesn't know which case applies will dispatch on the tag.

The rule is that types define everything you know about a variable. There is no built-in way to dispatch on types or ask 'what type is this thing?'. If you want dispatch, it must be explicit in the form of a tag or an existential type.
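A rough sketch of that modelling in TypeScript (all names hypothetical): a discriminated union carries the tag explicitly, and any code that doesn't know which case applies must dispatch on it.

```typescript
// Discriminated ("tagged") union for the Int64/BigNum example:
type Numerical =
  | { tag: "int64"; value: number }
  | { tag: "bignum"; value: bigint };

function toBignum(n: Numerical): bigint {
  switch (n.tag) { // explicit dispatch on the tag, not on "the type"
    case "int64":
      return BigInt(n.value);
    case "bignum":
      return n.value;
  }
}

const sum =
  toBignum({ tag: "int64", value: 7 }) +
  toBignum({ tag: "bignum", value: 5n });
```

The static type of every expression is fixed (`Numerical`, `bigint`); the runtime variability lives entirely in the tag, which is ordinary data.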

well, in a completely static language, there are no

If the language is 'statically typed' as I understand Andreas' definition, there is NO code in the compiled program that has to resort to anything like a discriminated union to figure out what type it's dealing with.

If 'types define everything you know about a variable', then the constraints imposed by Hindley-Milner analysis require a completely inadequate formulation of types.

In a Hindley-Milner constrained type system, autoconversion from int64 to bignum would either indicate some distinction not regarded as "type" in the first place, or it would have to be done Stalin-style where the compiled program winds up containing separate code vectors for every possible combination of types and maintains consistency while avoiding the use of tags by dispatching on type for both calls and continuations.

I am resigned to the idea that inaccuracy is required in mathematics, because you cannot implement accurate mathematics without infinite storage. Therefore "number" is a type that must implicitly have errors that depend on the values rather than just the types in some operations.

If different numeric types have different semantics, then "Bignums" are distinguished from other numbers in that the only error allowed in bignum operations is out-of-memory. "Int64" additionally permits/requires some other failures of multiplication/addition accuracy when values are too large to represent in 64 bits. "Float64" or limited-width rationals adds rounding errors to the allowable mathematics failures. etc.

In such a principled numeric system, addition on bignums could only ever produce bignums, even if the result were some value easily represented in other numeric types - because you'd want to preserve the information about which arithmetic failures are permitted in further operations. Likewise addition on float64s and addition on int64s.

In my own Lispy language/testbed I'm making a distinction between bignums and standard numbers for this reason; the type equations for that particular distinction are amenable to Hindley-Milner analysis and the result types carry information about whether arithmetic errors other than out-of-memory are acceptable in further operations.

But there's a further distinction within standard numbers that is not amenable to the Hindley-Milner analysis; whether or not the number is absolutely accurate and if not then how much of the value we're confident enough of to represent. And that follows something more like contagion rules, like the int64/bignum case I gave above.

"standard" computer numeric types like int64 and float64 are accessible mainly by having the programmer declare explicitly that wider error margins and/or additional kinds of errors are permissible for the purposes of some particular code. The compiler may then take the relaxed constraints as permission to use whatever representations available on the hardware meet those reduced constraints - but might not do so if other parts of the program are not subject to the same relaxed constraints and it decides it can be more efficient by avoiding conversions than by allowing the inaccuracies.

The point? The point is that these are all useful knowledge about numbers and Hindley-Milner constrained static type systems cannot represent more than one tiny part of them.

Technically true

If the language is 'statically typed' as I understand Andreas' definition, there is NO code in the compiled program that has to resort to anything like a discriminated union to figure out what type it's dealing with.

That's technically true, but only because types are static. The discriminated union type is the type the code would be dealing with. Andreas didn't mention this possibility (he talked about separating BigInt and Int64 into distinct types or about making this an implementation level distinction of a higher level type, presumably Int), but it's certainly consistent with his notion of static typing and I think it's more in line with what you intend.

I think a good numerical library is important, but I'm not a fan of the dynamism here. I don't even like this discriminated union idea I'm talking about. In cases where a discriminated union type is appropriate, that's IMO usually an implementation level decision, where the semantics should simply be e.g. Natural numbers. But I haven't finished designing my own numerics stuff, so I may change my mind.

Lost in misunderstanding

There is a lot of quite fundamental confusion here, and I seem to be doing a great job at worsening it. :)

First of all, let me clarify that I don't limit type theory, nor does type theory "limit itself". Rather, all I said (or tried to say) is absolutely fundamental to what types, type systems, and type theory are. From the POV of PL, a type is a static abstraction of what values an expression could evaluate to at runtime. A type system hence requires always assigning types that capture all possible outcomes.

I'm afraid your posts jumble static semantics, dynamic semantics, and implementation concerns to an extent that is difficult to unravel in a short enough reply. So I'll keep my reply limited to your example of a potentially invalid division (where you don't want to use exceptions). There are fundamentally two interesting ways to type such a division operator:

1. As / : Int × Int → Int + Error, where + denotes the union of either an integer result or some error indicator. Here the semantics of the Int type is such that it is only inhabited by proper numeric values. When you use division, you have to do a case distinction, though, to be able to get at the actual numeric result (if any).

2. As / : Int × Int → Int, where the Int type is itself inhabited by numbers and error values. Then a program doesn't need to make a case distinction, but instead every primitive operation on Int has to make one. In particular, it would be legal to pass in error values as arguments to / itself. Obviously, this is a weaker form of typing.

Of course, you can get even weaker, e.g. with / : Value × Value → Value, where the Value type is inhabited by all language values, requiring even more dynamic case distinctions. This is more akin to dynamic languages. But in fact, in those languages even function types themselves don't exist, so in reality you simply have / : Value, and Value is the only type you have at all. That's what dynamic languages do, and where the slogan "untyped = unityped" comes from.
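To make options 1 and 3 concrete, here is a rough TypeScript sketch (all type and function names here are invented for illustration; TypeScript's `number` stands in for Int):

```typescript
// Typing 1: / : Int × Int → Int + Error, as a tagged union.
// A caller must do a case distinction to get at the numeric result.
type DivResult = { tag: "ok"; value: number } | { tag: "error" };

function div1(a: number, b: number): DivResult {
  return b === 0 ? { tag: "error" } : { tag: "ok", value: a / b };
}

// Typing 3: / : Value × Value → Value, the "unityped" extreme.
// Every value carries a tag, the operation itself re-checks it,
// and error values may legally flow back in as arguments.
type Value = { tag: "int"; value: number } | { tag: "error" };

function div3(a: Value, b: Value): Value {
  if (a.tag !== "int" || b.tag !== "int" || b.value === 0) {
    return { tag: "error" };
  }
  return { tag: "int", value: a.value / b.value };
}
```

Note where the case distinctions live: with `div1` the check happens once, at the use site; with `div3` every primitive operation checks every argument, every time.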

The moral is that typing, by itself, does not prevent whatever dynamic semantics you would like to allow. But you have to define appropriate types, and a consistent semantic interpretation for those types. Different choices can imply different degrees of precision in the type system, and hence control what classes of errors will be caught statically.

There are fundamentally two

There are fundamentally two interesting ways to type such a division operator

I think there's another interesting way which I haven't seen much:

As / : Int × NonZero → Int

This forces code to insert guards before divisions rather than after.
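One way to approximate this in TypeScript is a "branded" number type whose only constructor is the guard. This is just a sketch of the idea (the brand trick and all names are mine), not the refinement typing a system like DJS would actually give you:

```typescript
// NonZero as a "branded" number: the brand exists only at the type
// level, so the sole way to obtain a NonZero is through the guard.
type NonZero = number & { readonly __brand: "NonZero" };

function nonZero(n: number): NonZero | null {
  return n === 0 ? null : (n as NonZero);
}

// / : Int × NonZero → Int — the division itself never checks.
function div(a: number, b: NonZero): number {
  return a / b;
}

// div(12, 4) would be a compile-time error: a plain number doesn't
// carry the brand. The guard has to come first:
const d = nonZero(4);
const q = d === null ? NaN : div(12, d);
```

The effect is exactly what the comment describes: the case distinction is forced before the division, and code that forgets the guard fails to type-check rather than failing at runtime.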

Okay, that's just laughable.

Okay, that's just laughable.

If a programming language implementation can detect and report a type error, whether at any stage of compilation or at runtime, then the language it implements is not "untyped." Whatever the heck you mean, calling a bunch of typed languages "untyped" is both obviously wrong and probably unrelated to your meaning.

If "function types don't exist" in many dynamic languages where functions not only exist but can also detect and report type errors, and higher-order functions not only exist but also can detect and report type errors when the wrong type of function has been given to them as an argument (as do CLOS constructors in Common Lisp) then clearly you are speaking an entirely different version of English than I.

This is apparently related to the equally meaningless idea that "types don't exist" in some languages which can detect and report type errors, which I questioned you about earlier. If this is an attempt to clarify your response to that question, it has failed.

I can only conclude that you do not have any experience with untyped languages nor any notion of what the word "untyped" actually means.

In a genuinely untyped language the only kind of data that exists is raw vectors of basic information units (bits or trits or qbits, or whatever depending on the architecture), usually in some multiple of the machine's word length if it has one.

Untyped languages never detect or report type errors.

In an untyped language, there are instructions that cause the processor/s to do various things with or to the basic information units, with no regard to any abstract notion such as the "type" of the value that some human being may think those units represent, and there are no other instructions at all. Most assembly languages qualify as "untyped."

The languages you're attempting to describe as "untyped" have nothing more in common with genuinely untyped languages than the languages you're attempting to describe as "typed."

I'm pretty sure you're not thinking of assembly languages here, so you must be thinking of some obviously-wrong meaning of the word "untyped."

This is distressing, because I'm pretty sure you're a bright guy who has some fairly important idea he's trying to communicate. But words mean things. If you don't use them with the same meanings as others then you will not be able to communicate with anyone.

The canonical untyped

The canonical untyped language is the lambda calculus, but your example of machine words as the sole type satisfies this constraint too. The untyped lambda calculus pretty clearly has only one type: the lambda. So it is obvious that "untyped = unityped" here.

So Andreas is correct: real CPUs are unityped in the same way dynamic languages are unityped, they just report errors differently via CPU status flags rather than exceptions, and their core type is generally the byte.

"Dynamically typed" languages use tags in a way that looks like type discipline to enforce abstraction boundaries, and they use the same terminology, hence the confusion. Absent a type system, these are not real types though.
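The lambda-calculus point can be written down directly: the one type is a function from that type to that type, and everything, Church numerals included, inhabits it. A small TypeScript sketch (names mine):

```typescript
// The untyped lambda calculus as a single recursive type:
// a value is a function from values to values, full stop.
type Lam = (v: Lam) => Lam;

const id: Lam = v => v;

// Church numerals inhabit the same single type.
const zero: Lam = f => x => x;
const succ = (n: Lam): Lam => f => x => f(n(f)(x));
const two: Lam = f => x => f(f(x));

// Reading a numeral back out by counting applications.
function toNumber(n: Lam): number {
  let count = 0;
  const bump: Lam = v => { count++; return v; };
  n(bump)(id);
  return count;
}
```

Everything above has the same type, `Lam`, which is the "unityped" observation in miniature.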

What a relief. I was afraid it was something important.

Now that I see what tiny corner of type theory you're talking about, I'm relieved. It doesn't matter whether you think the rest of the universe is real or not. I am reassured because I am certain that it will go on existing whether you believe in it or not.

I can reason statically before runtime about what possible values type tags can contain at runtime, in exactly the same way that people using static languages reason about what types their tagless values will contain at runtime. Using exactly the same process, FWIW. So I lose nothing by using dynamic types. QED.

Geez, after all the obfuscation and dire language about "unsoundness" and "not real types" etc it turns out that what you're on about provides no additional expressiveness, reveals no semantic weakness, and is in fact completely irrelevant to my goals anyway. No wonder I didn't get it; I was thinking it was something important!

The correspondence between

The correspondence between tags and types works fine for primitive types, but how are you going to get tags that indicate "function taking Nat to Nat" or "function taking type T to type T"? If you did have such tags, how would you ensure that they're only applied to functions that deserve them?

A type system is just a logic for reasoning about certain properties like the above.

Constructively defining

Constructively defining function types is no more difficult than constructively defining functions. You have a few primitive types, a vocabulary of constructors that can take primitive types as arguments, and your infinite universe of types is the transitive product.

Representing members of this universe with tags is also straightforward. Of these any particular program will use only some finite number, so you can construct a type table that associates the types you actually use with an id tag, and use the id tag to look up the type.

Analytically defining function types (ie, by reasoning about function definitions) is only slightly more difficult; if you know the types of the forms a function calls, you can derive from that the type of the function itself by induction and call the type constructor to get a tag for the induced type.

Eliminating runtime tags and typechecks is a useful optimization when the semantics of individual programs allow you to do it, but doesn't affect program semantics.
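The type-table idea is straightforward to sketch. Here is one possible TypeScript rendering (the descriptor shapes and interning scheme are invented for illustration): a universe built from primitives and a function-type constructor, interned so each distinct type gets one id, which serves as its runtime tag. This also answers how you'd get a tag for "function taking Nat to Nat":

```typescript
// A universe of type descriptors: primitives plus a function-type
// constructor, interned in a table so each distinct type in use
// gets exactly one id — and that id is the runtime tag.
type Ty =
  | { kind: "prim"; name: string }
  | { kind: "fun"; from: number; to: number }; // ids into the table

const table: Ty[] = [];
const keyToId = new Map<string, number>();

function intern(t: Ty): number {
  const key = t.kind === "prim" ? `p:${t.name}` : `f:${t.from}->${t.to}`;
  let id = keyToId.get(key);
  if (id === undefined) {
    id = table.length;
    table.push(t);
    keyToId.set(key, id);
  }
  return id;
}

// "Function taking Nat to Nat" is just another table entry:
const nat = intern({ kind: "prim", name: "Nat" });
const natToNat = intern({ kind: "fun", from: nat, to: nat });
```

Interning is idempotent, so tag equality coincides with type equality; the harder question raised downthread, how you check that a given function actually deserves the `natToNat` tag, is exactly where the analysis comes in.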

Inferring types at run-time

Analytically defining function types (ie, by reasoning about function definitions) is only slightly more difficult; if you know the types of the forms a function calls, you can derive from that the type of the function itself by induction and call the type constructor to get a tag for the induced type.

Sounds like you're inventing type theory. :) Note that this inductive reasoning about the functions you're constructing requires assigning types to the constituent pieces statically (at analysis time). That's what's meant by 'types are static'.

But I think you're glossing over some important/difficult details. How are you going to infer a type for parameters? i.e. you have a definition f(x) = expr. What function type do you give f? What types of x are admissible?

Also, suppose we have a function f (with parameters) that constructs and returns function g. When do you imagine doing this type analysis to assign a type to g? Do you have to do it for each call to f? Or is it done statically, once?

Finally, let me say that it is possible to have representatives of types around at run-time. Not all type systems allow this, but it can be useful to have multiple stages of type checking, so that new type analysis is done at run-time (e.g. to define new functions at a REPL). It's still considered static typing, though, if the new code is type checked as a closed unit.

Type theory can evidently use some inventions.

Note that this inductive reasoning about the functions you're constructing requires assigning types to the constituent pieces statically (at analysis time). That's what's meant by 'types are static'.

Type theory needs to be extended to runtime.

Doing things at analysis time does not imply that analysis time is separate from runtime. It's true that some analysis may be done before runtime, if you have the opportunity, but you don't always have that opportunity. If a function is called and run fifty times before analysis of it is complete, I don't have a problem with that. If the types of things it depends on are refined during runtime allowing additional information to be derived and I need to do another analysis to get whatever benefits accrue of that, I don't have a problem with that either.

If I refused to do analysis at runtime, I'd be committing myself to miss information about functions constructed (via lambdas, etc) at runtime or received (via mobile code, etc) during runtime. But that's all part of the universe that static type theorists believe doesn't exist, so you may have missed it.

Type theory needs to be

Type theory needs to be extended to runtime.

Already is in the form of staged programming and dependent types, as but two examples.

If the types of things it depends on are refined during runtime allowing additional information to be derived and I need to do another analysis to get whatever benefits accrue of that, I don't have a problem with that either.

Anything that can be derived at runtime has some semantic equivalent at compilation time. By this, I mean that you are executing the program in your head while constructing your dynamic program, so at every point a correct program has a statically verifiable state. The only complaint you can level against a particular type discipline is that it doesn't let you express the state evolution of a program in a way you would like. You are following a discipline anyway though, it's just one that a compiler can't check for you to ensure you haven't made a mistake.

If I refused to do analysis at runtime, I'd be committing myself to miss information about functions constructed (via lambdas, etc) at runtime or received (via mobile code, etc) during runtime. But that's all part of the universe that static type theorists believe doesn't exist, so you may have missed it.

There are plenty of type systems used to reason about runtime-generated values or code, like MetaOCaml. Andreas Rossberg published a series of papers on mobile code for ML modules as represented in Alice ML, for instance.

It's fine if you're not a fan of static typing, but it's also clear that you aren't familiar with the breadth of work in static typing, so making a broad claim about what type theorists do and do not believe seems like a bit of an overreach at best.

Sorry. I'm just getting impatient with this conversation.

I've responded too quickly and probably with too much heat a few times, treating people who said idiotic things as though they were being serious -- as though the "state of the art" is in fact what people have said here that it is. If in the course of doing so I have been responding to a retarded and silly idea of the state of the art, well, that is good. That confirms my impression that people have been telling me things about the state of the art which are in fact retarded and silly.

It's hard for me to take anyone seriously after they've claimed that types "aren't real" if you check them dynamically instead of statically. It's like a bad joke. A program is its semantics. The function from inputs to outputs that a program (or any part thereof) implements is what that program or subprogram means. You can express the same program or subprogram in different languages, even using different algorithms. And the same program or subprogram has the same type properties, regardless of whether they are analyzed or checked or whether in fact neither analysis nor checking happens.

For the same reason, it's hard to take anyone seriously who claims that types are a "syntactic" rather than a "semantic" property of programs. We are not trying to infer information about the text, we are trying to infer information about the meaning. The text is an irrelevant detail, and the same program could be expressed with any of an infinite number of different texts.

Finally, I can't take seriously anyone who says or implies that they believe that the type properties of individual programs are necessarily limited by the language in which they are expressed. Again, the same program or subprogram has the same type properties, regardless of how it is expressed or executed. While certain languages facilitate the construction of proofs of some properties, the proofs are not necessary for the properties to exist. The properties are there whether anyone proves it or not.

Utterances like these are clearly and blatantly wrong, to the point of being idiotic. To define things in such a way that they seem to be true is to force oneself to make wrong (or at least incomplete) definitions.

I have similar problems with partisans who claim that all analysis other than runtime tag-checking is a useless waste of time; they ignore the benefits of some decent error-detection tools and cut themselves off from worthwhile optimization strategies that can speed up programs considerably. They haven't been showing up as much in this conversation, but it's an important note; neither side has a complete picture of what is true and real and valuable.

It is my belief that type analysis analyzes/predicts exactly the same properties about programs that dynamic type checking checks during runtime. Neither is complete by itself.

If it's true that there are tools that do analysis during runtime, including re-analysis after dynamic changes in program state, and you're accepting that these are in some way part of the "static typing" universe, then congratulations on beginning to escape that dogmatic and limiting view of what types are.

When your progress has driven you far enough to make necessary some more-correct statements about what types are, as opposed to repeating the blatantly wrong statements about types I've been reacting to, I'll be ready to extend further congratulations. Until then, I think it's probably best for all of us if this conversation is regarded as being at an end.

system vs syntax?

I had assumed you had read Richard P. Gabriel's paper on the clash between two paradigms. But if not, it might help explain some of what is going on (it looks to be the exact same situation).

You have already articulated the paradigm clash between thinking of a language as a system and thinking of it as text to be manipulated.

Indeed you can do much more when you treat a language as a system rather than just text. Viewing the language as a system is a much richer way to look at it, things that are impossible to do with just text are trivial when you have the program as a thing rather than symbols.

But they really do mean what they say, no matter how unbelievable it is.
The basis of type theory is "how much can we find out about the program when we just treat it as symbols". This is a reasonably interesting question because, when you prove things about mere text, you don't have to understand what the text represents. When we teach multiplication, we don't make students really understand the numbers they multiply, we just teach some syntactical manipulations that give correct answers.


Type theory is trying to answer a different question than you are. But their terminology is internally consistent.

It is my belief that type

It is my belief that type analysis analyzes/predicts exactly the same properties about programs that dynamic type checking checks during runtime.

Show me a dynamically typed language that can ensure deadlock freedom, lock freedom, and termination guarantees at runtime the way TyPiCal does via its type system (among the many other properties it ensures).

I don't think he's talking about dynamically typed languages

He said that the distinction between syntactic (static) type checking and runtime checks didn't make sense to him. I think this is because you can run syntactic checks after your runtime information has reduced the domain of the function. So he did his best to make it make sense.

To Ray Dillinger "static" is compile time and "dynamic" is runtime.
To you "static" is syntactic checks and "dynamic" is type tagging.

He then said that he could prove things about the program at any time he chooses to; trivially static checks could be run at runtime (staging)

Just like mustard watches are a generalization of normal watches, any static check can be delayed to runtime.

(syntactic isn't quite the right word but I don't know the best one)
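Delaying a static check to runtime is easy to make concrete. Here's a toy sketch (the object language, checker, and all names are invented for illustration): a checker that can be run at any moment, including over freshly constructed or received code, followed by an evaluator that performs no tag checks at all:

```typescript
// A toy object language with number- and boolean-valued expressions.
type Expr =
  | { op: "num"; n: number }
  | { op: "bool"; b: boolean }
  | { op: "not"; e: Expr }           // booleans only
  | { op: "add"; l: Expr; r: Expr }; // numbers only

type ExprTy = "number" | "boolean";

// The "static" check: rejects ill-typed terms before evaluation.
function check(e: Expr): ExprTy {
  switch (e.op) {
    case "num": return "number";
    case "bool": return "boolean";
    case "not":
      if (check(e.e) !== "boolean") throw new Error("type error");
      return "boolean";
    case "add":
      if (check(e.l) !== "number" || check(e.r) !== "number")
        throw new Error("type error");
      return "number";
  }
}

// The evaluator relies on check() and carries no tag checks.
function evaluate(e: Expr): number | boolean {
  switch (e.op) {
    case "num": return e.n;
    case "bool": return e.b;
    case "not": return !(evaluate(e.e) as boolean);
    case "add": return (evaluate(e.l) as number) + (evaluate(e.r) as number);
  }
}

// The static phase, just invoked late — at "runtime" of the host.
function run(e: Expr): number | boolean {
  check(e);
  return evaluate(e);
}
```

The check is still "static" in the sense that matters: it analyzes the term as a whole before any of it executes, which is exactly the staging point being made above.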

He then said that he could

He then said that he could prove things about the program at any time he chooses to; trivially static checks could be run at runtime (staging)

Sure, you can run "static" type checking anytime you want, even at runtime as you say. But the type checking classified as "static", and the "type checking" performed by dynamically typed languages are not the same. The latter is a pale imitation of the former.

Instead of speaking in generalities, I suggest a specific example like TyPiCal, which I think is a perfect example of why static type checking is more powerful than the "type checking" performed by dynamically typed languages. You can certainly perform dynamic checks while a program is executing to ensure that a deadlock or livelock has not occurred, but what do you do when you've detected one? Abort? Backtrack? By that point you may have already committed certain side-effects. What sort of forward-looking dynamic analysis can you perform in a dynamically typed language to ensure this doesn't happen? What sort of analysis can ensure termination?

Ray is correct that the properties of a program are there whether or not they are checked statically, but you can't perform many of these checks dynamically. To ensure the properties really are there, you need static typing.


There is a misunderstanding between you and "the other side" in this debate: I think there is something simple we haven't properly explained to you, and you think there is something simple other people don't understand. The difference is that you call (your bizarre interpretation of) things we said "idiotic". I'm not interested in continuing this conversation on these terms.

(I have tried to explain the non-obvious fact that the notion of "type soundness" is relative to a set of observed "errors" in this answer of mine, and I think it is the distinction that could help reconcile the two point of view.)

Sigh. Mea Culpa.

Forgive me for using the "I" word; I'm describing my experience of these utterances, whereas the utterances themselves may be merely surface features of something which probably does not merit that adjective. And, it was late at night, I was short on sleep, and, as I said, I was growing quite impatient with phrases like "not real types" and so on. It's hard to remain polite under these circumstances.

Obviously "checking" does not make predictions. "Checking" can only be done after the fact. Anything that comes before the fact is necessarily "Analysis."

Therefore any method I might propose that makes predictions (eg, assuring that something will or will not happen rather than assuring that something has not happened or detecting when it has happened) would necessarily be based on analysis and would therefore be considered, if I understand correctly, to be "static."

So those who have demanded to see a "dynamic checking" system that makes predictions (aka guarantees), as some have done, are at best engaging in a ruse and intend to define whatever I might respond with out of existence.

I flatly refuse to limit the inputs to analysis to just the text of the program. I will use whatever information I have available about the current state of the program as well, including information that does not become available before runtime, such as inputs. I would consider any analysis that requires such information to be "dynamic" because obviously it cannot take place in the absence of a particular program run. Others are apparently divided in opinion on what term to use, some claiming that "static" analysis limits itself to the program text alone and some apparently claiming that *ANY* analysis or prediction is necessarily "static."

At this point I'm no longer interested in using the terms "static" and "dynamic" as regards types. They apparently set off religious wars and hinder worthwhile discussion. And apparently each of us finds the other's interpretations thereof to be "Bizarre." This entire discussion reveals a "scorched earth" of terms that cannot be used because of conflicting, anti-intuitive, biased, or, as you say "Bizarre" definitions.

So let's say more simply and directly that I have the intent to do both analysis (when I need to make predictions or assurances) *and* runtime checking (of things I cannot or have not yet been able to predict) of program behavior including but not limited to type information. Whether the information necessary to do this becomes available before or during runtime, or is available from the program text or from some other source, are not distinctions I would use to limit those activities in any way. I would like to detect as many classes of errors as soon as possible; but I accept that for many of them the soonest possible detection may be at runtime.

So call my approach to it whatever you want, or advise me what term I ought to use to describe it. But it has become clear that both "static" and "dynamic" are scorched-earth terms defined drastically differently by partisans on both sides of the fence and in terms of other words defined with equally drastic differences -- to the point where any discussion starting with those terms will shed more heat than light.

Type theory is static in the

Type theory is static in the same sense that calculus is static. If you ask, "we're going to have more information available at run-time, so why don't we make calculus more dynamic?" the reaction is going to be 'wat', similar to the reaction you received here. Once you explain that you're just talking about doing ordinary calculus at run-time, the reaction changes to 'ok...why?'

No. Just ... No.

You're arguing now not about how to analyze or check a particular instance of a running program, nor even about what can be predicted about all possible instances of a program that can be run starting with a particular source code, but merely about who gets the right to use which words.

I'm interested in the former but not the latter. I'm not going to fight that fight.

Have a nice day.

By way of explanation...

That view of type theory does not encompass the case in which the source code may change during a program's run. Nor the case where the definition of a particular type may change during the program's run, without loss of program state and with or without simultaneous updates to live instances. I'm letting bugfixes (ie changes in the source code itself or the semantics of the language) take place without stopping programs or forcing them to lose information.

That is why what you're calling "type theory" is not what I'm doing. The definitions that support that notion of "type theory" are simply wrong as applied to my project.

This is Gabriel's "Language Theory" vs. "Systems Theory" divide. I'm working on a programming system, not just a programming language.

I cannot rely on a constant text. I cannot rely on a constant function mapping text to semantics. I cannot rely on the program's current state being one it could have reached after starting with the current program's code.

I'm doing analysis and inference (and runtime checking insofar as the information from the former two is incomplete) about types, but according to this view I don't have the right to call any of it "type theory." Program changes at runtime may falsify any portion of what I've derived and require reanalysis, and the consistency of program state in the face of changing program requires its own analysis.

I'm considering the whole thing, including dynamic tag checking where necessary, to be type theory. According to your view, it isn't actually type theory (ie, "static like calculus") but it certainly includes type theory as a subset. Anyway it doesn't matter what anyone wants to call it. Calling it anything won't change what I'm doing.

Ok, good luck

I think you'd still find type theory applicable in the same way you'd still find calculus applicable if you said of my physics calculations, "ya, but the gravitational constant is changing, too". Which is to say that if enough stuff is changing, it becomes hard to apply the theory, but it still theoretically applies. Anyway, we probably should have quit three inches to the left, but good luck with your system. Let me know when I can try it out.


Ray, frankly, all I read is more bogus assumptions about what established PL theory can or cannot do based on your misconception of what it is or does. Rest assured that it can encompass all of what you are looking for, but also requires developing a thorough and consistent semantic model of those phenomena. That's the hard part, and should you ever try to go through a comparable exercise in all seriousness you'd perhaps start to appreciate the terminological distinctions and rigor that the field, like any other scientific discipline, has established.

See what happens when PL

See what happens when PL designers, implementors, and theorists get together in one place?

My desk

My desk accommodates a PL designer, implementor, and theorist in one place, mostly without such problems.

There is definitely a case

There is definitely a case to be made that PL theory IS PL design and implementation, since they involve many of the same processes: you design a theory to model the world, you implement the mechanics to make the theory work. You might even get a real programming language out of that.

The disconnect comes when we start thinking about designing for the actual human programmers who will use the language, or implementing messy systems that include very large programs, lots of non-determinism and interop with various crap external entities. Then we begin to see why PL theory might not be PL design and/or PL implementation.

When the connection breaks down and you don't notice it, we get into arguments like this, where everyone is sure about their truth.

Hear, hear

Hear, hear. Given that I'm a human programmer who is paid for the very messy job of implementing very "real-world" PLs (JavaScript) using very "real-world" PLs (C++), I can claim with confidence that I am very well connected to all three sides of that fence. And I can attest that most "real-world" languages suck big time, and that almost every single aspect in which they suck is due to the fact that they were "designed" in a semantics-free vacuum, and that even the slightest amount of attention to well-established theoretical principles and foundations would have made them suck two orders of magnitude less. ;)

Ouch! Bringing out

Ouch! Bringing out JavaScript and C++ can never go wrong; when making any argument with those languages, "C++ therefore P" is always true as long as P sounds reasonable, since C++ gets everything wrong anyway.

I believe we call this common form of argument "Reductio ad C++"

Fair enough

Unfortunately, though, most of the rest isn't all that great either.


Too narrow
to say any

Type theory needs to be

Type theory needs to be extended to runtime. [...]

I just mentioned staging in my last post, which can be viewed as a way to do type theory "at run-time" (but really you're just running code that's doing static typing on some other code).

And certainly you can do what you're alluding to here, where you start running the dynamically typed code while static checking takes place in the background. That clearly doesn't require innovation in type theory. It's only relevant if your type checker is slow, though I think it is a good idea to let code run even if the type checking fails (particularly when it's only precise types causing the failure).

All of this is still considered static typing. If you're analyzing blocks of code at a time and tagging expressions, that's static typing. If you're tagging values and checking as you run, that's dynamic typing. You can believe that you've uncovered some big hole in type theorists' programming model that they just don't understand if you want to, but I advise you against making any big bets along those lines.

I can reason statically

I can reason statically before runtime about what possible values type tags can contain at runtime, in exactly the same way that people using static languages reason about what types their tagless values will contain at runtime. Using exactly the same process, FWIW. So I lose nothing by using dynamic types. QED.

I'd like to see you produce a dynamically typed program that does not have to check tags at runtime. If you can't, then you do indeed lose something.

The set of programs expressible in a typed language with a uni-type is strictly larger than the set of programs in a dynamically typed language, because it can express programs which don't have to check tags, as well as programs that do.

programs can be static even if the language isn't.

Producing a "dynamically typed program" is not the same thing as producing a program in a dynamically typed language.

If we are talking about a "statically typed program" -- ie, one that comes with a static type analysis and proof because it has been initially constructed in a statically typed language -- the interesting question to me is when we express that program using a dynamic language, can we analyze that dynamic-language expression of the program to recreate the type analysis and proof?

My primary goal is to create a language capable of expressing the semantics of programs written in other languages, without lowering the abstraction levels of those programs, ie, via a library defining the functions/syntax available in that other language combined with a shallow syntactic transformation simply expressing the AST of the source code in another form.

I'm defining "same semantics" as "having the same runtime behavior in terms of I/O." Whether or not type tags are checked, or indeed even present, is an implementation detail far below that level, so if there is something lost, then it's nothing I care about.

Applying static analysis to translated programs is a goal, and it is my sincere hope that in the course of building analysis tools to re-create the static proofs that other language systems have provided in the construction of those programs, we will learn something about analysis. But that isn't the most important goal, and if it turns out we cannot do so, then that counts as an important thing to learn too.

I think I see where you're wrong.

Wait... I think I spotted the mistake.

...all I said (or tried to say) is absolutely fundamental to what types, type systems, and type theory are. From the POV of PL, a type is a static abstraction of what values an expression could evaluate to at runtime. A type system hence requires always assigning types that capture all possible outcomes. ....

The two crucial words -- "at runtime" -- reveal that you're not actually talking about types and the ability to detect and report type errors; you're talking only about the ability to detect and report type errors BEFORE RUNTIME.

IOW, you mean what can be done in a static type checker, which is a useful tool for almost any language, but a very small subset of what I've been thinking of types and type theory as including.

Static type checkers are not "absolutely fundamental to what types, type systems, and type theory are." That's the laughable mistake that's had us talking past each other and unable to comprehend each other.

The ability to detect and report type errors -- AT ALL -- is absolutely fundamental to what types, type systems, and type theory are.

"Type Theory"

The term type theory, which dates back to at least the Principia Mathematica, originally published in 1910, is indeed usually about static classification of terms.

Edit: The correctness of this comment doesn't seem to have been preserved during rewording. Principia uses the term 'type' (in the same way) but, AFAIK, not 'type theory'. My guess is that 'type theory' came much later.

Heh. Of course. In 1910,

Heh. Of course. In 1910, lacking actual computers, they couldn't do anything EXCEPT static theory!

Sure, I don't have a problem with static type theory. I like static type checkers, and would want to provide one with any language. But holy cats, thinking that's the ONLY tool in the bag is nutty, and refusing to start a program until a type proof has been constructed is an unhelpful delay in most cases.

Okay, static checking is more important in mathematics, where nothing is real unless you can prove it. But its methods are applicable whether you deprive yourself of runtime type tags or not, and additional expressiveness is available if you don't.

My goal in language implementation is creating a translation-target dialect. Programs originally coded in statically typed languages will come with type proofs. That's nice. I will want to be able to use type theory to reconstruct those proofs, and will be constructing further tools to do so. There's nothing wrong with that.

Programs originally coded in dynamically typed languages usually won't come with type proofs. But tools that can re-create the type proofs for statically typed programs will usually be able to construct type proofs for those programs too. And the rest? The programs I can't provide type proofs for? If they are useful, they are probably type correct regardless of whether I can prove it.

In a translation-target dialect I will be obliged to be able to represent all of these classes of programs, so a static type system as usually implemented would actively impede my progress.

Wait... there is something deeply wrong here.

Why is doing type dispatch on method calls different or more acceptable to (static) type theorists than doing type dispatch on continuation (return) calls?

Function calls and function returns are completely isomorphic, as one can prove by transforming a program into continuation-passing style. What possible justification is there for considering the first case "sound" and the second case "unsound"?
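As an illustration of that isomorphism (sketched here in TypeScript rather than the paper's DJS), the `negate` example can be CPS-transformed so that the dispatch formerly performed on the return value is now performed at an ordinary function call. The `negateCPS` name and signature are illustrative assumptions, not anything from the paper.

```typescript
// Direct style: the caller dispatches on the returned value's type.
function negate(x: number | boolean): number | boolean {
  return typeof x === "number" ? 0 - x : !x;
}

// CPS: the same dispatch now happens at a function *call* -- the
// continuation k receives the result, so "dispatch on return" and
// "dispatch on call" are the same program after the transform.
function negateCPS<R>(
  x: number | boolean,
  k: (r: number | boolean) => R
): R {
  return typeof x === "number" ? k(0 - x) : k(!x);
}
```

A caller that type-dispatches on the return value in direct style simply becomes a continuation that type-dispatches on its argument.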

The more I think about this the more fundamental a question it seems to become. I may start a new thread about it.

Type systems aren't just

Type systems aren't just about checking type correctness anymore.

Were they ever?

Known unsound is better than unknown unsound

Regardless of any value judgment on Dart's type system, I think there is a pretty big difference between localized, well-understood sources of unsoundness (for example, characterized by a well-delimited set of dynamic checks) and potential, unknown sources of unsoundness.

Besides, I think the whole subthread about "practicality" you spawned is maybe a bit off-topic, as I'm not sure the work being discussed is oriented toward toolability. The way I understand it, at least, it is more about understanding the ingredients that allow one to statically reason about the strong form of dynamism of the Javascript language, which is an interesting and important endeavor in itself.

It's great if it ultimately results in better tools (or at least well-understood implementation strategies, etc.), but I don't think it's relevant to ask yourself, when reading this present paper, "will it help me with auto-completion or to produce better warnings?". In my (subjective) opinion, soundness would be a better way to evaluate the medium-term success of this particular and impressive piece of work.

That would put the work in a

That would put the work in a very good perspective. If this is what they meant, I wish they would have said this explicitly.

I agree; precise statements

I agree; precise statements of type soundness are usually either wrong or proven correct!

When we go about proving soundness, we will likely be able to start with the (pen-and-paper) proof techniques from related type systems that support strong update (i.e. Alias Types), even in the setting of dependent types (i.e. Low-Level Liquid Types). Like you said, any latent errors are probably the kind that are reasonably easy to correct.

Famous last words

any latent errors are probably the kind that are reasonably easy to correct.

Famous last words ;). I have become extremely skeptical of assumptions like that (which is also why I disagree completely with Sean's bold mischaracterisation above).

Unreasonable expectations

IMO, the biggest stumbling block for real-world JavaScript researchers has been a tractable yet representative semantics to prove soundness against. That's started to change with Arjun and Joe's work. After that, it's unclear to me how much benefit canonical mechanized proofs have over fuzzing (e.g., PLT Redex) and web-scale testing. The trend, before their work, was each paper introducing its own ad-hoc JS semantics that, in practice, skipped much of the tricky stuff. We'll see in the next couple of years if they did enough to make it sufficiently easy to work with :)

Consistency of the formal system vs. formal/informal mismatch

The causes of unsoundness I was thinking of are more of the form that appear when you mix subtle forms of polymorphism with subtle forms of mutable state, where it is the theory itself that is inconsistent if you don't restrict it in some way. This can be exhibited even with a relatively toy language.

It is rather orthogonal to the fact that even a self-consistent formal system may not actually correspond to the real-world object under study. Indeed, lambda-JS is a fabulous piece of work in this respect, and I hope we can get an environment where specifiers and implementers of the language interact with the researchers that write and test these precise specifications.

The pure nested refinement

The pure nested refinement logic of System D is still at the center of DJS. On top of that, we layer support for mutation.

PDF in Chrome

The PDF for this paper is unreadable in Chrome (v. 22) on my laptop.

The diagrams and figures look fine, but the main text is badly jumbled.

Works in Chrome 25

Perhaps there was a Chrome (Foxit) bug, now fixed. PDF.js in Firefox 19 works well too. Adobe Reader 10.1.6 also renders the paper correctly.


Worked in Chrome (v. 25) for

Worked in Chrome (v. 25) for me.

Can anyone tell me if this is for real? Is there some magic they are getting from SMT solvers that makes this decidable/practical?

No magic, AFAICT

From a quick read of the paper, it looks like they had to put an impressively wide variety of techniques into play, but each individual technique looks reasonably familiar.

The basic engineering task to getting SMT solvers to work well is to avoid forcing them to do things they are bad at: inferring existential instantiations or (worse) quantifier alternations, and (worse still) inferring loop invariants.

Since the language is explicitly typed, you have a type annotation for all functions, including recursive functions. This means that the programmer is supplying the loop invariants, and the system doesn't have to guess them. Deep quantifier alternations are mostly ruled out syntactically, since the language of types allows only prenex universal polymorphism, and no explicit existentials. Existential quantification is introduced in a controlled way: the predicate language has no forms of quantification, except via the comprehension types {x | p}, which is a rather well-behaved existential introduction (it's somewhat similar to the restriction that makes typechecking GADTs relatively tractable).

However, the addition of mutable state also introduces lots and lots of potential existential branching -- you have to consider whether each pair of values in scope may or may not alias. (This is O(n^2) bad!) They get around this by exploiting the fact that Javascript creates objects at the drop of a hat, and that fresh objects have no aliases. This lets them use an alias type discipline (a variation on linearity) to precisely track the state of uniquely-owned objects. When values become aliased, their type system moves to forbidding strong updates, which frees them from having to worry about aliasing.
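A rough runtime analogy in TypeScript (this is not DJS's mechanism; TypeScript's union types merely stand in for strong updates here) of why freshness matters for aliasing:

```typescript
// A freshly allocated object has exactly one reference, so a checker
// can in principle give its fields new ("strong") types after an
// assignment -- here a union type crudely approximates that.
let fresh: { f: number | string } = { f: 0 };
fresh.f = "now a string"; // the single owner changed the field's type

// Once two names may alias the same object, only "weak" updates are
// sound: each field must keep one fixed type, because every alias
// observes every write.
const a = { g: 0 };
const b = a; // a and b alias the same object
a.g = 1;     // must remain a number; b.g sees the write too
```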

The main thing that's surprising to me is the way they exploit the uninterpreted constants support of the SMT solver to interleave subtyping and SMT solving. I don't know if this is well-known in the SMT world or not (probably not, since they seem to have a recent POPL paper about it?). Either way, it's pretty clever.

updated PDF

I fixed the link to point to the camera-ready version of that paper. Please find more info at

Getting Started Threads

I guess it is time to revisit our getting started threads.

The long and short of it is that in some circles "types" means static types and refers to what can be done with values, whereas in other circles it means the classification of runtime values. Most interesting work on type systems belongs to the first camp. Interesting work on other things happens in both camps.