Why is Static Typing Hard?

I’m working on a statically typed language and finding it slow going. In the history of dynamically vs. statically typed languages, it seems there is a trend: dynamic languages are implemented in a weekend, and refined iteratively for 10–20 years thereafter—static languages are defined and implemented over the course of 10–20 years, after which they’re basically complete.

The disparity is astonishing! Why do expressive static languages seem so much more difficult to implement? I contend that the field of type theory is not mature enough to answer the questions that dynamically typed languages ask.

Why is A simpler than A + B?

Why is A simpler than A + B? The answer is obvious. Dynamic languages simply avoid static typing altogether, along with its performance and semantic-feedback benefits, so not only do you not need to design and implement a type system, you also don't need to write an optimizing compiler or an IDE that provides code completion.

I am not sure what you mean.

I am not sure what you mean. Of course, if you don't have to implement a type system there's less work to do.


To make this more concrete: with a type system there are two issues, designing the typing rules and implementing the language. The first is hard if you want to do something new; otherwise I think it is fairly easy to find specifications of good type systems. As for implementation, here too there are standard algorithms, unless you want to do something particularly novel. Most programming language texts cover those, I think.

It seems harder because you need to learn more

It is indeed a little harder to implement a simple typed functional language than an untyped one: you need to write a typechecker. If you know the right techniques, writing a typechecker with useful amounts of type inference is actually not terribly difficult: for example, I could implement a typed functional language in a weekend or so (and have done so, in fact).
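
To give a sense of the scale involved, here is a minimal sketch of a checker for the simply typed lambda calculus in Haskell (the data types and names are illustrative, not taken from any particular implementation):

    -- Minimal typechecker for the simply typed lambda calculus.
    -- (Illustrative sketch; names and representation are hypothetical.)
    data Ty = TInt | TArrow Ty Ty
      deriving (Eq, Show)

    data Term
      = Var String
      | Lit Int
      | Lam String Ty Term   -- \x : t. body, explicitly annotated (no inference)
      | App Term Term
      deriving Show

    type Env = [(String, Ty)]

    typeOf :: Env -> Term -> Either String Ty
    typeOf env (Var x)     = maybe (Left ("unbound variable " ++ x)) Right (lookup x env)
    typeOf _   (Lit _)     = Right TInt
    typeOf env (Lam x t b) = TArrow t <$> typeOf ((x, t) : env) b
    typeOf env (App f a)   = do
      tf <- typeOf env f
      ta <- typeOf env a
      case tf of
        TArrow t1 t2 | t1 == ta -> Right t2
        _                       -> Left ("cannot apply " ++ show tf ++ " to " ++ show ta)

Adding useful inference on top of something like this is mostly a matter of introducing type variables and unifying them, which is exactly where the standard algorithms come in.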

But if you don't know the relevant techniques, then you have to learn them, and that's always slower. Moreover, if you don't know the literature, then you don't know where to learn the relevant techniques, which leaves you with a bit of a chicken-and-egg problem.

Is there some specific type

Are there specific type systems that you could recommend for somebody wanting to implement his first statically typed language? (I've done several dynamically typed ones, but I don't like that very much.) By the way, in which language did you manage to implement something in a weekend? Am I right in guessing Lisp, Haskell, or OCaml, rather than C, C++, or Java?

Programming Language Zoo

Programming Language Zoo has a couple of small languages with type checkers.

Thanks, I'll take a look at

Thanks, I'll take a look at these (and PLAI and EOPL). If only the examples were in C++ :-)

Dear God.

Dear God.

I thought as much. But I'll

I thought as much. But I'll manage :-)

You know EOPL and PLAI,

You know EOPL and PLAI, right?

Well, if you are designing a

Well, if you are designing a mainstream OO language and want to include type inference, things get complicated very quickly.

I should clarify

If you know the right techniques, writing a typechecker with useful amounts of type inference is actually not terribly difficult

Yes, it’s relatively easy to use existing results to implement a language. I could write Hindley–Milner on a napkin at this point. The problem is that when you start going beyond that, nobody has any answers. Very few people in the world are qualified to help, and those that can are usually too busy to.

Contrast dynamically typed languages, where you’re free to innovate in language semantics and try unproven ground, because you’re not required to formally justify it in order to implement it.

If you don't know the relevant techniques, then you have to learn them, and that's always slower

Naturally. It has been valuable for me to learn about type theory on my own, though, because I have in the process learned how to learn more effectively. Reading papers is a skill unto itself!

re: I should clarify

I think that's the nub of the difference: "because you’re not required to formally justify it in order to implement it."

You could attribute the difficulty of innovating with static type checking to the perfectly reasonable ineffectiveness of mathematics.

Progress in mathematics

Progress in mathematics almost demands a complete disregard of logical scruples.
    — Morris Kline, Mathematical Thought from Ancient to Modern Times, §18.4.

Lisp

Based on the history of Lisp, I'd say dynamic languages can continue to develop indefinitely, not just for a couple of decades; which may explain why, with my interest in abstraction as a means to amplify power unboundedly over indefinite time, I've focused mostly on the dynamic side of things.

My own thinking on the difficulties of types: here.

Dynamic languages still have

Dynamic languages still have types (unless they are really untyped), you just don't have to write them down a lot and they are incredibly precise (control flow sensitive...obviously).

You can even have types in a statically typed language and never see them (e.g. via H&M). If a type is used in a program but the programmer never sees it, is it still a type?

Types are a state of mind

Types are a state of mind, I think.

On one hand, you can say that you have data structures and things that are true about those structures. You can then reason about your data structures, going from one proposition about your data structures to another.

On the other hand, you can say that you have data structures that have types. You can then reason about the types. Your reasoning rarely touches on actual data structures at all; the types become the subject of study in their own right. It is, in practice, a very different way of thinking, and seems to me (as I tried to express in my blog entry) to make it very hard to achieve abstraction that doesn't degrade with each new layer you build on top. The types progressively drag you down until you're immobilized, like a dinosaur stuck in a tarpit.

If the programmer really can't see the types, the types aren't there. But I don't think type inference is at all like not seeing the types. Look at all the work that goes into trying to devise better type inference systems; that's abstraction, with the twin disadvantages that (1) ordinary programmers aren't even allowed to do it but must leave it to the high priesthood of language designers, aka "us", and (2) when the abstraction does take place it has to directly confront types at their most esoteric.

Ask not what your program can do for your types...

Types shouldn't be straitjackets; they should enable semantic feedback that makes writing code easier. We shouldn't be slaves to our types, but most modern type systems make us just that.

If one had different thoughts about what type systems should do, then one could imagine a more flexible type system with more powerful type inference, improving usability characteristics substantially.

If the programmer really

If the programmer really can't see the types, the types aren't there.

I can't agree to that. Types are properties of the program you've written that your language understands[1]. The fact that you don't have to add explicit annotations declaring those properties has nothing to do with whether those properties are expressed and understood in your language.

Look at all the work that goes into trying to devise better type inference systems

There's very little work that's on type inference alone, i.e. that doesn't also accompany a new type system. The only example I can think of is some work on intersection types, mainly because intersection typing is so precise that some type systems lose principal typing. The work on new type systems is where the better abstractions come into it. Inference has little to do with abstraction.

I also don't agree that type abstraction degrades with each new layer, or that they drag you down, but that's just another type system debate that's been argued ad nauseam, so I won't get into it.

[1] by "language understands", I generally mean "compiler understands", although if you want to be inclusive you could include the runtime of dynamically typed languages, though I'd argue these properties are extremely limited, perhaps even non-existent.

Emphasis on "really"

To be clear: my emphasis is on the word really. If the language uses types in a way that the programmer needs to be aware of, then the types aren't really unseen by the programmer.

Yes, the work I'm referring to involves a new type system. That's rather my point. An ordinary programmer doesn't get to design a new type system; new type systems are what those kinds of abstractions require, yet ordinary programmers aren't allowed to do that, putting those kinds of abstractions outside the reach of ordinary programmers.

If the language uses types

If the language uses types in a way that the programmer needs to be aware of, then the types aren't really unseen by the programmer.

I still don't agree. The types are there even if the language doesn't use them for, say, dispatch or something. Just because the types map perfectly 1:1 with your mental model doesn't mean they aren't really there.

new type systems are what those kinds of abstractions require

Once you support products, sums, some sort of continuations, serialization, and some means of composition/extension, you've covered probably 95% of what programmers do every day. Add higher-order composition/extension, and you cover another 4%. Every problem I've run into with C# can be traced back to lacking one of the above features, all of which have well known representations in ML-like type systems.
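
For what it's worth, the first two items on that list take one declaration each in an ML-like type system; a small Haskell sketch (the types and names are invented for illustration):

    -- A product type: several fields bundled together.
    data Customer = Customer { custName :: String, balance :: Double }

    -- A sum type: a value that is exactly one of several alternatives.
    data PaymentResult
      = Approved Customer
      | Declined String   -- reason
      | Retry Int         -- seconds to wait

    -- Handling of the sum is checked by the compiler: forget a case and you
    -- get a warning (with -Wall) rather than a runtime surprise.
    describe :: PaymentResult -> String
    describe (Approved c)   = "approved for " ++ custName c
    describe (Declined why) = "declined: " ++ why
    describe (Retry n)      = "retry in " ++ show n ++ "s"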

Furthermore, the types of questions I see on stackoverflow every day don't require such fanciful type systems. So my experience doesn't agree with your claim that abstractions programmers need require new type systems. Take that anecdotal evidence as you will.

The only new abstractions of interest to common, everyday programming consist of adding more precise static checking, like contracts and refinement types. Even then, programmers can and do extend these type systems, as Microsoft has done with their Code Contracts tool. All that's required is an accessible code format. Types don't really impede this sort of reasoning. Certainly if you want typed metacircularity, that's still something of an open problem, but type enthusiasts have no problem with some runtime checks where necessary.

Every once in a while I hear

Every once in a while I hear some variant on the theme that abstraction is hereafter only of interest within a narrow range, basically a solved problem; I was told that when I entered graduate school in the late 1980s. It puts me in mind of the belief (I find it in Wikiquote attributed to Thomson, 1903) about pre-Einstein physics that "our future discoveries must be looked for in the sixth place of decimals." I think there are always fantastic new developments just waiting for us to discover them, and in programming, abstractive power is the key to enabling their discovery.

I think there are always

I think there are always fantastic new developments just waiting for us to discover them, and in programming, abstractive power is the key to enabling their discovery.

I agree, but I doubt any such developments will look anything like current dynamically typed languages either, and they are just as limiting given their primitives in this regard. For instance, the Bloom EDSL is something along the lines of a paradigm shift, but it's just as easily expressible in a statically typed language.

Working on it.

Working on it.

Types dragging you down...

I agree that many type systems have this problem, especially those focused on nominative typing. The types are not compositional: they can neither be composed by a few standard operations into sensible larger types, nor decomposed into smaller ones. Many types and kinds violate the single-responsibility and open/closed principles.

But I think we can do a lot better. Dependent types. Structural types. Substructural types. Carefully separating responsibilities. Keeping types open to extension.

We've experienced many languages that fail to achieve higher levels of abstraction. And we've experienced a few languages that reach very high levels of abstraction (Lisp, FORTH, etc.). I think the same will be true for typeful programming.

And I wish you luck with

And I wish you luck with that approach, but I don't believe in it myself (I love your enthusiasm for it; the ideal is people with different ideas pursuing all of those ideas enthusiastically, something I have an unfortunate tendency to treat as if it didn't need saying). Seems to me you're talking about ever-more-complicated type systems, and the complexity is an important part of what causes us to get tangled up in the typing.

Finding a clean and simple way to express something is generally aided by flexibility in how one is allowed to express it, and my sense is that types are at their core an inflexible way of expressing propositions about programs — natural for some things but not others, so that framing everything that way is likely to stifle generality.

ever-more-complicated type

ever-more-complicated type systems

The types I'm discussing have fewer, more precise, orthogonal responsibilities. Further, with dependent typing, one has the flexibility to find clean and simple ways to express many properties. I believe the resulting model is much simpler, and much more likely to avoid a mess of future type system extensions.

complexity is an important part of what causes us to get tangled up in the typing

True. That's why it's so important to simplify composition and manipulation of types, not just the properties the types express. I see nominative types to be a terrible barrier to moving beyond 'shallow' uses of the type system.

types are at their core an inflexible way of expressing propositions about programs — natural for some things but not others

There are many ways to express propositions and reason about programs. One can use process models, effect algebras, staging, or capability security models. But types tend to augment these other techniques, not interfere with them.

There are several roles that

There are several roles that a type system fulfills:

  • Sanity checking
  • Guiding program design
  • Documentation & readability
  • Performance
  • Program inference
  • Module contracts

In a dynamically typed language, in order to do sanity checking you have to write a test case and run it. The kind of sanity checking that most academic work on type systems focuses on is that you write a program, you run the compiler, and then it tells you about silly mistakes that you made. This is sanity checking after you write your code, but before you write any tests, so you get it a little earlier and more easily than with a dynamically typed language. In a modern IDE like Visual Studio you get sanity checking as soon as you type in your code: the IDE will immediately display a red underline in places where you made a silly mistake. This is even earlier and easier than having to run the compiler on your (complete) program. Then you have sanity checking before you even write your code: type-based autocompletion. That way you avoid silly mistakes as you are writing your code, so yet earlier and easier.

The second role of type systems is guiding program design. If you program in a dynamically typed language, you are running a kind of search process in your mind to find the program that does what you want within the space of all possible programs. In a statically typed programming language you are running that search process in a constrained space: only the space of programs that are well typed according to the type system. This can be a disadvantage, because there is a mental burden of having to stay in that constrained space; you have to keep the type system in mind all the time. On the other hand, the programs that fall inside this constrained space are generally much more likely to be well designed than programs that fall outside of it. The type system guides you toward well designed programs.

The third role is documentation & readability. The type of a function gives you a very concise and machine checked summary of what it does. It doesn't give its complete behavior of course, but after you know the type signature it's much easier to understand the actual code that implements the function. The types give you information about what kind of data the parameters are, instead of having to infer this from the context or from a comment.
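
For instance, the signature of the standard Data.List.partition function already says a lot before you read any of its code:

    -- The signature alone tells you the function is pure, works for any
    -- element type, and (by parametricity) can only return elements drawn
    -- from the list it was given.
    partition :: (a -> Bool) -> [a] -> ([a], [a])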

The fourth role is performance. With a static type system you do not need to do run-time checks on every operation. This is a good performance improvement, but the big wins come from a different source: representation selection. In languages like C++, C# and Rust, types allow you to specify the representation of values. Instead of having pointers and type tags everywhere, you can have value types that are represented directly. Instead of an array of pointers to boxed complex numbers, you directly have an array of complex numbers (pairs of floating-point numbers). To a lesser extent languages like Haskell also allow you to use types to specify the representation of values, but because no monomorphization is done this is not nearly as effective as in those other languages.
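
As a rough illustration of what representation selection means (a Haskell sketch, since GHC offers a limited per-field version of this via strict, UNPACKed fields; the type names are invented):

    -- Boxed representation: each value is a constructor holding pointers to
    -- two separately heap-allocated Doubles.
    data PointBoxed = PointBoxed Double Double

    -- Unpacked representation: the two Doubles are stored inline in the
    -- constructor, with no per-field pointer indirection.
    data PointUnpacked = PointUnpacked {-# UNPACK #-} !Double
                                       {-# UNPACK #-} !Double

Languages that monomorphize, like C++ or Rust, make this kind of layout choice pervasively rather than field by field.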

The fifth role is program inference. Because the types of a program constrain the program itself, you can sometimes infer parts of the program if they are constrained enough. Type classes in Haskell and implicit parameters in Scala are examples of this. A subexpression can be inferred from the surrounding code's types.
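
A small Haskell sketch of that idea: the call site below never names the implementation it uses; the compiler assembles it from the inferred type (the class and instances are made up for illustration):

    class Pretty a where
      pretty :: a -> String

    instance Pretty Bool where
      pretty b = if b then "yes" else "no"

    instance Pretty a => Pretty [a] where
      pretty xs = "[" ++ concatMap pretty xs ++ "]"

    -- The 'Pretty [Bool]' implementation is built by the compiler from the
    -- two instances above, driven purely by the type of the argument.
    example :: String
    example = pretty [True, False]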

The sixth role is module contracts. When you write a module, you have some expectations about what kind of values should be passed in to the functions of that module, and what kind of values those functions will return. Types give you a declarative language to specify this information, with the added benefit that it's automatically checked.

code completion \not\in sanity checking

Code completion isn't always, or even often, about sanity checking. It is more like an exploration aid: given the types I've chosen in my program so far, what could I do next? Type annotations often get in the way of this as they have pre-commitment issues, but an inferred system can avoid that.

You're certainly right that

You're certainly right that it's not about sanity checking; it is about sanity. Code completion shows you all the sane options. That I called this "sanity checking before you even write the code" is unfortunate. I tried to show the progression from full program type checking when running the compiler -> immediate type checking feedback in the IDE -> code completion. Each case is a user interface to the type system, but giving feedback at a different stage: after writing the expression -> while writing the expression -> before writing the expression, respectively.

Let me adjust my position a

Let me adjust my position a bit: programmers might make mistakes in previous parts of our program that, in the interest of aggressively preserving well-typedness, obscure choices that we would really like to make later. The problem of code completion is then more stochastic: the programmer probably wants to make a choice that preserves good typing, but they sometimes want to make a choice that breaks this property, with the intent of fixing other constraints in the program that were already expressed. In code completion, this problem manifests as obscuring choices because they are insane, assuming that what was written already is what the programmer wants.

Avoiding strict sanity and instead "capturing programmer intent" can help us (language designers) avoid pre-commitment problems: the written program only probably represents programmer intent. Safety/sanity properties are only one aspect to use in figuring out what the programmer's intent is.

Dynamic languages sometimes "win" by avoiding pre-commitment problems: the compiler doesn't stop me from writing insane code, and this might allow me to converge on the intended solution more quickly. I suspect this is a significant reason in why many programmers do not like static typing, even if it can provide better feedback overall.

Programmers might make

Programmers might make mistakes in previous parts of our program that, in the interest of aggressively preserving well-typedness, obscure choices that we would really like to make later.

I have been thinking on this recently, and am toying with a type system to help address it. Many type inference algorithms, and program checking algorithms generally, produce awful error messages by arbitrarily assuming that earlier deductions are true. If later deductions are inconsistent with established “fact”, they are assumed to be in error.

What you really want is for the propagation of type information to be directionless, and simply report inconsistencies wherever they manifest. Then a user can add directionality by marking an inconsistency as intentional—e.g., I mean to change this variable from a float to an int, so please mark inconsistent uses as errors.

Dynamic languages sometimes "win" by avoiding pre-commitment problems: the compiler doesn't stop me from writing insane code, and this might allow me to converge on the intended solution more quickly.

Some inconsistencies are genuinely fatal (“You are trying to call a nonexistent function” or “I don’t know what representation you want me to use for this variable”), but most can be postponed until runtime while you figure out what you want your program to mean. In Haskell I spell this: error "TODO: zonk the skolems".

We might try for a type

We might try for a type debugger instead. No one has done a decent one yet, but if you give up the ideology that error messages can be perfect, then it seems that the next best thing is to help programmers diagnose type errors interactively.

Type error slicers

There has been a significant amount of work on type error slicing and similar techniques over the last 20 years, including some nice tools. But they haven't caught on in practice. My impression is that most programmers do not feel a dire need for such tools once they have sufficient experience with a given type system and its error messages.

Slicer usability

The tool has to be easy enough to use to provide a net benefit. I was thinking more along the lines of a system based on interactive navigation instead of just presenting an entire slice up front.

Combining this with dynamic debugging might be better: rather than choking on type errors statically (they can still be informational), choke on them when they arise dynamically, and allow for conventional debugging at that point. The value then is in using static type information in the debugging process, and typing can still provide useful static semantic feedback.

ML and Haskell

I'm not convinced by your view of the "type-system error" state of affairs. Hindley-Milner's tool of the trade, unification, is bidirectional (directionless?) and can report all locations participating in the unsatisfiable unification constraint. I think it is folklore knowledge that directional systems may help track information flow more precisely and refine error messages.
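
To make that concrete, here is a minimal sketch of first-order unification over simple types (the names are invented for illustration): both sides are treated symmetrically, and a failure simply reports the conflicting pair of types, with no built-in notion of which side is "wrong".

    data Ty = TVar String | TCon String [Ty]
      deriving (Eq, Show)

    type Subst = [(String, Ty)]

    apply :: Subst -> Ty -> Ty
    apply s (TVar a)    = maybe (TVar a) (apply s) (lookup a s)
    apply s (TCon c ts) = TCon c (map (apply s) ts)

    -- Solve a list of equality constraints, accumulating a substitution.
    solve :: Subst -> [(Ty, Ty)] -> Either (Ty, Ty) Subst
    solve s [] = Right s
    solve s ((t1, t2) : rest) =
      case (apply s t1, apply s t2) of
        (TVar a, t) -> bindVar s a t rest
        (t, TVar a) -> bindVar s a t rest
        (TCon f ts, TCon g us)
          | f == g && length ts == length us -> solve s (zip ts us ++ rest)
          | otherwise                        -> Left (TCon f ts, TCon g us)

    -- Bind a type variable (with an occurs check) and keep solving.
    bindVar :: Subst -> String -> Ty -> [(Ty, Ty)] -> Either (Ty, Ty) Subst
    bindVar s a t rest
      | t == TVar a = solve s rest
      | occurs a t  = Left (TVar a, t)
      | otherwise   = solve ((a, t) : s) rest

    occurs :: String -> Ty -> Bool
    occurs a (TVar b)    = a == b
    occurs a (TCon _ ts) = any (occurs a) ts

For example, solve [] [(TCon "Pair" [TVar "a", TVar "a"], TCon "Pair" [TCon "Int" [], TCon "Bool" []])] fails with the pair (Int, Bool), without blaming either side of the original equation.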

Reporting all program points that participate in the conflict can still be overwhelming to the user. You should have a look at the nice work by Zhang and Myers accepted for next POPL on using Bayesian inference techniques to reliably detect the "likely guilty" in this set, Toward General Diagnosis of Static Errors (technical report).

One thing that, as you say, is inflexible with HM inference is let-generalization. Basically the code is typechecked let-by-let, and after processing each let it is generalized and considered to be perfect -- inconsistencies are detected between the currently-checked code and the *interface* of past declarations, but this won't tell you that the implementation of past declarations may be wrong. This is related to the "principal types vs principal typings" debate that is (rightly) dear to intersection-type worshipers. Full-fledged intersection-type inference doesn't sacralize the definitions in the environment in that way.

Re. the error "foo" way to leave program holes: GHC got support for delayed reporting of type unification errors (which lets you run partially-correct programs up to the point where the type environment becomes inconsistent -- not exactly equivalent to dynamic checking, but it still lets you try stuff), and is getting first-class support for "holes" that let you leave code unfinished in a more explicit manner.
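
A small sketch of how that looks in practice with GHC's -fdefer-type-errors flag (the module contents are invented for illustration):

    {-# OPTIONS_GHC -fdefer-type-errors #-}
    -- The ill-typed definition below is reported only as a warning at compile
    -- time; the error is raised at runtime, and only if 'oops' is evaluated.
    module Main where

    oops :: Int
    oops = "this is not an Int"

    main :: IO ()
    main = putStrLn "everything that never touches the bad definition still runs"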

Not only are there many roles...

Not only are there many roles, but multiple roles are often fulfilled by a single brief annotation. We expect, e.g., "float" to cause some static checking, then to affect method dispatch, not to mention determining physical data layout. It's kind of great that they can, but it must make type theory difficult. It also makes it difficult to understand what types are, since, despite the simplicity of typical annotations, there's a lot going on.

Dynamic languages still have

Dynamic languages still have types (unless they are really untyped), you just don't have to write them down a lot and they are incredibly precise (control flow sensitive...obviously).

I suppose it depends what you mean by "precise" and "types". A workable definition of type system from Types and Programming Languages:

A type system is a tractable syntactic method for proving the absence of certain program behaviours by classifying phrases according to the kinds of values they compute.

Dynamic languages can prove the existential presence of bad program behaviours, but not their absence.

You could very liberally read the above and say the "type tags" are proof that values aren't used improperly, but really, they're only proof of memory safety, i.e. reading a value tagged X as a value Y could very well lead to a segfault. That's the only program behaviour dynamic languages exclude by classification. This proof of memory safety is very precise (control-flow sensitive, as you say), but not very interesting as a type, since these languages generally don't even allow the expression of unsafe memory operations.

Dynamic typing is incredibly

Dynamic typing is incredibly valuable, especially in the absence of static typing. It tells you if your current operation is safe or not, with the ability to provide feedback to the programmer that something is wrong at a specific point vs. silently continuing and causing corruption that is harder to debug.

If we are going with an experiential, humanistic definition of type system, then dynamic type systems must be included as a very necessary and interesting point between untyped and statically typed (and indeed, many static type systems are complemented with dynamic typing).

Dynamic typing is incredibly

Dynamic typing is incredibly valuable, especially in the absence of static typing. It tells you if your current operation is safe or not, with the ability to provide feedback to the programmer that something is wrong at a specific point vs. silently continuing and causing corruption that is harder to debug.

I'd express it as: safe languages tell you if your current operation is safe or not, with the ability to provide feedback to the programmer that something is wrong at a specific point vs. silently continuing and causing corruption that is harder to debug.

Whether a language is safe doesn't depend on whether the language is dynamically checked or statically typed. It's an orthogonal property of a language declaring whether a computation ever gets stuck. You can have a dynamically checked variant of C, for instance (and it's been done).

Safety is quite different

Safety is quite different from debuggability. When debugging and diagnosing, we want to know that we have a problem as soon as possible, while safety preserves certain invariants that may or may not be related to debugging. Limiting dynamic type checking to maintaining that memory is used safely is nice, but one could definitely go farther; e.g. not implicitly creating a quack method for an object that doesn't already have one.

Type theory is foundational math

John Shutt:

My own thinking on the difficulties of types: here.

Without going into detail about your blog post, let me remark that you are already mistaken about its very premise: that mathematics has no native interest in types. If you take the last 4 decades of constructive mathematics into account then intuitionistic type theory, not set theory, is the very foundation of much of what is going on there. See e.g. this blog post for a math discussion, which argues for its superior abstraction capabilities. And the close relation to computer science and computational theory is particularly intriguing, though unfortunately not well understood by the majority of language inventors.

As seen on TV

I said mathematics has no native interest in types as experienced in programming. A major difficulty in this area is that the different positions are based on nearly-orthogonal conceptual frameworks, so that advocates of any one position end up saying things that are irrelevant to advocates of other positions (as with Mitch Wand's 1998 result and my vau calculus). Thus, folks from one school of thought find the writings of other schools to be missing the point.

I tend to be highly suspicious of reasoning that isn't stated succinctly, a tendency not much mitigated by awareness that I too struggle to express my reasoning succinctly. Lack of simple explanation doesn't invalidate an idea, but it doesn't validate it either. Admittedly, looking over the un-short blog post you refer to, anyone who says "on the gripping hand" can't be all bad.

The intriguing bit

Well, the intriguing bit I alluded to is that much of intuitionistic type theory in fact deals with types exactly as experienced in programming, and mathematical constructions require them for many of the same reasons. Typed lambda calculus, polymorphism, type abstraction, inductive types, etc., which are the bread and butter of contemporary programming language theory, were all invented by constructivist mathematicians.

Although I do get the

Although I do get the impression you're missing the point I was making in my blog post, granted, the type-foundational approach to mathematics is, well, mathematics. Though I rather expect the experience of it can't be the same in a mathematical context as it is in a programming context. The mathematicians can abstract away from whether they're using typed foundations in a way that programmers can't, exactly because of... whatever-it-is that distinguishes mathematics from the tarpit that types become, for some reason or other, in programming.

Homotopy Type Theory

The recent development of Homotopy Type Theory seems a very promising unification of math, programming, and type theory.

(Hmm. Why hasn't HoTT hit LtU's front page yet? It's all over twitter. Bob Harper has been teaching a class [video] on the subject for months now.)

Now that you mention it,

Now that you mention it, one would think it would be front-page material. (Would it surprise you to hear I'm skeptical about it? No, I didn't think it would. But being skeptical isn't remotely like opposing discussing it; I'm with you in surprise it hasn't made more of a splash here.)

There are reasons to be skeptical

Dependent type theory suffers direly from a lack of modularity, as Bob admits in his class's 8th lecture. It's something I've run up against during proofs in Coq, and it shows up in his proofs even in the context of HoTT. Regardless, HoTT is an improvement in proof-relevant mathematics. A colleague trying to formalize a new logical relation in Coq has run into the need to make his language's typing judgements proof-relevant, which throws him head-first into needing HoTT to tackle his problems.

Equality has always been a big pain in type theory. It led to ETT, OTT, heterogeneous equality, blah blah blah. HoTT makes good sense of equality, but it's still in a strange place. I'm keeping my eyes on the subject to see how it further develops.

Not modular?

I'm not sure what you mean by "lack of modularity" for dependent type theory, and I didn't find where to look for Bob Harper's "8th lecture" (of which course?). Would you care to explain a bit what you mean?

I assume he means the 8th

I assume he means the 8th lecture if you follow the link (and click on Video Recordings). It's from the "Advanced Topics in Programming Languages" course.

Further speculating about the meaning of "lack of modularity" in dependent type theory, I'd guess he's talking about the way that definitions are used in reductions. When you expose definitions, the system can work out intensional equality and you have to do less swizzling (manual insertion of isomorphisms) than if you hide definitions behind abstract interfaces.

It is non-modular in the

It is non-modular in the sense that to use a function in a proof, you need the code of that function, and the way your proof is structured depends on the code of that function.

How strange!

I independently started going down this route just today while brainstorming about alternate bases for type systems. I thought “Why not topology?”, started investigating, and it turned out somebody had beaten me to the punch.

Right now they seem to be focused on proving homotopy-theoretic results in existing type systems (Coq), whereas I’m more interested in a language with a topological type system. It looks like fertile ground for new work, either way.

You know the way it works --

You know the way it works -- someone needs to post it!

Are you implementing it in a static or dynamic language?

I wonder if that makes a difference..

No difference between "static" and "dynamic" with type "Any"

There is no fundamental difference between "static" and "dynamic" languages that have type "Any."

the differences between the

I take that to mean, the differences between the two are things you don't think are important.

You can imagine, and indeed

You can imagine, and indeed there are examples of, statically typed languages that are completely unsafe type-wise dynamically. C and C++ without RTTI are good examples.

?

?

I don't agree with your thesis at all

That static languages stop evolving. I mean, in the last couple of years, OCaml has added GADTs and first-class modules, among other things. And Haskell has added kind polymorphism, Data Parallel Haskell, Safe Haskell, etc.

What static languages tend to be is more well-defined up front. But I think this is primarily due to the simple fact that to do a statically typed language, you need to know the theory, and thus you're more likely to know more languages, and to know what works and what doesn't. And I'm not just talking about other functional languages (although that's important), I'm also talking about more mainstream, popular languages. Phil Wadler and the Simons are a lot more familiar with OO languages than Matz is with functional languages. When they jump into real terra incognita they're as likely as anyone to thrash about. Scala is a wonderful example of this.

There are also simply the demands of a language in use. Developers don't like it when code they wrote years ago suddenly breaks because of a compiler upgrade. Even dynamic languages aren't immune to this; witness the uptake problems that both Perl 6 and Python 3 are having. Once you get a sizable developer community, radical changes become much harder. This has nothing to do with typing.