Dynamic vs. Static Typing - A Pattern-Based Analysis

In some cases, static typing is more error-prone than dynamic typing. Some statically typed languages force you to manually emulate dynamic typing in order to do "The Right Thing".

The writer takes on Java. He mentions three problems:

  • Statically Checked Implementation of Interfaces
  • Statically Checked Exceptions
  • Checking Feature Availability


Checked exceptions

Are there any other languages with checked exceptions? I mean, can I concede that Java is Trouble and just move on to points 1 & 3?

Haskell's type classes

Haskell's type classes allow empty method bodies, thereby avoiding the author's first problem.
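For comparison, Java later gained a similar escape hatch: since Java 8, interfaces may carry default methods, so implementors aren't forced to supply every body. A minimal sketch (ChangeListener and Quiet are made-up names, not from the article):

```java
// An interface with an empty default body, analogous to a Haskell type
// class providing a default method: implementors may override it, but
// are not required to.
interface ChangeListener {
    default void listChanged() { /* deliberately empty default */ }
}

// Compiles without overriding anything; calling listChanged() is a no-op.
class Quiet implements ChangeListener { }
```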

[Edit: I misunderstood the third problem]

Third problem

The third problem seems to be about the danger of shared mutable state, rather than about static typing.

Must...avoid...thread...

I was trying to avoid commenting on that document, but you pulled me in. :(

My take on 1) is that an interface is a contract. If you aren't willing to fulfill the contract, don't implement the interface. What good is an interface if a user tries to call one of the functions and nothing happens? It's really about enforcement vs. freedom.

Static checking forces you to be precise and complete. That's because the compiler won't miss mistakes like a human will. That's a good thing. You are always free to throw NotImplemented, and you *don't* need to be specific in the error message, because in Java you can just print out the stack trace at the catch point.

Dynamic typing proponents don't like compilers telling them what to do. They are closet anarchists, of a kind. ;) That's fine, but the compiler is only trying to help. That's why I always convert warnings to errors. If the compiler doesn't like something, there's usually a pretty good reason why.
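The "fail loudly rather than silently do nothing" option might look like this in Java, using the standard UnsupportedOperationException (the ReadOnlyIterator wrapper here is a hypothetical illustration, not code from the article):

```java
import java.util.Arrays;
import java.util.Iterator;

// A partial implementation that honours the interface contract by
// failing loudly instead of doing nothing when asked to mutate.
class ReadOnlyIterator<T> implements Iterator<T> {
    private final Iterator<T> inner;

    ReadOnlyIterator(Iterator<T> inner) { this.inner = inner; }

    public boolean hasNext() { return inner.hasNext(); }
    public T next() { return inner.next(); }

    // Not willing to fulfill this part of the contract: throw, so the
    // caller gets a stack trace at the catch point instead of silence.
    public void remove() {
        throw new UnsupportedOperationException("remove() not supported");
    }
}
```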

On 2), I agree that statically checked exceptions seemed like a good idea at the time, but turned out to be a less-than-useful mechanism. I don't think that's a condemnation of static typing in general. I think it just highlights the fact that adding exceptions to types increases coupling, thereby decreasing modularity (think generics).

On 3), that's what reflection is for. It's unfortunate that C++ is so far behind on that, but Java does just fine. I'm not worried about threading considerations because threading is obviously a very immature and clumsy mechanism to begin with. It's very hard to do it right, and even when you do, there's little guarantee that you're better off than if you hadn't used threads at all. Threading is too intrusive to be really useful yet.
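A minimal sketch of that kind of runtime feature check via Java reflection (FeatureCheck and supports are illustrative names I've invented, not a standard API):

```java
import java.lang.reflect.Method;

// Checking feature availability at runtime via reflection, instead of
// relying on a statically declared interface.
class FeatureCheck {
    static boolean supports(Object obj, String methodName, Class<?>... argTypes) {
        try {
            // getMethod throws NoSuchMethodException if no matching
            // public method exists on the object's class.
            obj.getClass().getMethod(methodName, argTypes);
            return true;
        } catch (NoSuchMethodException e) {
            return false;
        }
    }
}
```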

My feeling is that if this is the best criticism that can be levelled against static typing, then static typing is sitting pretty well. But then, I think that the middle road is best anyway.

Idle thought

Dynamic typing proponents don't like compilers telling them what to do.

I wonder if people would get on better with static typing if it were tied to the version control software, rather than a compiler? I.e., if it was "cvs commit" that told you to fix your type errors. That way you can avoid the "nagging" compiler while doing your own work, but the rest of the team knows that committed code at least type-checks.

Doubt it

The early warnings/errors are not the real disadvantage. The point is more that static typechecks with existing languages force you to structure programs to suit the particular, single static type system that the language in question uses.

To take an example, say a program reads in some XML, at runtime, and ends up with a list of values of different types. Dynamic languages have no issue here; in a language like Java, you need an array of Object, which doesn't deal well with primitive types like int and boolean, hinting at the type system issues to come; and in languages like Haskell/ML, you need to create a union type to handle all possible types of values you expect to have in the list. (The OCaml variant just announced would handle this better.)

Now, say that based on runtime inspection, you determine that this list corresponds to a certain structure with fields of certain types. In a dynamic language, having made that determination, you can simply start treating the list as though it were that structure, if you choose to, and from then on use appropriately-named accessor functions which assume the types of the fields are correct. You're no longer treating it as a heterogeneous list, you're treating it as a typed structure.

If you write your code properly (and test it), you can "know" that the value is correctly typed, after having performed the appropriate dynamic check, even if a compiler would have difficulty statically proving that.

To do something like this in a statically checked language, at the very least you would need to perform some kind of cast, if the language allows that. In this example, you could get away with such a cast in C/C++ if set up correctly, but you can't cast e.g. a list or array to some other object type in Java, and you can forget about doing something like this in the major statically checked functional languages - you'd need to copy the values into a newly allocated, typed structure.
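In Java, the closest you can get is still a copy with explicit downcasts after the runtime check; a sketch under invented names (the Person structure and its field layout are purely illustrative):

```java
import java.util.List;

// After runtime inspection of a heterogeneous List<Object>, the fields
// must still be extracted with explicit downcasts into a freshly
// allocated typed structure; the compiler cannot prove the types.
class Person {
    final String name;
    final int age;

    Person(List<Object> fields) {
        this.name = (String) fields.get(0);
        this.age  = (Integer) fields.get(1); // primitive int needs the wrapper
    }
}
```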

Now, you may not want to depend on these kinds of shortcuts in a major mission-critical system. However, one attraction of dynamic languages is that they support quick prototyping well, which is exactly where these sorts of shortcuts can be useful.

Combined with testing, assertions, contracts, and other good practices (including a more disciplined use of types than the one I've just described), you can scale up from a prototype to a more disciplined system without changing to a statically-checked language. As is often the case with the mainstream languages, experience with the popular dynamic languages may tend to make people doubt that this is possible, but in languages like Scheme, Oz and Erlang, it is certainly possible.

These kinds of issues can't be meaningfully boiled down to soundbites like "Dynamic typing proponents don't like compilers telling them what to do". (Or, for that matter, to variations on Greenspun's law. ;)

A more viable compromise between static and dynamic, with the teamwork aspect in mind, would be to perform static checks at module boundaries, the sort of thing described in e.g. Soft Interfaces: Typing Scheme at the Module Level. That way, the checks take place where your teammates really care about them: at the module interface which they're going to use.

A more viable compromise betw

A more viable compromise between static and dynamic, with the teamwork aspect in mind, would be to perform static checks at module boundaries, the sort of thing described in e.g. Soft Interfaces: Typing Scheme at the Module Level. That way, the checks take place where your teammates really care about them: at the module interface which they're going to use.

Well, that changes the where, but I was talking about when the checks get run. You could perform these modules checks at check-in vs compile time too. Or rather, allow dynamically-typed development of a module at an interactive shell, and do type-checks as part of check-in, and leave the compiling to somebody else entirely... (e.g., a nightly automated build system).

Regarding heterogeneous lists, isn't this what HList is supposed to solve? It seemed fairly involved when I looked at it a while ago, though. Does anyone else ever just want to be able to use a type-class where they can use a type? e.g. to say "this list can hold values of any type, so long as they support X interface".
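In Java terms, "a list of any values that support interface X" is just ordinary subtype polymorphism with List<X>: the elements keep their own classes, but only the interface is visible through the list. A small illustration (Describable, Circle, and Square are made-up names):

```java
// Elements of different classes live in one list, constrained only by
// the interface they share.
interface Describable { String describe(); }

class Circle implements Describable {
    public String describe() { return "circle"; }
}

class Square implements Describable {
    public String describe() { return "square"; }
}
```

In Haskell, getting the same effect typically requires existential types or an encoding like HList, since type-class constraints alone don't make a heterogeneous list type.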

Delaying doesn't help

Well, that changes the where, but I was talking about when the checks get run. You could perform these modules checks at check-in vs compile time too.

Sure, but the more basic point is that when the checks run doesn't matter if they prevent you from writing certain kinds of code.

Not the point

Well, all tests prevent you from writing certain kinds of code. That's surely the point. I agree though that free-form tests can be more precise about what is a bad program compared to most current static type systems.

But anyway, I wasn't trying to argue that static typing is superior to dynamic typing, just that having to use static typing might be more palatable to dynamic typing fans (like me) if we only had to bother with it when we are fairly sure the code runs well anyway. That would avoid the "yes, I know that bit doesn't work yet, but I'll get to that later..." sort of reaction I'll admit to sometimes having to static type checkers.

Of course, if you have a particularly dynamic piece of code that falls outside of what the static type system allows, then this strategy is going to be no comfort at all. But most code I write in dynamic languages certainly doesn't fall into this category. Given that most developers will almost certainly have to write code in statically checked languages at some point (by choice, even), then any technique for making that experience more pleasing seems worth it, even if it is just superficial.

Another example

Here's a nice bit of Java code I came across (here):
    if ((value != null) && !returnedClass().isAssignableFrom(value.getClass())) {
        throw new IllegalArgumentException("Received value is not a [" +
                returnedClass().getName() + "] but [" + value.getClass() + "]");
    }

This is from a piece of code that's trying very, very hard to avoid the need for the definition of boilerplate classes when persisting classes representing enumeration types to a SQL database.

This code is actually doing a kind of dynamic typechecking, illustrating the following generalization of Greenspun's 10th Law: "any sufficiently complicated program in a statically-typechecked language contains an ad-hoc, informally-specified, bug-ridden, slow implementation of a dynamically-checked language." ;)

Heh...

But what you won't be able to present is an example of a dynamically typed language performing a compile-time type check, no matter how hard you try. Aye, there's the rub!

Already done

Kevin Millikin has already provided an example of a dynamically typed language performing a compile-time type check, in this comment.

If you're seriously interested in this subject, there's lots of useful info in that entire three-part series of threads. There's an index to all three threads here.

I don't think so

Kevin Millikin has already provided an example of a dynamically typed language performing a compile-time type check, in this comment.

I don't think his example demonstrates anything like that, because it is comparing apples (a statically typed language without staged computation) with oranges (a dynamically checked language with staged computation). Obviously, as soon as you drop staged computation into the game the static/dynamic or compile-time/run-time distinction becomes pretty meaningless.

Who gets to pick the fruit?

How come the statically type-checked language gets to do type checking in a separate computational stage, but the dynamically checked language is only allowed to use a single computational stage? Seems like an unfair and unrealistic comparison.

Obviously, as soon as you drop staged computation into the game the static/dynamic or compile-time/run-time distinction becomes pretty meaningless.

Yes, that's an aspect of the point I was getting at. I get tired of the usual perspective, driven by vanilla mainstream languages, of all-static vs. all-dynamic; in a discussion at that level, I like to point out alternatives, particularly when given a setup like the one in this thread.

Greenspun dualism

How come the statically type-checked language gets to do type checking in a separate computational stage, but the dynamically checked language is only allowed to use a single computational stage?

I'm not saying that. Of course, it's always good to point out and compare alternatives, but for a fair comparison you have to be honest about what you are comparing. AFAICS, the example demonstrates nothing more than the following: to do static checking, you can use either static typing or staged computation.

Of course, it is not surprising that a different "static" mechanism can achieve static checks as well. But you need some such mechanism, a vanilla dynamically checked language does not have that expressivity. Hence static typing clearly adds a new level of expressive power (as does staged computation).

Considering the relative complexity of the two options, I'd think that there is no doubt that static typing has the upper hand. What Kevin did showed traces of a sort of dual to Greenspun's law: he was demonstrating an ad-hoc, informally-specified, bug-ridden, slow implementation of half of a type system ;).