## On the Revival of Dynamic Languages

The programming languages of today are stuck in a deep rut that has developed over the past 50 years... We argue the need for a new class of dynamic languages... features such as dynamic first-class namespaces, explicit meta-models, optional, pluggable type systems, and incremental compilation of running software systems.

## Comment viewing options

### OT request

Isaac, why not become an editor and post items of this nature directly to the home page?

### Lot to disagree with

While this paper is interesting, it also contains lots of claims to disagree with. Naturally, the worst part is about types. Here is my favorite:

Furthermore, static type systems can produce a false sense of security. Run-time type-checks (i.e. "downcasts") in Java, for example, can hide a host of type-errors.

I.e., they criticise holes in some type system - which are due to the presence of "dynamic typing" features like downcasts! - and then take that as an argument against static typing ... and for more dynamic typing (proposing to have dynamic languages with optional static typing). How self-contradictory is that?

That they seem to completely miss the point of type systems becomes apparent with the following principle they propose:

A type system should never be used to affect the operational semantics of a programming language.

Of course, this makes it almost completely uninteresting, in particular in a dynamic language! When you have a typed dynamic language, then types will appear in the dynamic semantics (as opposed to mere tags), and surely you want these types to affect the operational semantics. Otherwise why bother with them?
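For concreteness, one existing design that does follow the disputed principle is Python's optional annotations: they are recorded on the function but never consulted during evaluation, and checking is left to external tools such as mypy. A minimal sketch:

```python
# Annotations are inert at run time: an external checker may reject
# the second call, but the interpreter never looks at the types.
def double(x: int) -> int:
    return x * 2

ok = double(21)       # well-typed: 42
odd = double("ab")    # ill-typed, yet it runs: str * 2 repeats the string
```

Whether such erased types are worth having is exactly the question at issue here; by construction they cannot affect the dynamic semantics.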

Of course, the usual confusion of typing vs. tagging is not missing:

those who believe that dynamic languages are evil because they are untyped (not true - they are dynamically typed)

Some other examples:

In Section 4 we explore the theme of type systems for dynamic languages (an apparent non sequitur)

It is not at all apparent to me.

Static type systems, however, are the enemy of change.

No need to comment on this...

Even languages that sport state-of-the-art type systems, such as (different variants of) ML or Haskell, struggle with overloading, polymorphism and reflection in the context of type-safety [Mac93].

Extensions of polymorphism (such as first-class poly-morphism in ML [Rus00] or in Haskell [Jon97]) exist but do not always allow for separate compilation (unless the type-preservation rules are relaxed) [KS04].

Not always? So? Note that [KS04] is about C#, and problems with first-class polymorphism there are likely to be rooted in peculiarities of the language rather than in the feature as such (which the other references prove to be unproblematic). This is an obvious attempt at tendentious spin.

A much more reasonable, and interesting alternative, is to envisage a dynamic programming language into which various non-standard type systems could be plugged.

It is folklore that it is at least very hard to retrofit a useful type system onto a language that has not been designed with it in mind. It seems very naive to believe that externally "plugging" arbitrary type systems onto a language isn't even harder.

And so on. I'll stop here. There are some worthwhile points being made in the paper, but it shows where the authors are coming from. Altogether, I'm not even sure what their point is:

In this paper we take the standpoint that:
Inherently static languages will always pose an obstacle to the effective realization of real applications with essentially dynamic requirements.

... which seems like a rather vacuous statement to me.

### It's existing code, not language design.

It is folklore that it is at least very hard to retrofit a useful type system onto a language that has not been designed with it in mind.

Usually the language design itself poses few problems to grafting in a type system. The real problem comes from existing code written in the language, code that hasn't been written to work inside your type system.

So... unless you come up with a good mechanism that restricts pluggable type systems in such a way that they all interoperate nicely, I'm a bit skeptical of the idea.

I make that statement with a small caveat. Say you are writing some module that does something complicated, and you are paranoid about a certain type of error creeping into that code. In some cases, an internal typing discipline could prevent this... but I still see numerous problems with external dependencies of your module.

### claims to disagree with... my favorite

proposing to have dynamic languages with optional static typing

Yes, if a static language with runtime type checks "can produce a false sense of security" we might surely expect the same from a dynamic language with optional static typing. (Of course, "false sense of security" is pure FUD - we might as well say testing gives a false sense of security.)

(However, the complaint that some 'static language' doesn't type check until runtime isn't self-contradictory - it's a complaint that we paid for static type checks but didn't get them, with the 'dynamic language' we didn't pay for static type checks and didn't expect to get them.)

Maybe they meant, we want all these to be supported in the same language before we're interested?

### What would be the application

What would be the application of reflection in a statically typed language? Surely we know all about the type of a value at compile time? Surely it only even makes sense when objects have identity, and it becomes a way to have the programmer manually deal with types dynamically?
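One stock answer, sketched here in Python for brevity: reflection earns its keep on values whose types are only fixed at run time, such as deserialized data or plugins, even in programs that are otherwise statically typed. The helper below is hypothetical:

```python
import json

def total(payload: str) -> float:
    """Sum whatever shape of JSON arrives; the shape is a run-time fact."""
    value = json.loads(payload)
    if isinstance(value, list):        # run-time type dispatch
        return float(sum(value))
    if isinstance(value, dict):
        return float(sum(value.values()))
    return float(value)
```

A static type for `value` can only say "some JSON value"; picking a branch requires asking the value what it is.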

### Well Stated

Isaac Gouy: However, the complaint that some 'static language' doesn't type check until runtime isn't self-contradictory - it's a complaint that we paid for static type checks but didn't get them, with the 'dynamic language' we didn't pay for static type checks and didn't expect to get them.

I think this is a particularly good formulation, and gets very close to where dynamic language and static language fans differ. My reaction to this is frank incredulity that anyone would even advance the argument that because a language's type system doesn't capture all possible nonsensical terms at compile time, a better idea is to capture zero nonsensical terms at compile time!

### cost

frank incredulity

All things being equal - your incredulity would be fully justified.

From some viewpoints all things are not equal - as iirc you've previously stated - it's mostly about perceived cost-benefit.

### True Enough...

So let me ask: what benefit do I get from having zero constraints checked at compile time that justify whatever costs you perceive in having some, but admittedly never all, constraints checked at compile time? In fairness, I should warn you in advance that the stock dynamic typing answer, "faster development," won't fly with me given my direct experience with O'Caml and Tuareg Mode in EMACS.

Tuareg mode?

Tuareg Mode

### Indeed!

What I especially like about Tuareg Mode is the ability to type some source code into a buffer that I will eventually compile, but feed it to a running toplevel as if I were just using the toplevel interactively. There's also an option you can configure to have the cursor move to the next expression afterwards, so you can type N expressions, go to the top of the buffer, then C-c C-e, C-c C-e, C-c C-e... There's also the support for O'Caml's excellent time-travel debugger. Of course, you also can indeed switch over to the running toplevel and noodle around with stuff you don't want to commit to a file.

In other words, Tuareg Mode is roughly akin to SLIME for Common Lisp (which I also use), or OPI for Oz (which I also use). The combination of Tuareg Mode and O'Caml offers the combined benefits of exploratory, interactive development in the same fashion as the dynamic languages and static typing in the fashion of Hindley-Milner type inferred languages.

It gets even better: there's a very nice make-like tool, OMake, that has an option, -P, that causes it to run forever, monitoring the filesystem with either FAM or kqueue, and to redo the build whenever it sees a change. So if you do this, and use Tuareg Mode to try things out in a buffer attached to a file being monitored, when you press C-x s, your program will automagically be rebuilt. How sweet is that?

### huh?

The question doesn't make sense to me: how could we use some benefit from zero checks to justify the cost of checks?

Let's say that language S magically provides dynamic type checks equivalent to language D, and language S also provides static type checks. I think the question is - do the perceived benefits of these static type checks outweigh the perceived costs, to an interesting degree.

And then of course, we complicate the whole comparison if we can't do the magic that provides equivalent dynamic type checks.

### Hopeful Clarification

Isaac Gouy: The question doesn't make sense to me: how could we use some benefit from zero checks to justify the cost of checks?

Sorry, I don't think I was clear: I was asking what the benefit of having no compile-time checks on terms is relative to having some, but not all, nonsense terms rejected at compile time. Generally, the dynamic typing answer to this is "you can develop more rapidly in a dynamic language," which isn't necessarily true. Sometimes the counter is that there remain terms that are correct but not well-typed in any given type system, say HM. While that's true in principle, it doesn't tend to come up in practice; that is, HM system users seem to have no more difficulty coming up with ways to express what they wish to express than users of dynamic languages do. So it's an honest question: if I can develop interactively in an HM language and don't run into any more expressive issues in my HM language than I do in, say, Lisp or Scheme, what's the dynamic language benefit that justifies sacrificing the static typing benefits?

Isaac: Let's say that language S magically provides dynamic type checks equivalent to language D, and language S also provides static type checks. I think the question is - do the perceived benefits of these static type checks outweigh the perceived costs, to an interesting degree.

There isn't really enough information here to even be able to guess. Is S type-inferred? What's the type system, i.e. how expressive is it?

But since you asked, OK: the more I think about it, and work through Practical Common Lisp using SBCL and SLIME to brush up on my Lisp, and continue working with O'Caml using the tutorials and Tuareg Mode, the more convinced I am that, in objective terms, static typing has irreproducible benefits over dynamic typing, with the only respect in which that isn't clear being reflection/introspection, which probably just means that we need to figure out what the metatheory behind not doing type erasure is. That probably means Monadic Reflection in one form or another.

### particular comparisons

There isn't really enough information here to even be able to guess

Which is to say, we aren't in a position to make universal statements about static languages and dynamic languages but we can start to ask questions about a particular static language and a particular dynamic language.

### Not Sure I Agree...

Isaac Gouy: Which is to say, we aren't in a position to make universal statements about static languages and dynamic languages but we can start to ask questions about a particular static language and a particular dynamic language.

But that's simply not true. I can make the universal statement that static languages guarantee me the absence of certain error classes at compile time that dynamic languages don't, otherwise, why bother? All I was saying was that I can't begin to guess what your "perceived costs" of static typing are. I can tell you precisely what the actual costs of dynamic typing relative to static typing are if we stipulate, as you did, that they were functionally equivalent apart from being in different phases: given the static language, the customer will never see them; given the dynamic language, the customer might.

I've attempted to anticipate your line of argument by discussing two common ones: rapid development and correct terms that lack a type in any given type system that we care to reach consensus on, with HM being merely one of several options. I can argue from direct experience of approximately five years in O'Caml and approximately 20 in Common Lisp or Scheme that those arguments are entirely anecdotal. O'Caml development is as rapid as, if not more rapid than, Common Lisp or Scheme development. The correct-but-untypable term argument ends up being an educational one; a novice ML programmer, especially one coming from the Lisp tradition as I did, will likely run into this issue at first, but once they become familiar with the approach to the problem domain that works in the ML family, they'll equally likely, to a first approximation, never encounter the issue again, and when they do, they'll have the appropriate intellectual tools to contend with it.

Frankly, even if static typing didn't confer a range of guarantees of preventing classes of errors from reaching the customer, the research on it would still be worthwhile, as it provides a set of principled underpinnings for understanding programming languages that prevents people from saying such things as "We aren't in a position to make universal statements about static languages and dynamic languages." Just because dynamic languages don't have such a set of defining principles, it doesn't mean that they don't exist!

### out of context

But that's simply not true. I can make the universal statement that static languages guarantee me the absence of certain error classes at compile time that dynamic languages don't, otherwise, why bother?

iirc the context was perceived cost-benefit.

As you said "There isn't really enough information here to even be able to guess".

those arguments are entirely anecdotal

Pardon me, but isn't your testimony about O'Caml and Lisp also anecdotal?

### Things I've noticed about dynamic typing people

[Paul Snively]
Generally, the dynamic typing answer to this is "you can develop more rapidly in a dynamic language," which isn't necessarily true.

Totally. It is usually stated without evidence that static typing creates more work for the programmer. One of the pro-dynamic arguments I hear a lot is the "latent typing" argument. Java makes you create interfaces while Python lets you simply use the same method name across different classes. What they're really saying is that they'd prefer structural typing over nominal typing. It's not a static vs. dynamic thing (though structural type declarations can involve a lot of typing).

The tranditional benefits of static typing are already well known (performance, compile-time checks). It would be useful to collect a list of ways in which static typing makes programmer faster/easier for a programmer. For example, C++-style overloaded methods are nice to have. Refactoring and code completion are more robust. Types are also a good form of documentation (and I've noticed that library documentation in dynamically-typed languages usually includes precise type information in the comments).

### Fewer test cases

for the same quality code. Oh, and faster debugging.

This is an important point- many dynamic language advocates seem to assume that unit testing is only useful, or only done, in dynamic languages. Everyone benefits from test cases. This is, in fact, one valid way to look at static typing- it's a huge suite of automated unit tests.

So, the question then becomes how many test cases you need to write to achieve the same degree of quality confidence in the code. With static checking, whole suites of test cases are not needed, as the type checker "tests" the code for you.

The other thing is speed of debugging. The nice thing about static type checkers is that 99% of the time they pinpoint the real location of the problem for you. Dynamic typing more often than not in my experience hides the problem, just to have it show up quite some time later. A classic example of this: accidentally putting a string into a list of integers. With static type checking, the error shows up when you try to put the string into the list of integers. With dynamic typing, the error doesn't show up until you are pulling integers out of the list, and hit the element that isn't an integer. With static type checking, I'm given the file and line the real error is at. Thirty seconds later, I'm back in business. With dynamic type checking, I'm now having to backtrack and try to find where the string is being put into the list, as the minutes pass by.
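The list-of-integers scenario is easy to reproduce. A Python sketch of how the failure site drifts away from the actual bug:

```python
nums = [1, 2, 3]
nums.append("four")            # the real bug: no complaint at insertion

total, error_index = 0, None
for i, n in enumerate(nums):
    try:
        total += n             # TypeError surfaces only here,
    except TypeError:          # three elements after the mistake
        error_index = i
        break
```

The traceback points at the consuming loop, not at the `append` that caused the problem; a static checker would flag the `append` itself.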

### A question

Given everything you say about testing and debugging (which I agree with), why is it that we so often hear "dynamic languages" being touted as offering "faster development"? Do they really offer faster development, or does it just feel that way for some reason?

### My Experience...

I know you didn't ask me :-) but my experience is that interactive development gives us a very short path from concept to an implementation. Test-Driven Development and Agile Development methodologies such as Extreme Programming, which come to us from the Smalltalk world, tell us that then what you do is robustify the implementation until it meets the goals set forth in user stories and is reasonably, but not overly, future-proof. So I suspect that it does feel shorter because of that initial lack of distance between concept and initial implementation. Nor do I think that feeling is invalid.

What I do question, however, is whether there isn't as much or more time in coming up with these near-100%-coverage test environments as there would be in developing in a good type-inferred static language, which, as I've already noted, can also be extremely interactive, and developing the somewhat smaller test harness for the results. The question in my mind is reinforced when I read, e.g. comments about how "mocking up and testing code is a lot easier without type checking getting in the way. I should think getting 100% coverage would be easier in a more dynamic language." Apart from being a searing self-indictment just on grounds of laziness, it clearly reveals that the programmer in question has never worked with a type-inferred language. And so it goes: generalizations about "static typing" are made on the basis of primitive type systems and/or systems lacking inference (I have to be careful here; one certainly can't justifiably call Java 1.5's type system "primitive", but the lack of inference hurts it tremendously).

But someone can rightly argue that my single example of an agile statically-typed language is O'Caml. Why is that? O'Caml certainly demonstrates that a type-inferred language with a relatively rich type system needn't be batch compiled, so why don't more such languages offer nice interactive environments? You might think that it's because such features hinder optimal code generation, but O'Caml remains faster than current GHC, all in all, as well as SML/NJ. Only MLton produces competitive code, but MLton is a much slower compiler and doesn't offer a toplevel at all: you have to use a different SML, e.g. Moscow ML, for that.

The argument seems similar to the one in which people used to think that the interactive nature of Lisp meant that it was interpreted—long past the point at which all commercial implementations, and several free ones, were compiling to native code. So there's clearly more work for most statically-typed language developers to do, and that work might be more cultural than technical. Maybe the big lesson of dynamic languages is that interactive development has real, tangible, measurable productivity benefits that so far, only O'Caml can reasonably claim to have adopted, and it's going to take some kind of revolution for the rest of the statically-typed world to catch up. That's an argument that I could wholeheartedly agree with.

### Thanks!

Thank you for your insightful response Paul.

My own experience with dynamically typed languages is pretty limited, so it's good to get an opinion from someone who does have experience in that area (even if they weren't the target of the original question ;-)

### Feeling

Of course they feel faster because you get to execute your code even before you have to fix anything you messed up during your previous edit. Along with their typically enormous libraries it may easily feel like you are flying...

Even with good testing we are still used to seeing the occasional runtime errors that we need to fix in both kinds of languages. Therefore, when we see a function trying to pull a string out of an integer, we are used to blame ourselves: "Oh, I passed the wrong variable as the argument" instead of "oh dear byte code compiler, why haven't you warned me of this mismatch" because we know that there is no static type checking to begin with.

So, (a plausible theory is that) every minute spent debugging a dynamically typed program goes in the "programmer made an error" column while every minute spent setting up interfaces, fixing compilation errors, etc. goes in the "evil statically checked language is slowing me down; dwim please, get out of my way, etc. ".

This is not to say one is better (in terms of productivity) than the other in the aggregate, but it is easy to form false impressions going by the feeling of it.

### faster development than what?

Do they really offer faster development...

Faster development than what?

Faster development than C?

Faster development than Java?

Faster development than OCaml?

### You tell me

You tell me. I'm not the one claiming that development in a "dynamic language" is "faster" than development in a "static language". I'm not saying you are making that claim either. But I seem to have heard it from many proponents of "dynamic languages" as a rationale for preferring them to statically typed languages. When they say "faster", to what are they comparing? And are the purported speed benefits really due to the dynamic nature of the language, or are they simply the result of a combination of extensive libraries and a shorter interval between concept and initial coding?

I seem to have heard it

So when you really do hear it, ask which "dynamic language" is faster to develop with than which "static language" for what kind of project!
Oh, and ask to see the project velocity measurements :-)

### Several reasons

First of all, most of the static/dynamic comparisons I've seen (including, it seems, this one) have been comparisons of Java/C# to Python/Ruby, effectively. And I wouldn't be at all surprised if Python/Ruby were faster/easier to develop in than Java/C#.

It's when you throw Ocaml/Haskell into the mix that things become more interesting. Especially as you start realizing a lot of the productivity advantages of Python/Ruby have nothing whatsoever to do with their dynamic typing. For example, having an interpreted environment for the language encourages a more interactive design- write a little code, play with it a little to make sure it works, and then move on to the next piece of code. While a compiled language encourages you to develop a larger hunk of code before compiling and testing. The more iterative the implementation is, the more likely you are to find the bugs sooner, and the easier it is to find each bug (finding a bug in 20 lines of code 100 times being easier than finding 100 simultaneous, interacting bugs in 2000 lines of code).

Java/C#/C++ are generally compiled languages, while Python and Ruby are more generally interpreted languages. But this has absolutely nothing to do with their type systems- it is perfectly plausible to write an interpreter for a strictly compile-time type-checked language, as the Ocaml toplevel interpreter shows.

There are other aspects that make a language easier to develop in- large available libraries, for example. UI code generators (like Visual Studio and Glade) also help with the illusion of productivity- I can generate megabytes of code in only a few hours with these tools without even touching a keyboard. But these too are not an effect of the type system. And it's often hard to separate the effectiveness of the language and development environment as a whole from the effectiveness of a specific type system.

### Missouri

The traditional benefits of static typing are already well known (performance, compile-time checks)

Given your enthusiasm for evidence, you can provide evidence that compile-time checks result in fewer errors in the delivered software - can't you?

### Great!

Funny, I've been looking for something like this paper with little luck the last few days.

I'd hate to see any discussion on this degrade into another static vs. dynamic type war though, since I believe there are quite a few nuggets of gold in there.

What interests me about this paper are the ideas on reflection and how to limit or control it. I'd also like to find more information in the area of living or evolving software and various ways to approach the problem of multiple developers working on a dynamic system.

Looking at Squeak, it seems the problem is just as much a social or political problem as it is a technical one. Linux and other file-based systems have loosely agreed upon (but well-understood and psychologically ingrained) standards for sharing, maintaining, and distributing software. It's second nature to organize software as files in subdirectories that are part of a larger (but self-contained) package. The open source CL community does this and I believe Genera did the same. Unfortunately, if you treat software at all multidimensionally (Squeak) then the scheme breaks down. When you can modify program behavior through a GUI, for example, there really is no corresponding source code file. I suppose generating the equivalent of a "patch" is possible, but would be meaningless. There would need to be a way to distinguish between architecture and mere user configuration.

Unless you wish to relegate "architecture" changes to a source code file representation and every other dimension to the user's instance of that architecture. That might be one way of looking at it.

Any pointers would be greatly appreciated.

First, let me wholeheartedly agree that reflection is where the static and dynamic typing communities can most profitably meet. There's a lot of outstanding reading on my pile in that regard, to be honest.

With respect to the upgrade problem, which is a big one for any live system, static or dynamic, I'm going to pull out one of my favorite poster children again :-) and refer to Acute. Most of the attention that has been paid to Acute has been for its efforts in type-safe distributed programming, but this time I'd like to refer to the points on the page that say "dynamic loading and controlled rebinding to local resources" and "versions and version constraints, integrated with type identity." This represents Peter Sewell and colleagues' current effort towards addressing the upgrade problem in a type-safe way, and is as fascinating, and promising, as the focus on type-safety in distributed programming. Of course, they're related, which is why you find them both in the Acute project.

But that obviously doesn't address the question regarding dynamic languages. Units: Cool Modules for HOT Languages remains my primary source regarding module systems for either static or dynamic languages, and I just re-read it to see if they discuss versioning. They don't, but they do discuss dynamic linking, so hopefully there's an open door for future work on the relationship between dynamic linking and versioning.

### Good stuff there, thanks. I b

Good stuff there, thanks. I briefly read through the Acute paper and one of the first things that caught my eye was the versioning. Interesting stuff going on with the hashing to arrive at versions, version constraints, etc. Plenty of issues that I never noticed or thought about as well.

The "Units" paper is interesting as well. I'll have to read both thoroughly when I get some more time. Just the stuff I'm looking for, though :-)

### Magpie?

From the David Ungar position statement...

Objects are live, code always runs.
There is no difference between compile-time and run-time. In fact, we use every trick in our book to conceal compilation. The user makes a change, commits it, and in a fraction of a second, the change takes effect. Some systems, such as Magpie, commit changes after every keystroke. This immediate response mirrors the real world.

Can anyone point me in the direction of Magpie? All of my searching has so far come up short.

Here. I sort of remember this now. Having worked at Tektronix ca. the time this was going on, I don't really remember much more than that trying to type on this thing was no more (or less) annoying than trying to type on any other syntax directed editor.

The project started out at Tek Labs and some of the folks migrated out to an adjunct of the MDP group at Walker Road to try to merge this into a CASE product (with an SASD-based visual editor, too!). After Tektronix decided they weren't interested in doing CASE tools after all, a lot of these folks got absorbed into Mentor Graphics in their pre-V8.0 ramp-up when MG thought they'd do combined hardware/firmware design tools. Most of them were subsequently laid off early in the first part of the V8.0 semi-collapse when MG decided it was better to finish V8.0 than to try to expand into other areas.

### Tek Labs

Thanks for the link. Being a former Tek employee myself (High Frequency Design), I'm always impressed at the stuff that was dreamed up (but seldom commercialized) by the folks in Building 50.

### Re: Tek Labs

Yes. A lot of interesting things did come out of there - the commercialization of LCD screens (Planar Systems, InFocus, and a couple other LCD-based companies are still here in the PDX area), a good Smalltalk implementation that was the basis of the 4066 AI workstation (it also had a Franz-based Lisp subsystem with Flavors), one of the first 1GHz production-grade transistor processes (actually done in Bldg. 59), mask-overlap chip production processes, CRC cards, and - of course - Ward Cunningham and Kent Beck. All-in-all, a phenomenal hotbed of research activity for a (relatively) small company. But those were different days - when CEOs were allowed to fund research. It is sad that our industry finds it difficult to hand over even a pittance to basic research these days.

### Tek Labs

You forgot to mention the 11K scope, which ran Smalltalk for its user interface ("real time" code was done in C, IIRC). Unfortunately, Tek Labs is no more; though Building 50 still stands.

I'm sitting there now as I type... in C++. :|

Back to work...

### Loose coupling

I just wonder whether objects are the correct form of abstraction for the larger concept of software components. Although most OOP literature tries to paint objects as loosely coupled components that speak to each other via messaging, that's not the way they seem to be implemented. Objects generally have to be written either in a common language or in a common intermediate representation. And the messaging that takes place between them is usually just a function call aimed at a specific module via some VTable, or some form of delegation/forwarding.

Perhaps instead of componentizing at the level of class, what we need is a better definition of the line of demarcation where one application starts and another ends - and of the interface between applications and subsystems. Loose coupling betwixt components is the name of the game, and I don't see objects as necessarily being the last word on plug-and-play components.
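One way to read "loose coupling betwixt components": components that share only a serialized message format, not a language-level interface. The component and field names below are hypothetical:

```python
import json

def billing_component(message: str) -> str:
    """Receives and returns JSON text; callers need no shared classes."""
    request = json.loads(message)
    if request.get("op") == "charge":
        reply = {"ok": True, "amount": request["amount"]}
    else:
        reply = {"ok": False, "error": "unknown op"}
    return json.dumps(reply)
```

A caller in any language couples only to the message schema, which is the kind of demarcation line between subsystems suggested above, in contrast to a VTable-dispatched call.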

Anyhow, just a thought. :-)

### Objects have failed.

Many software algorithms are coupled with more than one type. The mainstream object-oriented approach is that one type is coupled with one set of subroutines. Although this works in some cases, it does not work in others, and that's usually the reason why there is such strong coupling between objects.

### universal loose-coupling?

Do you mean - objects have failed to provide universal loose-coupling?

And if that is what you mean, what programming style is "better" (results in more "loose-coupling")?

### My Opinion...

...is that inclusion polymorphism is wildly overused in practice and results in far too tight coupling in most cases. In C++, for example, the only coupling you have that's tighter than public inheritance is class friendship.

By way of contrast, given a structural type system and an ML-style module system, any implementation that implements a given interface will do, and then, with functors, i.e. modules parameterized by other modules, you can achieve even looser coupling.
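For illustration, a minimal OCaml sketch of that point - all module names here are invented: the client is a functor that depends only on an interface, so any structurally conforming implementation can be plugged in.

```ocaml
(* Sketch only: STORE, Counter, ListStore are invented names.
   The client (Counter) depends solely on the STORE signature. *)
module type STORE = sig
  type t
  val empty : t
  val add : int -> t -> t
  val size : t -> int
end

(* Parameterized by the interface, not by any concrete implementation. *)
module Counter (S : STORE) = struct
  let count xs =
    S.size (List.fold_left (fun s x -> S.add x s) S.empty xs)
end

(* Any module that structurally matches STORE will do - no inheritance. *)
module ListStore = struct
  type t = int list
  let empty = []
  let add x s = x :: s
  let size = List.length
end

module C = Counter (ListStore)
```

Swapping in a differently implemented store requires no change to `Counter`; the only coupling is through the signature.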

Another mechanism that can be useful in some contexts are what are sometimes referred to as "virtual types." An excellent explanation, with several implementations in O'Caml, can be found in On the (un)reality of virtual types. The whole section entitled "The necessary ingredients for decoupling classes" is a must-read, IMHO.

### Static typing does not prohibit change

The paper claims that static typing is the enemy of change. My opinion is that it is not. In fact, the type system is irrelevant to the change-ability of a system. An example of this is J2EE applications: although Java has a static type system, J2EE applications can be updated without currently active clients being interrupted. The current clients will continue to see the previous app, while clients that log in after the new version is uploaded will use the new one.

The problem is that systems are not a live collection of objects that can easily be swapped in/out. In J2EE, for example, the whole web app needs to be uploaded, instead of simply replacing a few classes. The situation is even worse for desktop applications: they need to be recompiled, instead of simply replacing one class with another.

### I don't think that's what they're talking about

I believe that the usual reason for calling static typing the enemy of change is that changing the code involves respecifying the types of things. When you refactor Java/C++ you end up having to change a lot of class names and so on, whereas in something like Python you just change the variable name. Haskell-style type inference more or less makes that a moot issue, but hoping for the dynamic-language fanbois to understand complex type theory is obviously a stretch. </rant>

### type inferencing is hardly co

type inferencing is hardly complex type theory, at least to the user
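Right - from the user's side it mostly means omitting annotations. A tiny OCaml illustration (hypothetical functions): no types are written anywhere, and a change to one definition is re-propagated through its callers by the compiler.

```ocaml
(* No annotations anywhere: all types are inferred. *)
let double x = x + x             (* inferred: int -> int *)
let quad x = double (double x)   (* callers are re-inferred for free *)
```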

### Java is only weakly typed

although Java has a static type system, J2EE applications can be updated without currently active clients be interrupted.

But note that this relies on the fact that at its core, Java is a dynamically checked language. Its type system breaks down as soon as you use dynamic features - it cannot guarantee absence of runtime errors like MethodNotUnderstood. So I don't think that Java is a valid counter-example - it's not properly typed.

### guarantee absence of runtime errors

What language would be needed for a valid counter-example - one that guaranteed absence of divide-by-zero? (Where do you suggest we draw the line?)

(Incidentally are you talking about Java 5, or Java 1.4?)

### There are several possibilities...

The simplest is to just change the type of division:

```
unsafe_div : number -> number -> number
safe_div   : number -> number -> number option
```

Now, you reflect the fact that division may fail in the type of the function, and so you need to handle all possible divide-by-zero errors before you can claim an expression is of type number.
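As a concrete sketch of those two signatures in OCaml (taking `number` to be `float` here):

```ocaml
(* unsafe_div keeps the old type; safe_div moves the failure case
   into the result type, so a caller must handle it to get a number. *)
let unsafe_div x y = x /. y   (* y = 0. silently yields infinity *)

let safe_div x y =
  if y = 0.0 then None
  else Some (x /. y)

(* A caller cannot forget the zero case: the match is forced. *)
let describe x y =
  match safe_div x y with
  | Some q -> Printf.sprintf "%g" q
  | None   -> "undefined"
```

The compiler rejects any use of `safe_div`'s result as a plain number until the `None` case is dealt with.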

Secondly, you can augment the type system with dependent types, and then you can write a division function that's only applicable to nonzero divisors.
(Elements of the type Nonzero(d) are witnesses, or proofs, of the fact that d is not zero.)

```
dependent_div : number -> d : number -> Nonzero(d) -> number
```


This is a rather more heavyweight solution, but you can scale it to state nearly arbitrary conditions.

In fact, theorem-proving environments like Coq, HOL or Twelf use a type language of dependent types as their mathematical metalanguage, and use lambda-terms as the representations of proofs.

### would it be possible to have

would it be possible to have

```
not_zero : number -> nznumber option
safe_div : number -> nznumber -> number
```

a little clumsy but straightforward and fully static
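Here is how that pair of signatures might look in OCaml, with `nznumber` as an abstract type whose only way in is `not_zero` (names follow the post; `number` is taken to be `float`):

```ocaml
(* nznumber is abstract: outside this module the only way to obtain
   one is through not_zero, so safe_div can never see a zero divisor. *)
module Nz : sig
  type nznumber
  val not_zero : float -> nznumber option
  val safe_div : float -> nznumber -> float
end = struct
  type nznumber = float
  let not_zero d = if d = 0.0 then None else Some d
  let safe_div n d = n /. d
end
```

The zero check happens once, at the `not_zero` boundary; after that, the non-zero fact travels with the value - fully statically, as the poster says.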

### Yeah, that would work, too.

In general, you can encode a surprisingly large number of static constraints in a polymorphic type system. The question is whether the additional work is worth the effort....

### The other question...

The question is whether the additional work is worth the effort...

Yes that is a really important question :-)

The other (ignorant) question is can we have the "surprisingly large number of static constraints" in the same polymorphic type system at the same time. I always seem to read about a specific kind of constraint, and then wonder about bad feature interaction.

### This is Actually a Big Chunk of the Value of Type Theory

See Figure 1 of Building Extensible Compilers in a Formal Framework, which is basically a graph, with the horizontal axis representing stages in compilation of the polymorphic lambda calculus (type inference, type checking, CPS conversion, closure conversion, and code generation) and the vertical axis representing various extensions to the language (from most basic to most complex going up: boolean values, arithmetic, integers, arrays, recursive functions). The graph clarifies that not all extensions affect all stages, e.g. recursive functions don't affect CPS conversion, but are the only extension that affects closure conversion. So here we have a principled mechanism for adding features to a language while avoiding bad interactions.

Another take on this issue that unifies the traditional Action Semantics, or Structural Operational Semantics, approach to defining a language, with monads, which do an excellent job of isolating things like side-effects, is A Modular Monadic Action Semantics, previously discussed on LtU here, in which there's a priceless quote from Graydon Hoare about monads: "Monads are one of those things like pointers or closures. You bang your head against them for a year or two, then one day your brain's resistance to the concept just breaks, and in its new broken state they make perfect sense."

### Where to draw the line

What language would be needed for a valid counter-example - one that guaranteed absence of divide-by-zero?

No, just any sane language with a type system that does not pretend to give guarantees it cannot keep. No practical language I know of lulls you into thinking that there cannot be division by 0. Java, however, does something like that.

When you use some object of class C, and call a method m on it, then the compiler checks that C has m. Unfortunately, this provides no guarantee that this is still the case at runtime, because the JVM performs no structural checks on classes, only names are compared.

In other words, when Java's type system derives that something has class C, and class C has method m, then this analysis is just a good guess and a hope. The actual situation at runtime need not bear any relation to it.

(Where do you suggest we draw the line?)

I draw the line where the type system keeps what it promises - however little that may be.

(Incidentally are you talking about Java 5, or Java 1.4?)

Both. As soon as you do real open-world programming, where you load or receive stuff at runtime that is not known statically (e.g. via RMI etc), all bets are off regarding meaningfulness of static types, in all versions of Java.

Of course, in Java 5 it got worse: you can now even break the type system in a closed program without using any dynamic features.

### Castigating Java

Unfortunately, this provides no guarantee that this is still the case at runtime, because the JVM performs no structural checks on classes, only names are compared.

You mean the JVM checks that each actual parameter has a class that is assignable to a class of the corresponding formal parameter (which of course involves not only names but also classloaders), right?

### Yes

Yes, I probably should have mentioned class loaders.

### To be precise

The third edition of JLS (read Java 5) changed the process of Runtime Evaluation of Method Invocation a bit (e.g., they added "erasure" to the picture).

Excuse me for being pedantic or bringing the JLS into the discussion, but I just wanted to demonstrate how the absence of a formal model makes it very difficult to discuss even the most fundamental parts of the PL (such as invocation).

### Static types can help change

The paper claims that static typing is the enemy of change. My opinion is that it is not. In fact, the type system is irrelevant to change-ability of a system.

Oh, on the contrary, it is relevant--and helpful. When I change a function signature in a statically typed language, the compiler finds the callers for me. That's a huge benefit.

### assuming you need to find the callers

And the knee-jerk response would be - we only need to find the callers because we need to fix-up source-code constraints, which aren't present in a dynamically typed language.

### You Might...

... if, e.g., the function's arity has changed, or a parameter has changed from being a number to being a string or symbol and your language doesn't do conversions automatically. Surely you aren't actually suggesting that all dynamic languages need in order to "just work" is the name of the function to call.

### I'm suggesting

I'm suggesting that one of the reasons we are stuck on this merry-go-round is - we exaggerate the benefits of doing things one way, and ignore the extent to which the same task can be accomplished by other means.

Sometimes I wonder if this dispute continues simply because no-one has demonstrated an interesting cost-benefit difference.

### In Economic Terms...

...I absolutely agree. As I've written elsewhere, however, even being the free-market nuthatch that I am, I can't help but wonder if the market doesn't pursue tools like O'Caml because it doesn't know that they're possible, let alone that they exist. Certainly my experience in my current C++ job suggests a severe lack of familiarity with the alternatives.

But again, this isn't a forum in which we tend to talk about what the market decides, although there have been more postings that I need to follow up on regarding issues in software economics. I don't really want it to become that, otherwise I'd be reading the C++ User's Journal and/or Java Developer. I think several of us are making a good faith effort to talk through what our best understanding of the pluses and minuses of a variety of computer science and software engineering tools are while avoiding the bandwagon effects that tend to distort these valuation processes in other contexts.

With that said, I remain curious as to your answer to my question. Smalltalk has its refactoring browser. Lisp development environments are famous for their ability to find definitions (meta-point) and call sites. Why bother if it isn't necessary? I actually think you just made a better case for static typing than dynamic typing!

### groan

Why bother if it isn't necessary?

As you surely know, Smalltalk and Lisp provide tools for browsing code, and being able to browse code effectively helps with many different tasks (not just this one).

And of course the point of the refactoring browser is to automate (which is why it's caught on in the Java world).

(I'm struggling to make this question fit with "making a good faith effort to talk through what our best understanding...")

### Fair Enough...

Isaac Gouy: (I'm struggling to make this question fit with "making a good faith effort to talk through what our best understanding...")

I have exactly the same reaction to your entire "Assuming that it's necessary..." post. Here's the quote; please let me know if I've missed relevant context, etc. etc. etc. Emphasis is mine:

Isaac: We only need to find the callers because we need to fix-up source-code constraints, which aren't present in a dynamically typed language.

You still haven't answered how it is that dynamically-typed languages are magically immune to the need to change call-sites. That's because they aren't; arity changes and/or argument-type changes affect dynamic languages just as surely as they do static languages. I was trying to give you the benefit of the doubt as to what you actually meant, but suggesting that the refactoring browsers don't exist, at least in part, to address the need to change call-sites in dynamic languages is disingenuous at best, and dishonest at worst.

### Ouch

Why do discussions on this topic always end up becoming so heated? Personally, I don't think either of you is trying to be dishonest. In fact, I think we've stumbled upon a nice issue.

Refactoring browsers and the like show that (some) static analysis is possible in d.t. languages (so s.t. isn't required for static analysis). They also show that such static analysis can be useful and help productivity (so d.t. doesn't remove the need for static analysis).

Obviously the real question is whether the static properties in each of these cases (type systems vs. refactoring browsers and their ilk) are the same, what the costs/benefits are, etc.

There are reasons why having the language dictate the static guarantees is important (i.e., tools know less when the language guarantees less), but I am not sure how important this is as regards refactoring browsers and similar tools.

### *sigh*

Ehud Lamm: Why do discussions on this topic always end up becoming so heated?

Obviously, I can't speak for Isaac, but I've noticed that my ire tends to get raised when specific questions come up and are either side-stepped or responded to with inaccuracies that a single Google search, nevermind any direct experience with the technologies in question, easily overcomes. In other words, there comes a point at which it becomes very difficult to maintain my belief that my interlocutor (and I really do mean generally; I'm not pointing a finger at Isaac here) is indeed being intellectually honest.

And to me, that's what this is all about. As I wrote a short while ago, if all I were interested in were the socioeconomic factors in tool selection, I'd be reading the C++ Users' Journal and/or Java Developer, both of which are fine publications for their respective markets. I don't know what else to say. I've written repeatedly that there's a lot of stuff worth discussing around reflection and introspection, and I just tried, in my last post, to suggest that maybe a major source of difficulty is that there aren't enough counterexamples in statically-typed languages to overcome the myth that interactive development is the sole province of the dynamic language. So it seems to me that there's plenty of constructive ground to cover once we dispel some persistent myths, which seem to revolve around relative productivity and expressiveness.

Having said that, nonsense like the "Static Typing Where Possible, Dynamic Typing When Needed" paper, at least as it stands, and "We only need to find the callers because we need to fix-up source-code constraints, which aren't present in a dynamically typed language" aren't going to get a let's-make-nice response from me. They're not matters of differing opinion; they're wrong, and the people who write them are in need of further education, that's all.

On Edit: OK, I just re-read the thread from the top, and I see way too many instances of what seem like good interactions ending up at leaves, and more heat than light ending up in neverending nodes, no doubt prompting Ehud's question, with me as the most voluminous contributor. I think that's a clear sign that I need to take a break for a while. I'll see you all later, and thanks for your patience.

### What's with the name calling?

missed relevant context

"And the knee-jerk response would be -"

Surely you aren't actually suggesting that all dynamic languages need in order to "just work" is the name of the function to call.

Sorry, no question mark, seemed like a rhetorical question to me. No, I'm not suggesting anything so blatantly stupid.

suggesting that the refactoring browsers don't exist, at least in part, to address the need to change call-sites in dynamic languages is disingenuous as best, and dishonest at worst

I said nothing of the kind!

The refactoring browser exists to automate all kinds of source code changes (including call-site changes).

My feeling is that you have chosen to interpret whatever ambiguity exists in a statement as the worst kind of mendacious spin, and then attack the writer for something they didn't affirmatively claim.

### ...and multi-user development

Eclipse finds callers for me and changes them. Of course, this is hardly news for Smalltalk users.

But when Eclipse refactoring goes awry (or when merging changes with CVS), static type checking helps a lot.

### Does it help to say that the

Does it help to say that the only difference between dynamic and static is partial evaluation? In other words, we decide that certain variables, such as configuration information, will not change and "compile them in" instead of re-evaluating those variables every time we run the code. The real problem is deciding what we can "compile in" and what to leave open. If things are changing or may need to change, it can be tough to push the compile button. However, if we really understand a system, it should be possible to express all the possibilities in a dynamic language and write in the one we want to use.

### I don't think that partial ev

I don't think that partial evaluation fully encapsulates the transition from dynamic to static. Performance is only part of the issue with static type safety; part of it is just knowing that certain things won't go wrong- ever. With dynamic typing you are "safe" at some level, but there are no guarantees that (in those same ways) the application won't fail.
Obviously statically typed programs can still be incorrect, but there are fewer ways in which they can be incorrect.
This is the essence of what dynamic typing, by definition, cannot do: provide static guarantees. The value of this is debatable. Sometimes dynamic guarantees are enough. As long as a program is stopped from trashing my data, that's good enough for me. But sometimes we a) want performance and b) want a certain level of reliability. Both of these can be increased (but of course, not guaranteed) through static checking.
Also, compilation and quick turnaround times aren't enemies, as any Lisper will attest.

### cannot do - don't try to do

By definition, it's what dynamically type checked languages don't try to do - static type checking.

Every time I see this dynamically-checked-language versus statically-checked-language dichotomy, I have to remind myself that the statically checked language may also use dynamic type checks.

### Yep

That's true, unless it employs type erasure across the board. But at least some static typing aficionados, like myself, seriously question the value of this, not so much because we want things like unsafe covariance (something that, contrary to "Static Typing Where Possible, Dynamic Typing When Needed" I emphatically don't want), but because it's difficult to conceive of a solution to the question of reflection/introspection in a fully type-erased system.

### Yep

unless it employs

Hence may also use.

question the value of this

this being "type erasure"?

I want ... unsafe covariance

Section 2.5 seemed to translate as: given a choice between unsafe covariance, contagious parametric types, and variance parameters with wildcards, we choose unsafe covariance.

fully type-erased system

Is C# fully type-erased?

### More Good Questions

Isaac Gouy: Hence may also use.

I may have misunderstood you not to be referring to type erasure in your "may also use," in which case I apologize.

Isaac: this being "type erasure"?

Yes, that's what I meant; thanks for clarifying!

Isaac: Section 2.5 seemed to translate as given a choice between - unsafe covariance, contagious parametric types, variance parameters with wildcards - we choose unsafe covariance.

I guess I'm not satisfied that those choices are:

• comprehensive
• mutually exclusive
• necessarily present in any given type system

Or that, even if I'm wrong on all three points, unsafe covariance is the most preferable choice in most cases. Of course, that argument won't be at all compelling to dynamic typing fans. :-)

Isaac: Is C# fully type-erased?

I don't know; I'm not a C# programmer. However, Java obviously isn't, so I doubt that C# is.

### Excellent Post!

I especially like the last comment: "Migration isn't a real issue. Java Generics are horribly broken, you idiot." Clearly written by someone who's not had any Java jobs in a company of more than about fifteen people...

Show this one to all the people who don't think that adding or changing a type system in an existing language is hard to do.

### Not to be picky, I note that

Not to be picky, I note that I never mentioned language. I'm not under the illusion that static & dynamic typing are mutually exclusive. I was just saying that there is something that dynamic typing can't do that static typing can.

### acknowledged

I never mentioned language

Sorry if I gave the impression you had - I was just taking the opportunity to express something that I think causes much confusion for folk who don't read LtU :-)