Weak vs. strong typing

So... how big can dynamically-typed systems get? Do static type systems really matter? I'd love to know the answer to this.

We have some big systems at Amazon, and most of them seem to use strong static typing; at least the ones I know about do. Is that a requirement, or could we just as well have done them in Perl, Ruby, Lisp, Smalltalk?

Read more about it here.

Serious or satire?

Maybe I'm just tired, but I can't work out whether that article is serious or satire.

*sigh*

It's just the "C++'s/Java's type systems are lame, therefore static typing is lame" argument again, despite the article mentioning Haskell and O'Caml by name and including a brief aside about type inference. Unfortunately, it's very hard to get past that point with anyone who hasn't written a sizable program in something like Haskell or O'Caml, and I imagine that's because of the kernel of truth in the point that such languages have a greater up-front learning curve than the Pythons and the Rubies of this world.

He gets close to asking the right question when he observes that we static fiends write fewer unit tests. The right follow-up would be: and did the code work when it was written, or didn't it? Were the missing unit tests needed, or weren't they? In a relatively lame language like Java or C++, I have no trouble believing that you need about as many tests as you do in Python or Ruby. In a more powerful language such as Haskell or O'Caml, it's been my experience that this isn't the case, but it would be nice to hear about it from someone without my rather obvious, and admitted, biases. :-)

I agree

Disclaimer: I'm not an advocate of either static or dynamic typing. I think they both have their merits: I simply haven't made up my mind yet. ;-)

From the article:

Static type systems reduce runtime errors and increase data integrity, so it's easy to be lulled into thinking that if your system passes compilation and runs, then it's basically bug-free. People who use languages with strong static type systems seem to do a lot less unit testing. Maybe it's my imagination, though.

What??? If I'm writing Scheme code, and I have to write 20% more unit tests than Paul Snively (who is using one of his favorites: OCaml), because I have to check all of my types, that makes dynamically typed languages better how?

I can use that sort of reasoning as well: A language with manual memory management is better because it makes the programmer more aware of his/her program's memory usage. Let's forget about the reduced memory errors and data integrity! A C programmer has to be more careful and write more test cases than one using a modern garbage collected language, just to make sure he/she frees all allocated memory. According to recent surveys of computer scientists, being careful and writing test cases are good! That must mean that C is a much better language than any garbage collected language, as far as memory management is concerned.

Granted, I can sort of see his point, but as someone who has programmed in static languages as well as dynamic ones (and I'm fairly inexperienced, read: I haven't had tons of painful experiences debugging code), I can say that I gain no real sense of security from my code compiling, and I think unit testing, regardless of the type system, is a must in programming.

Yes, the proper line of reasoning would have been: static type systems allow the user to write fewer test cases, because many of the test cases that would need to be written in a dynamically typed language are already written for you. There is no need to resort to weird reasoning to turn that into a benefit of dynamic type systems.

State

I am curious: have you tried languages that bar (or make exceedingly inconvenient) implicit state? I find that the bulk of my bugs in BASIC/C/Java/C++/Python come from unwarranted assumptions about the program state. There's this large set of data that changes during program execution, and I have to keep track of these changes in my head to write a correct program.

Functional languages eliminate this whole category of bugs, so bugs due to type errors become relatively more important. If 70% of bugs are due to state, 20% are due to type errors, and 10% are due to logic errors, then static typing only gets you a 20% benefit in a procedural/OO language. But in a functional language, with the 70% of state bugs eliminated already, static typing removes 20 of the remaining 30 points: a 66% benefit. And the two together can literally lead to a 10-fold reduction in bugs, since only the 10% of logic errors remain.

Agreed. But:

I find that the bulk of my bugs in BASIC/C/Java/C++/Python come from unwarranted assumptions about the program state.

Of course what you say is true. But functional programming languages have some drawbacks as well:

  1. performance issues. No matter how clever a functional programming language compiler is, it seems that imperative programming wins (at least for now).
  2. programming difficulty. Many applications have some sort of object model inside them that the programmers use to make the program work, but it is very difficult to do an object model in functional programming languages: a) for every change in the model, the whole tree needs to be copied; it is not only difficult and time-consuming for the average programmer to write all the state-duplicating functions, it is difficult to debug as well (for example, if I have a table with 10,000 records, this table has to be copied in debug mode); b) modelling everything with lists is not always desirable; c) making new monads is difficult, as monads are generally hard to digest; d) the object model has to be passed as a parameter to every related function.

The problem of unwarranted assumptions is that programmers are not able to express those assumptions in the code in a way that lets the compiler validate them mechanically. Those assumptions are part of the type system as well; ignoring them is a mistake.

I have posted my ideas here on LtU many times about how an imperative programming language could assist in eliminating bugs by allowing the programmer to state his assumptions in the code, but I am not a mathematician, an academic, or someone famous, and I am always ignored. So let me give it one more try.

I think that it is possible to statically validate a programmer's assumptions if:

  • those assumptions are expressed in the form of declarations.
  • the compiler uses sets to track possible values produced by a piece of code.

There is no need to prove that a piece of code terminates in order to make sure some state-related bugs are caught. All that is needed is to prove that the possible set of values produced by a piece of code is compatible with the set of values the programmer assumes.

This simple concept leads to a simple game of sets at compile time: the compiler must check that the set of values a piece of code can possibly produce is contained in the set of values the programmer assumes. If it is not, then the compiler must declare an error: assumptions violated.

I would go as far as saying that almost everything about a type system can be reduced to playing with sets at compile time. Of course I have no paper to back up this claim, and I am no mathematician, nor willing to become one, so if anyone reads this and disagrees, first try to run the following example in your head:

/* "assume" is the hypothetical declaration being proposed here;
   the rest is plain C. */
void open(File *file) {
    assume file != NULL;
    assume file->handle == NULL;
    assume file->name != NULL;
    file->handle = fopen(file->name, file->mode);
}

void close(File *file) {
    assume file != NULL;
    assume file->handle != NULL;
    if (fclose(file->handle) == 0) file->handle = NULL;
}

int main() {
    File *file = CreateFile("myfile.txt");
    open(file);
    close(file);
}

Please try to eliminate the calls to open, close or CreateFile, or change the order in which they are called, and see if this scheme works...
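
To make the checking half of this concrete, here is a rough sketch in Haskell (the two-state abstraction of file->handle and all the names are purely illustrative, not a real tool): each operation declares the set of states it assumes on entry and the set it can leave behind on exit, and the checker walks a call sequence verifying the subset relation.

import Data.Set (Set)
import qualified Data.Set as Set

-- A toy abstraction of file->handle: it is either NULL or not.
data HandleState = Null | NonNull deriving (Eq, Ord, Show)

-- Each operation declares the states it assumes on entry
-- and the states it can leave behind on exit.
data Op = Op { opName   :: String
             , assumes  :: Set HandleState
             , produces :: Set HandleState }

createFile, openFile, closeFile :: Op
createFile = Op "CreateFile" (Set.fromList [Null, NonNull]) (Set.singleton Null)
openFile   = Op "open"       (Set.singleton Null)           (Set.singleton NonNull)
closeFile  = Op "close"      (Set.singleton NonNull)        (Set.fromList [Null, NonNull])

-- Walk a call sequence, checking that the set of possible states
-- is always a subset of the set the next operation assumes.
check :: Set HandleState -> [Op] -> Either String (Set HandleState)
check st []       = Right st
check st (op:ops)
  | st `Set.isSubsetOf` assumes op = check (produces op) ops
  | otherwise                      = Left ("assumptions violated at " ++ opName op)

main :: IO ()
main = do
  print (check (Set.fromList [Null, NonNull]) [createFile, openFile, closeFile])
  -- Right (fromList [Null,NonNull])
  print (check (Set.fromList [Null, NonNull]) [createFile, closeFile])
  -- Left "assumptions violated at close" (close assumes a non-NULL handle)

Dropping or reordering the calls makes the subset test fail at exactly the point where an assumption is violated.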

what an annoying comment

performance issues

You're absolutely right. Because imperative languages are inherently more performant than functional ones, MLton, GHC, OCaml and CMUCL rank higher than Java, Pascal and all of Perl, Python, Ruby, PHP and their dynamic brethren in the shootout at http://shootout.alioth.debian.org. Yes, a C compiler is still on top. C is a nightmare for other reasons.

the whole tree needs to be copied

Since data is immutable, no copy of the unchanged parts is needed. You make what is an O(log n) cost sound like an O(n) cost. The additional logarithmic factor is often a small price to pay for a program that actually works and is understandable.
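
To see why, picture an ordinary binary search tree (a toy sketch in Haskell):

-- Inserting into a persistent tree rebuilds only the O(log n)
-- nodes along the path from the root; every untouched subtree
-- is shared between the old version and the new one.
data Tree a = Leaf | Node (Tree a) a (Tree a)

insert :: Ord a => a -> Tree a -> Tree a
insert x Leaf = Node Leaf x Leaf
insert x t@(Node l v r)
  | x < v     = Node (insert x l) v r   -- r is shared, not copied
  | x > v     = Node l v (insert x r)   -- l is shared, not copied
  | otherwise = t                       -- nothing changed; share it all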

Modelling everything with lists is not always desirable

But modelling everything with arrays is even less desirable. There are more advanced data structures, and using them in functional languages is generally a lot easier than in imperative ones.

making new monads is a difficult thing to do

Oh, come on, how difficult can overloading "operator ;" be? There is no magic to monads, they are just particularly useful, which brings me to the last point:

object model has to be passed as a parameter in every related function

Duh, use the state monad!

Well, sorry for the rant, but this sort of comment simply bugs me, because it is wrong and yet sounds somewhat right to the casual reader.

As for unwarranted assumptions, don't think of null pointers, think of complex invariants. For example, I had to implement a trie, which I did in a functional style in Haskell. It worked on the first run. However, I actually needed a trie with additional links (that would be a threaded trie, I think), and mutating that is only possible in an imperative style. Still using Haskell, I implemented it, and it broke: I had two procedures calling each other which stepped on each other's toes.

The invariant in this case is "Nobody else is mutating the data structure at this time", but how do you express that if you actually have to call "somebody else" to do part of your work? So the lesson is, shared mutable state is inherently difficult. If you have it when you don't need it, you make your life difficult.

State monad

Duh, use the state monad!

Right, I found that as soon as I started using the state monad, the frequency of bugs went way up. Suddenly I had to keep track of all that hidden state again.

I think the advantage of Haskell's system is that it forces you to think before you use state, and the mutable state is all explicit (at least if you use the State monad instead of IORef/STRef). It doesn't let you hide state the way OOP does.
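
For instance (a small sketch, using Control.Monad.State from the mtl library):

import Control.Monad.State

data Tree a = Leaf a | Branch (Tree a) (Tree a) deriving Show

-- Number the leaves left to right. The "mutable" counter is
-- explicit in the type; no caller can touch it, or be surprised by it.
label :: Tree a -> State Int (Tree (Int, a))
label (Leaf x) = do
  n <- get
  put (n + 1)
  return (Leaf (n, x))
label (Branch l r) = do
  l' <- label l
  r' <- label r
  return (Branch l' r')

main :: IO ()
main = print (evalState (label (Branch (Leaf 'a') (Leaf 'b'))) 0)
-- prints: Branch (Leaf (0,'a')) (Leaf (1,'b'))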

The flip side of this is that while the program ran correctly the first time, it took 5 times longer to write than an equivalent Python or Scheme program. So much of the time savings in debugging are eaten up wrestling with the typechecker. I'm hoping that this gets shorter now that I have a bit more experience, but it's definitely true that Haskell makes you do more up-front thinking to pay for the savings in debugging.

Perhaps the best approach is an impure functional language a la Ocaml. Then you could prototype your system with state and then gradually eliminate it as you work towards a production system.

Thanks for replying.

Yes, a C compiler is still on top.

Well, in real-world apps the small difference between C and FPLs will be amplified, so the argument about performance issues holds.

C is a nightmare for other reasons.

Agreed.

Since data is immutable, no copy of unchanged parts is needed.

Well, some part will be needing copying. If that part has lots of data, and the application has performance constraints, then it matters.

The additional logarithmic factor is often a small price to pay for a program that actually works and is understandable.

Yes, but how about the price of making all those functions that re-assemble the object model when a part of it changes?

But modelling everything with arrays is even less desirable.

Of course I never said that. But sometimes arrays are preferable to lists.

There is no magic to monads, they are just particularly useful

They are also hard to grasp.

Duh, use the state monad!

It is not always obvious how to use monads - especially if the state changes are complex.

As for unwarranted assumptions, don't think of null pointers, think of complex invariants

I think that any complex invariant can be reduced to a set of primitive ones.

and mutating that is only possible in an imperative style

Is there such a case? Is there anything in the imperative world that cannot be expressed in a functional way?

I had two procedures calling each other which stepped on each others toes. The invariant in this case is "Nobody else is mutating the data structure at this time", but how do you express that if you actually have to call "somebody else" to do part of your work?

Let's say that we have procedure A and procedure B and that only one procedure at a time can access a node. A state enumeration with two values {A, B} would solve the problem: a node could be in one state, but not in the other.

Please

Yes, a C compiler is still on top.
Well, in real world apps the small difference between C and FPLs will be amplified, so the argument about performance issues holds.

Please stop trolling on this forum. First you bring out the good old "FP has a performance impact" argument, while today's dynamic languages (PHP, Perl, Ruby, Python) are the slowest languages of the whole computer era. And then you start talking about real-world applications. Sorry, but I'm living in the real world, the one where OCaml rocks.

On the issue of trolling

If you don't agree with his statement, please just state why. Accusing someone of trolling simply because you do not like what he is saying is not appropriate.

As for his statement, he was comparing C with functional programming languages. The fact that “dynamic languages (PHP, Perl, Ruby, Python) are the slowest languages” does not have anything to do with the comparison between C and FPLs.

?

And how is all this related to weak vs. strong typing ???

Troll?

I am not sure this is really a troll. Perhaps it is, but from what I saw the poster may have been sincere in all his questions. After recent events we changed our policy and are actively kicking out trolls. But posters are assumed innocent until proven guilty, and like I said, I am not convinced of wrongdoing in this case.

Let me just say that the topic of weak vs. strong typing is just so silly that most discussions look like parodies or trolls. That's why I always try to keep away from threads about this topic, unless they start with a remarkable new idea or approach (and it's been a couple of years since I heard any new angle on this topic). It is just a silly argument, and when connected to the silly term dynamic languages (which I know some smart people here really like), there's no getting out of the conceptual mess.

That's, of course, my personal opinion. It is a result of long time involvement in PL discussions. Others are free to spend time on this and this might even prove fruitful, who can tell? My advice is to concentrate on other things, and direct newcomers to our previous threads and/or books.

Sorry

My sincere apologies to the original poster if he didn't intend to troll. I should stick to your policy, Ehud; it's wiser.

what an annoying post

Oh, come on, how difficult can overloading "operator ;" be?

This is a joke, right?

I should explain-- it seems to me bleedin' obvious that monads are the least intuitive fundamental feature of any current programming language. There are hundreds, if not trillions, of monad tutorials on the web, each one written in a fit of inspiration by a college student unsatisfied with the 9 or 12 he struggled through before the light went on. In fact I'd wager one can easily find pairs of monad tutorials that appear at first glance to be describing completely different things.

The analogy to operator ; is a little insightful, but it doesn't go very deep. We're really talking about higher-order functions operating on deeply nested lambdas here, and thanks to type inference, the name of the monad that affects a given do (your only clue as to the bizarre way in which the compiler is going to transmogrify this code) doesn't appear anywhere.
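
Concretely, the desugaring in question looks like this (sketched from memory):

-- What a do-block actually becomes: nested lambdas fed to (>>=).
-- Which (>>=) runs is decided by the inferred monad type.
--
--   do x <- getLine            getLine >>= \x ->
--      y <- getLine      ==>   getLine >>= \y ->
--      putStrLn (x ++ y)       putStrLn (x ++ y)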

I wrote my first monad today (a MonadPlus, no less--go me), after reading about them off and on for a week. It's a toy; it likely has no practical use. I have no idea if it satisfies the algebraic properties that the tutorials confidently demand (without ever enumerating); I have vague apprehensions that it doesn't, and if I write a more complex test case, the compiler might accidentally prove that 0 = 1 and set my office on fire.
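
(For the record, the algebraic properties in question appear to be just these equations; I state them from memory, so caveat lector:)

-- The three monad laws, as equations on (>>=) and return:
--
--   return a >>= f    =  f a                      -- left identity
--   m >>= return      =  m                        -- right identity
--   (m >>= f) >>= g   =  m >>= (\x -> f x >>= g)  -- associativity
--
-- A MonadPlus is usually asked to satisfy, in addition:
--
--   mzero `mplus` m   =  m                        -- mzero is an identity
--   m `mplus` mzero   =  m                        --   for mplus
--   mzero >>= f       =  mzero                    -- mzero is a left zero
--
-- (Exactly which extra laws MonadPlus must satisfy is itself debated.)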

I don't hate monads, honestly. I think they're interesting. But at the risk of branding myself an idiot, they aren't exactly easy.

Not exactly easy

I'd concur; one might be able to use them quickly-ish (as in, just thinking of them as printf), but really understanding them is another (not impossible, just another) step, one that you don't tend to have to take with things like defining a new operator. I think.

Monads

Monads do have uses; it sounds like the tutorials you have read haven't given any, though. For example, you could implement a parser monad designed to properly compose parsers (see this paper for example). Monads and aspect-oriented programming are also somewhat similar, so monads could be used for some of the things AOP is used for. Also, out of curiosity, what made you think they had no uses? Have you looked at what people were using them for?

I didn't mean that monads

I didn't mean that monads have no practical use. I meant that the specific monad I wrote on that day had no practical use. (I was just writing some code to make sure I understood what I'd read about monads so far.)

The parser article is cool; it's one of the tutorials I'd been working through, probably the best of them.

Sets ?

I would go as far as saying that almost everything about a type system can be reduced down to playing with sets during compile time. Of course I have no paper to back up this claim, and I am no mathematician or willing to make one, so if anyone reads this and disagrees, first try to run the following example in your head:

Before that, could you please explain what you mean by "playing with sets"?

playing with sets:

Given A as the set of values allowed by the assumptions and P as the set of possible values produced by a piece of code, P must be a subset of A. If it is not, then the assumptions are violated. In other words, the piece of code produces one or more values that violate the programmer's assumptions.

A compiler could assemble those sets by observing the source code. The A set (i.e. the assumptions) could also come with libraries.

So, if I understand you correctly....

You're saying that the language should be able to somehow note that even though a function receives a datatype as a parameter, it's actually a partial function? Perhaps that could be done by giving your language "persnickety" datatypes, such as building the type of Ints from Pos, Neg and Zero datatypes or some such, so you can divide by Pos or Neg, but if you try to divide by Int you have to use the Maybe monad?

Yes.

You're saying that the language should be able to somehow note that even though a function receives a datatype as a parameter, that it's actually a partial function?

Yes. The real problem with modern programming languages is that their type systems do not capture the requirements that the programmer has in his/her head (i.e. the assumptions). For example, a function that expects a string can also receive NULL, even though the function was never intended to be used with null strings.
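
In a language like Haskell the absence case is at least forced into the open. A small illustrative sketch (the names are mine):

-- A plain String argument cannot be "null": if a value may be
-- missing, the type says so, and the caller must deal with it.
greet :: String -> String
greet name = "hello, " ++ name

lookupUser :: Int -> Maybe String
lookupUser 1 = Just "world"
lookupUser _ = Nothing

main :: IO ()
main = case lookupUser 1 of
  Just name -> putStrLn (greet name)
  Nothing   -> putStrLn "no such user"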

Refinement Types meet Effect Systems

It looks like a mix between effect systems and refinement types. A paper with refined types was mentioned recently.

JML

...an imperative programming language could assist in eliminating bugs by allowing the programmer to state his assumptions in the code...

You might want to take a look at the JML toolset for Java (stating assumptions in a design-by-contract style), and the associated ESC/Java extended static checker (checking whatever assumptions can be statically checked). It is, AFAIK, possible to state assumptions in JML that cannot be statically checked by ESC. OTOH, JML also integrates with jUnit, so unit testing could presumably be used to provide some confidence of the validity of the assumptions that cannot be statically checked.

Stating assumptions

...my ideas about how an imperative programming language could assist in eliminating bugs by allowing the programmer to state his assumptions in the code...

I think it's already been mentioned, but this is already available for a lot of imperative languages. For C there is Splint, for Java there is JML with ESC/Java2, for Ada there is SPARK, for Eiffel there is, well, Eiffel.

If you haven't encountered these yet then I suggest you take some time and acquaint yourself with them. For an easy start there's a quick description of what JML can do for you on Kuro5hin

Thanks a lot, I did not know these existed.

I will study them as soon as I can.

Re: JML

JML is under active development; the mailing list (jmlspecs-interest@lists.sourceforge.net) has traffic pretty much every day. While I haven't used it yet (verily, I sucketh), the culture around it sounds quite sensible to me so far.

hmm...

I have posted here in LtU many times my ideas about how an imperative programming language could assist in eliminating bugs by allowing the programmer to state his assumptions in the code, but I am no mathematician, an academic or someone famous, and I am always ignored.

It makes me sad that you have this perception. I really do think that LtU is quite open to contributions from people with widely varying backgrounds and expertise. And very, very few of us are famous. I don't think that your ideas have been consistently ignored, but in any case I believe that people on LtU are very capable of considering ideas on their own merits, regardless of the prestige or expertise of the poster.

Being ignored

but I am no mathematician, an academic or someone famous, and I am always ignored.

I think you have unrealistic expectations. People will consider most ideas that posters provide reasonable arguments for, regardless of fame. This works the other way around too: Even Erik Meijer, who is famous and an academic, was flamed when he published a paper (no less) about "weak typing"...

What readers will not do is drop all their interests and develop your ideas for you. Naturally, you are expected to do this yourself. Someone may be persuaded, but usually people wait for very good arguments before replacing knowledge and beliefs they developed after long and careful study. They will discuss your ideas, but will not accept them as being all that remarkable before seeing arguments. That's the way things are unless one is extremely famous, and even then the burden of proof is on you, if anyone objects to your ideas (and eventually someone always will).

So we will probably discuss most posts, hopefully in a respectful and professional manner, but other than that life will continue as we know it. That's the reasonable thing to do.

By the way, annotations, contracts and proof assistants are all very well known approaches, so one is expected to have something new to say. If it isn't made clear what the new idea is, people will send you off to study the field and to explain your ideas better by putting them in context. This will happen even if you are an academic submitting a paper for review.

That's how the game is played. The playing field isn't level, but it isn't as crooked as you might imagine it to be.

Even if extremely famous...

It may be worth mentioning that even Wadler's and Sergei Dmitriev's latest ideas haven't gotten more than a "Hmm, isn't that interesting?" even though they're solving problems that have long been of interest to the programming language community and have very impressive track records.

Basically no ideas are greeted with an "Omigod! That's really cool! I shall drop everything I'm doing to immediately work on it!" About the only examples I can think of are the discovery of parity nonconservation in weak interactions (physics) and the double helix (biology). But structured programming was generally considered ludicrous when it was proposed, OOP took 10+ years to become popular, Guy Steele initially dismissed Java as a rather uninteresting toy, the Macintosh was a commercial failure until the development of desktop publishing, and even the web was greeted with wait-and-see skepticism for a good year or two.

Maybe?

I am curious: have you tried languages that bar (or make exceedingly inconvenient) implicit state?

Well, I've programmed in Scheme, which discourages the use of implicit state, but of course it's dynamically typed! I've been meaning to pick up Haskell, because I agree with your sentiments regarding the source of bugs. At work I program in C#. I try to avoid assignment and mutable state as much as possible. I have found that most of the bugs that are very difficult to find are rooted in implicit state (i.e. I've called two methods in the wrong order, even though from looking at the code you can't tell that the order should matter, a direct result of the methods using implicit state). Generally, my most obvious bugs are the bugs in my reasoning. The hard ones to find are hidden in code that looks like it should work, but doesn't, due to implicit state and/or type errors (if I'm using a dynamically typed language).

From a longtime static/dynamic fence-sitter...

I just picked up Haskell, and my experience has been that for the most part, once the code passes the typechecker it "just works". Yes, there are some bugs, but they usually seem to be obvious mistakes or misunderstandings about how Haskell works. I had one bug that was because I misunderstood lazy evaluation and monads, thinking I could read all input, process it as a list, and spit it out, letting lazy evaluation interleave the IO actions. I had several others due to non-exhaustive pattern matches (is there a switch to get GHC to warn about that?), and one or two that came from destructuring a complex pattern wrong. Most of the rest were typos in the output, simple clerical mistakes.
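
(By a non-exhaustive pattern match I mean things like this toy example:)

-- Compiles without complaint by default, but crashes at runtime on
-- Nothing; a warning flag makes GHC report the missing case.
fromJust :: Maybe a -> a
fromJust (Just x) = x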

Granted, I'm working on a problem domain that isn't terribly complex (a Scheme interpreter). I suspect it might be trickier if I had to do a heavily concurrent network app, or a business-data webapp. But in general, I've found that Haskell has eliminated almost all sources of bugs. Since all state is explicit, I don't need to worry about variables having the wrong value when referenced. Since evaluation order is explicit, I don't need to worry about statement-order dependencies. Since it's statically typed, I don't need to worry about runtime type errors. I'm finding that the bulk of my mistakes have been syntax errors and compile-time type errors.

I'd be very eager to see a Haskell IDE that flags syntax and type errors as you type, a la Eclipse or Netbeans. I've found that those IDEs have virtually eliminated compile-time errors from my Java code (since they're all flagged immediately). Since Haskell seems to eliminate the bulk of run-time errors, it seems like putting the two together could lead to virtually bug-free code.

my experience too

Unfortunately, I found that the mostly undeveloped record/object half of things makes Haskell nearly unusable for that "business-data webapp".

Minor OT

"is there a switch to get GHC to warn about that?"

Yes, though I forget which; if memory serves it's -fwarn-incomplete-patterns, which -Wall turns on. Check the documentation or just turn on all warnings.

Did the code work when it was written, or didn't it?

Is not really a sufficient question (necessary, maybe, but not sufficient). A set of questions closer to being sufficient goes something like this: Is the code easy to understand? Is the code closed for modification? Can the code be extended without modifying it? Are bugs added by modifying the code caught before the code goes to production?

Getting something to work "when it was written" is really the easy part. Based on my experience, I think that Michael Feathers has the right definition of legacy code: Working Effectively with Legacy Code.

Test quality

He gets close to asking the right question when he observes that we static fiends write fewer unit tests. The right follow-up would be: and did the code work when it was written, or didn't it? Were the missing unit tests needed, or weren't they?

What about measuring the quality of tests? And no, it's not that hard to use different coverage metrics. Now that the programming community has become aware of the worth of unit tests (which happened only recently), this might be the next logical step in software quality improvement. As an additional benefit, it might help reduce pointless guesses and discussions.

Kay

first time

"We have some big systems at Amazon, and most of them seem to use strong static typing"

I always heard at least the web interface to Amazon was one of the best-known cases for Perl...

This got me seriously confused ...

... until I read the note at the end where he admits he knows about the differences between static and strong, and dynamic and weak, typing. That note maybe should have been at the top. I'm not sure 'poetic effect' can justify calling Lisp a weakly-typed language. Maybe I should have written "all errors are just for poetic effect" under my examinations at university.

"You're stuck rearranging

"You're stuck rearranging your code randomly until it gets past the compiler."

Ah, so if the compiler is telling you that your code does not make sense and cannot possibly work, you want it to compile the nonsensical code anyway so you can see with your own eyes that it still does not make sense when compiled?

In my world, programmers think before they write code. After this sentence I just couldn't bear reading the remainder.

Makes no sense

That whole part of the article left me unsure whether it was serious or not.

Also, static type errors (aka compiler errors) are especially hard for language learners to figure out, because the poor program hasn't had a chance to run yet. You can't even use printf-debugging to see what's going wrong. You're stuck rearranging your code randomly until it gets past the compiler.

I don't see how he can write that and believe it makes sense. He starts from the premise that static typing prevents certain wrong programs from even running, observes that you can't use printf to debug them, and concludes that this is bad, ignoring the fact that the compiler tells you what's wrong. OK, some compilers generate cryptic error messages, but most tell you what's wrong and where.

Part of learning a language is learning to use its tools and understand what they're telling you. The compiler is just telling you that there's an error and that the program isn't valid in that language. It doesn't matter which typing scheme is used; it's the same.

Equating compiler errors with static type errors is silly. You get compiler errors with dynamic languages too.

Why Dynamic v. Static Typing Won't Go Away

This is not a statement based on rigorous research, but merely an opinion. I've read quite a few DvS type debates, and I can't say that I am ever surprised when a new one arises. There is a reason why the debate never dies, I think: for as much as we would prefer this not to be the case, the question of dynamic as opposed to static typing is one of style.

We all like to trot out our assertions and one study or another, but the issue just won't die. When people are quizzed about why they like one or the other system, too, it always seems to come down to comfort. Some people are more comfortable knowing that the compiler is watching out for their type mismatches, and some people enjoy the apparent freedom of a dynamically typed environment. I, personally, have a hard time believing anyone is ever right in these debates -- insofar as they attempt to extend their own feelings and beliefs on type systems onto others.

(I don't associate weak v. strong with dynamic v. static, because as I've always understood it, you can have a strong dynamic type system or a strong static system, as long as the values you initially assign to a variable in either case do not change in type over the life cycle of that variable. Wouldn't a weak static system be weird? Or maybe you'd prefer to call that C, which is so weird as to be completely normal.)

There's a place for both

I think the real reason the debate never dies is that both static and dynamic typing have their place. I use both, and the choice largely depends on what it is that I am coding. Analogies inevitably fail, but let's try one for this case:

If I'm building a treehouse for my kids* I'm probably not going to bother with the same sort of formal measurement that I would use to build a house for myself: line it up and cut it to fit will do most of the time. That flexibility is beneficial because I'm building around a tree, and being able to reshape and redesign things easily makes a whole lot more sense in that environment (the other option being spending several weeks surveying the tree, and a lot of long tedious calculations to deal with its twisting organic structure).

If I'm building a house to live in I'm probably not going to just slap it together: I want to have some sort of design and measure everything against that as I go so I can be sure everything will fit together exactly as I want it to when I'm done. This more rigorous checking saves vast amounts of time over a flexible approach because I know that as long as all my measurements match up, the house will stay up - with a more dynamic approach I'll have to be testing everything every step of the way.

If I'm building a bridge or a bank vault I'm probably going to have another level again of formal requirements and calculations to ensure not just that things fit together, but that I can be certain that it will be safe in all weather conditions, or properly secure from potential thieves. Bothering to have detailed specifications from which extensive calculations and proofs of deep properties can be made is going to save me effort in the long run - it's just another level of formality again, above that used to design and build the house.

Now to my mind these three cases represent dynamic typing, static typing, and full formal specification (for those who think static typing in Haskell is as high as you can go, check out HasCASL). Each has its own place, and it's a matter of what it is that you're doing that determines which is the most suitable approach. If you mostly work on treehouses, then someone telling you that you need to be properly measuring everything is going to seem like a pedant who just wants to annoy you and slow down your treehouse building. If you mostly build houses, then someone telling you to hold the 2x4 up to where it needs to go and cut it to shape is going to seem reckless and foolhardy. As long as you're looking at everything only through the lens of the things you build, the other point of view will always look silly.

* (I am not trying to suggest that dynamic typing is for inconsequential or throwaway projects, I just wanted an example that is fairly dynamic and works best with an evolving approach)

Style Remains

Allow me to play the devil's advocate. I don't believe that the treehouse v. house v. bridge analogies map so cleanly to type systems; actually, I think they work much better as analogies to design approaches. The treehouse, for example, is representative of the programmer who does not formally design his or her application, but rather codes on the fly and lets the design happen in the process. The house analogy works well with the notion that you should design, but accept that some things may change or not work as expected, and work around that on the fly. (Which is what I have often experienced in typical academic and industrial design.) The bridge scenario, of course, is one in which "things may happen" is unacceptable, and all scenarios must be considered (which is probably the approach they have at NASA when sending astronauts into space).

As I view things, and perhaps you as well, applying the analogies to approaches in design does not necessarily constrain the type systems used. It is fully possible to do mission-critical design with Scheme. It probably isn't done, but it certainly could be.

Personally, I think type systems have a greater impact on the implementation of a broad design than on that design itself. My design may call for a factorial function, but viewed only as such, one cannot conclude that a specific type system is needed for this task. Any would do, and in either a dynamic or static setting, type checks could be applied, although in the former one could certainly choose not to, and trust in one's ability not to cause a type mismatch down the line. But I am just going back, I think, to my original thought that at the base of all arguments on type systems lies personal preference and philosophy. And I certainly agree with your idea that static and dynamic type systems each have their place, and hopefully we can agree that this place is not rigidly defined.

style or ignorance?

If we had already established, for some category of software and development approach, that there was no significant difference in quality or cost or ..., then we might whimsically choose between statically checked and dynamically checked languages.

Do we have that data?

Bilbao museum

If I'm building a treehouse for my kids* I'm probably not going to bother with the same sort of formal measurement that I would use to build a house for myself: line it up and cut it to fit will do most of the time.

I wonder into which category Frank Gehry's Guggenheim museum falls? Do we already have languages expressing something analogous?

Kay

Brainf*** maybe?

Brainf*** maybe?

I believe that's where the

I believe that's where the analogies start to break down...