## Bruce Tate: Technologies that may challenge Java

An article by Bruce Tate about a few 'technologies' that may challenge Java. The first one is 'Dynamic Languages.' He uses one example where two variables are operated upon and assigned in a single line, x1, x2 = x2, x1+x2, in contrast to the Java approach, which takes more lines. He also says "With Ruby, types are dynamic, so you don't have to declare them." If I am not mistaken, both can be done (and are done) in statically typed languages: single-line, multiple assignment can be done using tuples, and type inference allows one to avoid declaring types.
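The parallel-assignment one-liner Tate cites can be sketched in Python, which shares Ruby's semantics here; the `fib` function is an invented example, not from the article:

```python
def fib(n):
    # Both right-hand sides are evaluated before either assignment happens,
    # so no explicit temporary variable is needed.
    x1, x2 = 0, 1
    for _ in range(n):
        x1, x2 = x2, x1 + x2
    return x1

print(fib(10))  # → 55
```

In Java the same swap-and-add step needs an explicit temporary, which is the contrast Tate draws; but as noted above, a statically typed language with tuples expresses it just as directly.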

He also mentions continuation servers, metaprogramming, and convention over configuration. As an aside, while describing continuations, he writes: "Continuations are language constructs that let you quickly store the state of a thread, and execute that thread later." As far as I know, continuations simply allow an explicit return address of a function/procedure to be provided (rather than the default).


As far as I know, continuations simply allow an explicit return address of a function/procedure to be provided (rather than the default).
Sounds like CPS, one of multiple applications of the notion of continuations, but definitely not a definition of a continuation.

And yes, one can think of a continuation as a snapshot of a thread (e.g., see Continuations and threads: Expressing machine concurrency directly in advanced languages).

Oops, the old link to the paper does not work, use this instead.

Thanks! I didn't realize I was thinking of CPS and continuations as the same thing. I'll be sure to read the paper as well.

### Dynamic languages

Yeah, people seem to make this mistake a lot. They assume that any feature in Ruby/Python that is not in Java/C++ is not possible in a "static language". It's painful to see such widely published misinformation.

His points:
- simplicity of "Hello World"
- parallel assignment
- Ruby blocks, looping functions
- metaprogramming

The first two are obviously nonsense. Type inference can make the third just as easy in a static language.

The fourth is interesting. Though there are things like MetaOCaml, Template Haskell, and Nemerle macros, these metaprogramming facilities don't allow the kind of wacky runtime manipulation allowed in Ruby. However, I think this limitation isn't caused by the "static-ness" of a language but by the fact that most static languages are geared for runtime efficiency, not runtime mutability. If you're willing to sacrifice some speed, I think metaprogramming can be made just as easy.
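As a sketch of the kind of runtime mutability meant here, a dynamically typed language like Python (or Ruby) lets you attach a method to an already-defined class; the names `Account` and `audit` are made up for illustration:

```python
class Account:
    def __init__(self, balance):
        self.balance = balance

def audit(self):
    # An ordinary function; attaching it to the class turns it into a method.
    return f"balance={self.balance}"

Account.audit = audit          # "open class" style: added after the class exists
print(Account(42).audit())     # → balance=42
```

This is exactly the kind of manipulation that compile-time facilities like Template Haskell don't aim at, since they trade runtime mutability for efficiency, as the comment above argues.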

One valid difference is that dynamic languages are primarily structurally typed while static languages are primarily nominally typed (simply because that's the most natural way to implement them). I think it's fair to say that structural typing can be more flexible.
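A small illustration of the structural side, using Python's `typing.Protocol`, which retrofits a structural interface onto an otherwise nominal checker; the names `Quacker` and `Duck` are hypothetical:

```python
from typing import Protocol

class Quacker(Protocol):
    # A structural interface: anything with a matching quack() conforms;
    # no inheritance or explicit declaration is required.
    def quack(self) -> str: ...

class Duck:
    # Never mentions Quacker, yet conforms structurally.
    def quack(self) -> str:
        return "quack"

def noise(q: Quacker) -> str:
    return q.quack()

print(noise(Duck()))  # → quack
```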

### Nonsense?

You say "the first two are obviously nonsense." Okay, the "parallel assignment" point is pointless, but I think you're too quick to dismiss "hello, world." The size of a minimal printing program is IMHO a rough but useful measure of how much meaningless crap must be included in any working program. If you have to wrap "print 'Hello, world'" in "class Hello { public static void main(String [] args) { ... }}", it's likely that other similarly simple programs will require analogous encumbering garbage.

### Author's intent unclear

Right, but Tate's first example to illustrate his claim that "Dynamic languages can be much more productive than static ones like C++ or Java" was "Hello, World". If you take the mention of C++ and Java to be intended as examples representative of static languages, then Kannan's point is valid, since e.g. the Haskell for Hello World is just print "Hello World". OTOH, if "like C++ or Java" was intended as a qualifier intended to exclude some more concise/expressive/whatever static languages, like Haskell, then Tate's statement is valid. But that's hard to defend, because the Hello World example shouldn't be in the "dynamic language" category then, but in a "more expressive language" category, or similar.

Anyway, Kannan is right about the truly interesting aspect of dynamic languages — it's the ability to easily do things at runtime that require more hoop-jumping in static languages. (Perhaps I should add "like C++ and Java" to cover my ass.)

I have an example that I think is a bit more real-world than Hello World, yet still shows where a dynamic language shines: reading a file.

Java: [ many lines of code that this forum software doesn't want to accept ]

vs Tcl:  

set fl [open "somefile"]
set data [read $fl]
close $fl

where $fl is the channel ID as returned by open. There...

"If you only work with small files, that's a perfectly reasonable approach."

I'm sure that my files are, yeah, smaller than yours, since I won't be using verbose and redundant XML as often with Tcl as I would with Java.

"x for scripting, y for application development."

Application software development is not something so clever and secretive that only programmers of statically typed and severely constrained languages, like y, can take part in. Creative programmers can assemble rather original and creative applications out of the automation of several disparate little utilities glued together with x. In fact, many xs encapsulate such utilities in the same language as extensions or modules, giving you a fully integrated and self-contained development environment. You should try it. It would give you a different perspective on what application programming is all about.

"ignored the power that Java gives you, because the problem space you mention is so small..."

Everything is smaller and simpler outside Javaland...

### Language Shootout examples

Humbly suggest that the Computer Language Shootout already provides enough tiny example programs in different languages, for example sum-file.

### Funnily enough, statically ty

Funnily enough, statically typed Haskell has you beat:

readFile "somefile"

### For a suitable value of "beat"

Where "suitable value" is a LOC metric. However, it ignores the performance options.

### haskell is likely to

beat Java at the performance level no matter if you use a BufferedInputStream or other annoyingly named class... most results shown here seem like a testament to that: http://shootout.alioth.debian.org/

### Is that so?

Perhaps you should study this link then. It seemed like a reasonably good I/O benchmark, and it says volumes. In particular, note that Java came in 4th, right behind C and SML (way to go, ML!). Also note that Haskell came in 20th, and was 15x slower than the Java version.

Perhaps you were looking at a different test?

### Well...

Well, performance on the Computer Language Shootout is only as good as the programs people contribute. fasta also spends a chunk of time generating random numbers and selecting from a probability distribution; Haskell does better on sum-file. (Note: these links show Java 1.5.)

### True

I didn't mean to disrespect Haskell, as I think it's a fine language (except for rampant overloading of the whitespace operator! ;). And for all the development effort spent on Java, it would be embarrassing if it didn't do well in many of the tests. But it would be much nicer if people got their facts straight before slinging accusations.

### OT: space leak

From the looks of the memory consumption on the fasta test, it looks like the Haskell version is suffering from a laziness-induced space leak.

### readFile certainly isn't the

readFile certainly isn't the only way to read a file in Haskell, if that's your worry. The main complaint about Java is that it doesn't contain any of the useful combinations of its facilities, much as if Haskell left you to declare functions like map and sum in terms of fold.

### The library sucks, not the type system

In Haskell I can write readFile "somefile" if I want to just read the file contents. If I need to convert the value I can write read $ readFile "somefile" (the standard conversion) or myParser $ readFile "somefile".

### Nearly. You've missed out the

Nearly. You've missed out the fact that readFile's in the IO monad.

### Duh

*smacks self in forehead* Err... hmmm... let's assume I meant this:


($) :: Monad m => (a -> b) -> m a -> m b
f $ m = m >>= (return . f)


Or just: liftM myParser $ readFile "somefile"

### That fragment can be statically typed.

I thought the GP post explicitly clarified that we're talking about dynamic vs. static, not dynamic vs. Java. Anyway, your example was:

set fl [open "somefile"]
set data [read $fl]
close $fl
return $data


That looks statically typable to me.

It may be a little longer in Java because:

- Java lacks local variable inference for declarations with initializers. C# 3.0 has it. It's trivial to implement.
- Java's standard library lacks a function that reads all the lines from a file (right?). Static typing does not preclude defining such a function.
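To make the second point concrete, here is a sketch of such a "read all lines" helper in Python; the function name `read_lines` and the file name are invented for the demo, and nothing about the function resists being given a static type:

```python
from pathlib import Path

def read_lines(path):
    # The convenience function the comment says Java's library lacks:
    # slurp a whole file and split it into lines.
    return Path(path).read_text().splitlines()

Path("somefile").write_text("first\nsecond\n")  # create a small demo file
print(read_lines("somefile"))  # → ['first', 'second']
```

In a typed language the same function would simply carry a type like String -> List<String>; the brevity comes from the library, not from dynamic checking.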

### Parallel assignment counterexample

He uses one example where two variables are operated upon and assigned in a single line: x1, x2 = x2, x1+x2, in contrast to Java approach which takes more lines.

Parallel assignment certainly doesn't require a "dynamic language" (if by that he means dynamically checked). The venerable occam programming language, which is most certainly statically typed (and does not use type inference), supported multiple assignment a decade ago: x, y, z := 4, z, x is perfectly valid occam, assuming that all of the variables have been previously declared and are type-compatible. I'm sure there are plenty of other languages that likewise support multiple assignment.

### Interesting note...

I bought his new book, Beyond Java, due largely to the enormous hype, and the fact that a lot of people are going to read this book and I may as well be conversant in its arguments. So far, it's exactly what you'd expect from the linked article. I know I'm not really in the target audience, exactly, but frankly the book's pretty weak. He does a good job identifying a lot of the weaknesses of Java, and he seems to do a good job presenting what he does choose to cover, but his conclusions and a lot of his terminology betray his ignorance of what's really out there. For instance, he uses the word "dynamic" as an exact synonym for "good", with absolutely no critical examination of that assumption, and while many of his criticisms of Java are type system-related, his only conclusion appears to be that static analysis is a complete waste of time.

However, one very interesting thing... There's a one-page section on functional languages toward the end of the book. He puts it in the "minor contenders for Next Big Thing" section, and very notably it's separate from the Lisp section. He briefly discusses Haskell and Erlang, and while everything he has to say is positive (he even goes so far as to say "[Haskell is] easy to teach," which is pretty remarkable), his conclusion is that these languages are not yet ready to be the Big Thing. In fact, according to Tate, "Erlang is in its infancy as a general-purpose language." Infancy! I think that provides a pretty interesting insight into exactly what level of adoption the industry expects...

But I really find it remarkable that we've gotten to the point where "it's a bit too early to be talking about functional languages." If the perception is that Haskell is in our distant future, that it's a good thing that will someday arrive, well, I find that surprising. Surprising and good.

### I know I'm not really in the

I know I'm not really in the target audience, exactly, but frankly the book's pretty weak.

Yeah, it's a book geared primarily to Java developers and the trends that the typical Java programmer should be looking out for. The book wasn't meant to delve into type theory.

...he seems to do a good job presenting what he does choose to cover, but his conclusions and a lot of his terminology betray his ignorance of what's really out there.

What is really out there?

For instance, he uses the word "dynamic" as an exact synonym for "good"

I didn't read that. I read that he felt he was productive in Ruby which happens to be dynamically typed. I wish he would have commented on Boo, but I guess Boo is too new.

In fact, according to Tate, "Erlang is in its infancy as a general-purpose language." Infancy! I think that provides a pretty interesting insight into exactly what level of adoption the industry expects...

But I really find it remarkable that we've gotten to the point where "it's a bit too early to be talking about functional languages." If the perception is that Haskell is in our distant future, that it's a good thing that will someday arrive, well, I find that surprising. Surprising and good.

I'm not sure what you're getting at there. Are you trying to say Haskell is the holy grail? If anything, Ruby would give Java programmers more of a functional style than Java, but closer to Lisp than Haskell.

### Responses, clarification, etc. etc. etc.

I know I'm not really in the target audience, exactly, but frankly the book's pretty weak.

Yeah, it's a book geared primarily to Java developers and trends that the typical java programmer should be looking out for. The book wasn't meant to delve into type theory.

But I knew that going in. I'm a professional programmer, and don't consider myself unusual in that capacity. I realize that I spend more time thinking about basic computing than some of my peers, but I think that's unfortunate and regrettable.

A large part of the book seems to be an attempt to convince people that they need to learn something new, against their wishes and better judgement. "I admit unashamedly that I liked having my head in the sand," leads to, "It's time to start paying attention again." Who are these so-called programmers who need to be cajoled and wheedled into thinking critically about programming, into lifting their eyes to the horizon?

Programmers should love programming. That's an axiom of mine. I have no sympathy or patience for people who are programmers because they heard they could make good money, or who think of themselves as "Java programmers" first and "programmers" second. I'm sorry if that seems overly harsh. Programmers should love programming, and they should love learning. That's part of what it means to be a programmer.

...he seems to do a good job presenting what he does choose to cover, but his conclusions and a lot of his terminology betray his ignorance of what's really out there.

What is really out there?

That was too strong of a statement. I guess I was fired up, and I appreciate being called on it. But, a couple of examples:

- static type systems that are more powerful, flexible, and less obtrusive than Java's.
- the real differences and advances coming from Microsoft.

For instance, he uses the word "dynamic" as an exact synonym for "good"

I didn't read that. I read that he felt he was productive in Ruby which happens to be dynamically typed.

From page 2: "More to the point, I've found that Java does everything that I need, so I haven't looked beyond these borders for a very long time... I know some languages are more dynamic, and possibly more productive in spurts..." The clear implication is that dynamic has an inherent positive value, a value that Java must compensate for elsewhere (in market share, in this case).

In fact, according to Tate, "Erlang is in its infancy as a general-purpose language." Infancy! I think that provides a pretty interesting insight into exactly what level of adoption the industry expects...

Language adoption. My point is that I (and probably most people on LtU) would not use the word "infancy" to describe Erlang. I think of Erlang as mature, commercially-backed, and having a large community. We spend a lot of time talking about the old catch-22: a language can't build community without being used, but nobody will use a language without a large community. The fact that, to Tate, Erlang is still in its infancy indicates to me just how severe that problem is. But in that gap lies competitive advantage, and I suppose that's where fortunes are made (if you're into that sort of thing).

But I really find it remarkable that we've gotten to the point where "it's a bit too early to be talking about functional languages." If the perception is that Haskell is in our distant future, that it's a good thing that will someday arrive, well, I find that surprising. Surprising and good.

I'm not sure what you're getting at there. Are you trying to say Haskell is the holy grail? If anything, Ruby would give Java programmers more of a functional style than Java, but closer to Lisp than Haskell.

No, that's not really what I meant. I just find it encouraging that functional programming is now perceived as being "the future," albeit a distant one, rather than a marginal and irrelevant academic curiosity, or perhaps worse, a dead idea from the past. I bet if this book had been written four years ago, there would've been no section on FP at all. That's exciting!

All in all, I think I came off as too harsh in my first post. I think it's fair to say that if you didn't like the linked article, you certainly won't like the book, but it's not a terrible book, and it could have a very positive effect. I just don't think it lives up to its potential, and it really saddens me that a book like this is necessary in the first place.

### as a general purpose language

"Erlang is in its infancy as a general-purpose language."

IMO that isn't so unreasonable: is Erlang a "general-purpose" (whatever that means) language?

### A large part of the book seem

A large part of the book seems to be an attempt to convince people that they need to learn something new, against their wishes and better judgement. "I admit unashamedly that I liked having my head in the sand," leads to, "It's time to start paying attention again." Who are these so-called programmers who need to be cajoled and wheedled into thinking critically about programming, into lifting their eyes to the horizon?

The target audience is Java programmers. Is LtU just a site for academics to pontificate about type theory, or is it also meant to expose 9-to-5ers to other possibilities in the PL world? All I read him saying is "Hey guys, there's life beyond Java and other tools to consider." And from my experience there seem to be a lot of "Java programmers" and not just programmers.

Programmers should love programming. That's an axiom of mine. I have no sympathy or patience for people who are programmers because they heard they could make good money, or who think of themselves as "Java programmers" first and "programmers" second. I'm sorry if that seems overly harsh. Programmers should love programming, and they should love learning. That's part of what it means to be a programmer.

Yes, as I said above, there seem to be a lot of "Java programmers" who have put a lot of resources and time just into Java. If you say "just use Haskell", then you'll get nowhere with these people.

That was too strong of a statement. I guess I was fired up, and I appreciate being called on it. But, a couple of examples:

* static type systems that are more powerful, flexible, and less obtrusive than Java's.
* the real differences and advances coming from Microsoft.

From page 2: "More to the point, I've found that Java does everything that I need, so I haven't looked beyond these borders for a very long time... I know some languages are more dynamic, and possibly more productive in spurts..." The clear implication is that dynamic has an inherent positive value, a value that Java must compensate for elsewhere (in market share, in this case).

I think the number of statically typed, type-inferred languages that he would consider as alternatives to Java is pretty minimal. But I agree that he and Bruce Eckel seem to have a knack for equating terseness with dynamic languages.

Language adoption. My point is that I (and probably most people on LtU) would not use the word "infancy" to describe Erlang. I think of Erlang as mature, commercially-backed, and having a large community.

How much adoption is there and how big is the Erlang community?

No, that's not really what I meant. I just find it encouraging that functional programming is now perceived as being "the future," albeit a distant one, rather than a marginal and irrelevant academic curiosity, or perhaps worse, a dead idea from the past. I bet if this book had been written four years ago, there would've been no section on FP at all. That's exciting!

Perceived by who? If anything, I came away from the book thinking that Tate is more interested in productivity than functional purity.

All in all, I think I came off as too harsh in my first post. I think it's fair to say that if you didn't like the linked article, you certainly won't like the book, but it's not a terrible book, and it could have a very positive effect. I just don't think it lives up to its potential, and it really saddens me that a book like this is necessary in the first place.

Well at least it was a short read and a somewhat interesting topic.

### Naive examples

Java could have had an improved syntax, but the actual reason that it has inherited syntax from the C family of languages is familiarity.

I don't consider the 'hello world' program a serious example. Are programs that consist of millions of lines of code going to be built only out of statements? The answer is obvious: no. Classes and methods are 99.999% of a program, so any 'puts' (or similar) statement is going to exist as part of a method anyway.

Multiple assignment is also a non-important feature. In how many places in a program is multiple assignment needed? The answer is again obvious: in very few. And having assignments separated actually increases program readability: if, for example, there were a line with 10 assignments, the reader would have a hard time understanding immediately which value is assigned to which variable.

Dynamic typing has its place in the universe, and so does static typing.

Let's not forget that Smalltalk has been around for more than 20 years, but it has never caught on. And the last time I used Smalltalk (a few months ago), it was much slower than Java.

The reason Ruby on Rails is successful is because the J2EE framework is too heavy; it has nothing to do with Java. There are other ways to make web apps that are much easier than J2EE.

### Multiple assignment

In how many places in a program is multiple assignment needed? The answer is again obvious: in very few.

What if it follows from a more general mechanism? In Python it falls out as a pretty elegant special case of sequence unpacking. Sequence unpacking makes it easy to do multiple assignment and handle multiple return values from a function, among other things. The way it works is very simple: if, for instance, s is a three-element sequence then the statement a, b, c = s is equivalent to the statements a = s[0], b = s[1], c = s[2]. Whenever you have a comma-separated list of expressions following the '=' operator, a sequence (a tuple, in fact) is implicitly constructed which contains the values of those expressions. So a, b = b, a is operationally equivalent to a, b = (b, a). (This implicit sequence construction also happens with the argument to the 'return' operator and in a few other cases.)
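A minimal demonstration of the unpacking rules just described:

```python
s = [10, 20, 30]          # any three-element sequence works
a, b, c = s               # same as a = s[0]; b = s[1]; c = s[2]
a, b = b, a               # the right side builds an implicit tuple first
print(a, b, c)            # → 20 10 30
```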

### Similarly it's beautiful as a

Similarly it's beautiful as a use of pattern matching in Haskell - I do it all the time.

### Actually you do both with exa

Actually you can do both with exactly the same semantics in Oz.

local
[A B C] = D
in
...
end


and

{fun {$ [A B C]} ... end D}

would be equivalent.

### please be specific

Let's not forget that Smalltalk has been around for more than 20 years, but it has never caught on.

Smalltalk caught on in the late '80s / early '90s and then suffered the double whammy of inept vendor strategy and "free as in beer" Java.

And the last time I used Smalltalk (a few months ago), it was much slower than Java.

Which Smalltalk implementation, which Java implementation, for what task, and how much slower is "much slower"?

### Multiple assignment is also a

Multiple assignment is also a non-important feature. In how many places in a program is multiple assignment needed? The answer is again obvious: in very few.

It's another tool in the bag. Just because you don't find it useful doesn't mean much.

The reason Ruby on Rails is successful is because the J2EE framework is too heavy; it has nothing to do with Java. There are other ways to make web apps that are much easier than J2EE.

I don't think so. It's more about the language than RoR. I've read the book, and there are lots of interviews with Java "experts" who are tired of all the boilerplate and the lack of features that Ruby has and Java doesn't: metaprogramming, closures, continuations, open classes, mixins, operator overloading..., and just a general conciseness to the language. Take away the fancy IDEs for Java and you've got trouble for many "Java programmers".

### problem is...

"Take away the fancy IDEs for Java and you've got trouble for many 'Java programmers'"

That won't ever happen. Java the language and its IDEs are like salt 'n' pepper, and Java programmers are addicted to both, especially when you can get high-quality ones for free, as with Eclipse or NetBeans... BTW, the IDEs handle all the boilerplate for you, so many Java programmers aren't too keen on arguments that go like "less tokens to type"...
It's difficult to sell the benefits of nicer type systems (or the lack of one), metaprogramming, closures, mixins and the like to such an audience, who mostly think of Java as a top-notch technology and most other, "easier" stuff as just dismissable "scripting" tools...

### BTW, the IDEs handle all the

BTW, the IDEs handle all the boilerplate for you, so many Java programmers aren't too keen on arguments that go like "less tokens to type"...

Yes, that was my point, and why programming in Java isn't so painful when you have these IDEs. But it just doesn't smell right to me when IDEs have such a great supporting role. Not that I don't like IDEs, because I do.

It's difficult to sell the benefits of nicer type systems (or the lack of one), metaprogramming, closures, mixins and the like to such an audience...

Here is the server side discussion on the book; nearly 400 posts on the thread. There's always going to be people resistant to change.

### The IDE is the language

Gregor Kiczales (of AOP fame) made this point (that the real language is what the IDE needs) quite cogently recently at a talk he gave at the CASCON 2005 workshop Trends in Aspect-Oriented Software Development Research. So in a way, Java becomes the "assembler" language, and IDEs like Eclipse generate this assembly from the higher-level language that the IDE defines through its interface. There are others working on similar ideas. This is of course related to Intentional Programming, but also to something that energized the OOPSLA crowd, Subtext (discussed before and recently). While this may be a bit scary for PL designers, it should be no different than other trends like javadoc, literate programming and embedded contracts, all of which are perceived by users as being programming language extensions, but are most often not done by PL designers (with exceptions, naturally).
[Fixed the missing </i>, thanks]

### interesting

but I think you should put an end to your "i" HTML tag. It's scrambling everything down there. Anyway, yes, the IDE user-commands API is a non-written programming language in itself. I always get this feeling that I'm programming in an Emacs keyboard-combo sublanguage that is far more productive than the actual programming language I'm using. :)

### It's another tool in the bag.

It's another tool in the bag. Just because you don't find it useful doesn't mean much.

Indeed, but it's hardly a reason to promote it as a criterion for choosing one programming language over another.

I've read the book and there are lots of interviews with Java "experts" that are tired of all the boilerplate and lack of features that Ruby has and Java doesn't.

My experience is different: J2EE programmers are complaining about the J2EE boilerplate (XML config files etc.), whereas in Ruby everything is done from code. Java has closures, continuations are not needed in web apps, and open classes, mixins and operator overloading are not needed anyway...

Take away the fancy IDEs for Java and you've got trouble for many "Java programmers".

When it comes to J2EE boilerplate, indeed Java IDEs are invaluable. But I cannot see how a team can manage a big Ruby web app with vi or Emacs only: IDEs are necessary for any language.

### KISS

One of the prime principles of good programming. And it also serves well as an organizational approach: remove most complexity and redundancy and you end up with simple things, promptly able to be managed by simple tools.

"But I cannot see how a team can manage a big Ruby web app with vi or Emacs only: IDEs are necessary for any language."

I beg to differ. IDEs are mostly necessary with languages that rely on too much boilerplate and have too much dependency on external tools. They're not vital with more expressive languages.
For team programming, I believe source control software and a well-defined project layout are of far more relevance. Most of the time all I need for personal projects is Emacs, a simple but comprehensible Makefile, an interface to cvs and a shell to test it all. Of course, all without leaving Emacs anyway... A powerful, flexible and expressive language also helps. A lot! :)

### Specific examples

IDEs are mostly necessary with languages that rely on too much boilerplate and have too much dependency on external tools. They're not vital with more expressive languages.

Would you consider Smalltalk to be a more expressive language?

### smalltalk

is pretty expressive, I guess. But it relies on a huge OO hierarchy that is nearly impossible to manage without specific integrated tools. Alas, weren't GUIs and IDEs first shots with Smalltalk?

### No

Actually, the first general-purpose (i.e. I'm not counting Ivan Sutherland's "Sketchpad," which was tied to the application) GUIs and what we would now call "IDEs" were on Lisp Machines at MIT. One of the reasons that users of Lisp Machines tend to go on and on about them is precisely that they were an extremely sophisticated environment that was nevertheless open: you could see and do anything you wanted, at any level, from device driver to window system and beyond. Ultimately, of course, this property would be shared by the Smalltalk machines—in fact, some of Xerox's machines could be either Lisp or Smalltalk machines, the choice being a "simple matter of microcode." With Squeak, we've come essentially full circle on modern hardware, since you can generate the C source code for the virtual machine from the sources in Squeak anytime you wish.

To get back to the main point, I don't see a correlation between the possession or lack of an IDE and a language's expressiveness. I do, however, see a correlation between a language's expressiveness and the level of integration of the IDE.
One of Lisp's and Smalltalk's great strengths—and great weaknesses—has historically been that their environments have been too well integrated, making integration with other environments and turnkey delivery awkward at best.

### levels

"I do, however, see a correlation between a language's expressiveness and the level of integration of the IDE."

Does that apply to Java and Eclipse? Because they sure are well integrated.

### To Exclude the Middle or Not?

rmalafaia: Does that apply to Java and Eclipse? Because they sure are well integrated.

Java and Eclipse fall somewhere between, e.g., Metrowerks' CodeWarrior IDE and a Lisp Machine or Smalltalk environment. They're certainly better integrated than the former, thanks to the common VM, reflection, and the effort to achieve said integration, but they aren't on a par with environments in which the compilers, documentation tools, debuggers, browsers, etc. are completely embedded in the image that contains your program. For example, consider that a Java Scrapbook in Eclipse runs in its own virtual machine. Note that it's a good news/bad news story: there are good reasons to have a single image, and it was indeed amazingly powerful. On the other hand, with great power comes great responsibility; one wrong move in a Lisp Machine's structure editor on the wrong part of the system could make the whole system inoperable. Conversely, there are good reasons to want a "phase distinction" between your IDE and your application, not the least of which is the ability to deliver your application without the environment necessary to build it. The cost is some level of flexibility, customizability, and fluidity in the development process. It's perhaps worth observing that much of the evolution of the J2EE application development stack over the past decade has consisted of applying the Fundamental Theorem of Software Engineering ("All problems in Computer Science can be solved by another level of indirection" — Butler Lampson) to support, e.g.,
live update of code without halting the application server, or hitting a keystroke to go from a method invocation to the definition of that method—features that Lisp and Smalltalk environments had 25-30 years ago. I'm reminded of Grady Booch's comment at EclipseCon 2004: "For those of you looking at the future of development environments, I encourage you to go back and review some of the Xerox documentation for InterLisp-D." ### source code in files; how quaint "I mean, source code in files; how quaint, how seventies!" So Smalltalk is "pretty expressive" and yet Smalltalk language implementations are much more tightly integrated with an IDE than Java language implementations. (And the Smalltalk class hierarchy is no larger than the current Java or C# or... libraries.) imo It would be more reasonable to argue that expressive languages (Smalltalk, Lisp) make it easier to implement powerful IDEs. ### IDEs You mention Emacs as if it were the antithesis of an IDE. Well, I'm not sure exactly where you draw the line between IDEs and non-IDEs, so this seems like a discussion very open to bait-and-switch. Off hand, I cannot think of any significant piece of IDE functionality that couldn't be embedded in Emacs somehow, perhaps tediously. I do think there are kinds of functionality that are often associated with IDEs (sophisticated refactoring and code navigation tools, perhaps) that probably aren't freely available for Emacs. So, I'm tempted to try to counter your point (e.g. "efficient code navigation tools, as found in some IDEs, are key to high productivity when working in huge codebases") but it would be too easy to retort with "you could do that in Emacs too, in principle at least" (which would be absolutely valid). ### no "You mention Emacs as if it were the antithesis of an IDE" I just mentioned a simpler approach to IDEs. Emacs and Vim are very powerful text editors with scripting capabilities which let users freely extend the software featureset with ease. 
And, better yet, they both don't consume all the resources in your computer like a modern-day ( specially java-based ) IDE does. ### Tradeoffs Yes, complex GUI IDEs are far more computationally expensive than text-based IDEs. And having used both, I can say with confidence that I'm far more productive in a GUI IDE than I am in a text-based one, but the productivity gain is a factor of how well the language is integrated with the IDE (for instance, most C++ IDEs don't have terribly good integration, so the gains there are small). I use IDEs for the same reason I write functions and user-defined types. I encapsulate certain repetetive behaviors in a program, rather than my brain, and let the computer do the hard work. Refusing to use a good IDE is like refusing to write higher-order functions. You don't know what you're missing until you've tried it. ### trying I've programmed professionally with Delphi, Visual Studio and Eclipse. Even knowning them well enough and handling most actions via keyboard shortcuts, i still feel they sometimes just get in the way too much. Eclipse is wonderful and fully loaded with incredible refactoring utilities, but in the end of the day, you can see that many of its advantages are there to cope with java deficiecies as a language. Things that are a non-issue in other higher-level languages or easily remedied without much trouble. Organizing imports is a good thing when you need half a thousand of them to do anything, but hardly a trouble when you have less than ten. Or perhaps the common boilerplate of encapsulating private fields? Besides, i don't think it's the fact that they are bloated because they are graphical. They're bloated because they try very hard to integrate with all aspects of the underlying system. If you're modeling databases or activerecords or whatever like that, they open a connection to the DB and show you the available fields or something. 
People rely on graphical debuggers and their thousand messages generated as if it were a sure-bet thing, when 90% of the time it would prove easier to just take a peek at the stack trace. They sometimes can get in the way the same way as those annoying "autocompletion" resources of M$ Word... they sometimes confuse more than help with such a cluttered interface.

When I can, I use a good IDE: it's called Emacs, and it doesn't get in my way, except when and if I tell it to. Vim is much the same joy as well.

I would gladly trade off bloat and complexity for lightness and simplicity. But the industry ideal is that tools should be stupid so that stupid programmers don't cause much trouble...

### Eclipse

I don't want to get into IDE wars, and IDEs are perhaps less relevant to language theory than other topics, but I don't think they are completely off-topic. I agree that lots of imports can be annoying. On the other hand, the reason other languages get away with fewer imports is because they have fewer libraries. When you have as many libraries available as Java does, it simply becomes necessary to partition all the names into different packages. I don't see any other way to deal with a large amount of code in *any* language, although that might be a good research topic. Even in C++, there are times when a file will have more #include lines than source lines. The way I see it, the alternative is to bring every name into scope, which would simply cause a nightmare of problems.

When it comes to refactoring, that's something that's useful in any language for large-scale and long-term development. And most languages/IDEs don't do it as intelligently or exhaustively as Eclipse + Java does.

You're right that some of the bloat is due to a large number of features. What you're missing is that some of us actually need and use all of those features. If anything, I don't use most of the DB plugins available for Eclipse because they are not *powerful enough*, not because they are bloated. And when you consider how well VS integrates DB access with, say, VB, it's apparent that these features are very useful.

When it comes to graphical debugging, 90% of the time I just *do* look at the stack trace. But for that 10% when the stack trace doesn't tell the whole story, it sure is nice to be able to click on one of the variables in the local block and inspect its value without having to type out commands. It's also nice to inspect several variables in parallel, which becomes a lot more difficult in a CLI environment.

But the other features of Eclipse that I find particularly productive are the auto-complete, which is type-sensitive (it will suggest method completions that return a matching type first), inline JavaDocs, integrated SVN (I can look at my project tree and see exactly which files are dirty, what version they are, who last updated them, and when they were last updated, without typing a single command), integrated 3rd party docs (I can point a jar to its JavaDoc root, and get F1 help on 3rd party jars just as easily as builtins, using the exact same interface), integrated build tools (ant is more powerful than make, giving the power of text-based scripting, but giving GUI shortcuts like different target sets), integrated testing (one-button JUnit execution and nice IDE reporting, as well as interactive test-fail integration), integrated profiling (with TPTP, I get a nice call graph, call and time statistics, memory usage, method coverage, etc.), and completely configurable windows (you can dock them anywhere in any combination you prefer, and arrange them in sets via perspectives).

I've used a fair number of IDEs, and even though all of the features I mentioned would be useful in any language, I've never seen another IDE come close to Eclipse in the level of integration and the usability of the tools. In all fairness, Java receives a lot of commercial attention, which is mostly why Eclipse is so mature. On the other hand, when people say that their favorite text editor X is a nicer IDE, I seriously question whether they have used Eclipse in a production environment.

### featureful

"What you're missing is that some of us actually need and use all of those features."

I guess the point of this post is indeed to discuss if we really need all such "features", like xml configuration files, xml project build scripting, xml, xml and more xml. And then some UML and tons of java.

You need them because you're forced to use them. What if there are other ways out of this mess? Isn't that what the author is questioning?

"I find particularly productive are the auto-complete, which is type-sensitive (it will suggest method completions that return a matching type first), inline JavaDocs"

I won't argue that.

"integrated SVN (I can look at my project tree and see exactly which files are dirty, what version they are, who last updated them, and when they were last updated, without typing a single command),"

Yes, just make sure that view is always open. Otherwise, yes, you'll need a few commands to bring it on.

"ant is more powerful than make"

It's also incredibly annoying, complex and much more verbose than the simple "target: deps\n\t rules" of make. It's in absolute conformity with Java's design: to be as redundant and verbose as possible, employing the state-of-the-art in current overblown trends... I think Java thinks of itself as a frustrated portable OS.

"integrated testing (one-button JUnit execution and nice IDE reporting, as well as interactive test-fail integration)"

One-button execution after you've written the tests and made them into the ant build. Yes, at least 1 hour until you can click the button...

"integrated profiling (with TPTP, I get a nice call graph, call and time statistics, memory usage, method coverage, etc.)"

"Premature optimization is the root of all evil"
-- Donald Knuth

But being aware of it all of the time is the hallmark of C/C++/Java programmers...

"and completely configurable windows (you can dock them anywhere in any combination you prefer, and arrange them in sets via perspectives)."

Can i just hide them all and have the editor open?

### "What you're missing is tha

"What you're missing is that some of us actually need and use all of those features."

I guess the point of this post is indeed to discuss if we really need all such "features", like xml configuration files, xml project build scripting, xml, xml and more xml. And then some UML and tons of java.

You need them because you're forced to use them. What if there are other ways out of this mess? Isn't that what the author is questioning?

There are many useful features in Eclipse unrelated to either xml or uml. I use a call graph display (shows where a method is called, recursively), the search results tree, hierarchy display (showing descendents and ancestors of some type), and the class outline. I don't need those, but they are really nice and I miss something similar when I'm programming in Haskell.

What I really need is the integrated refactoring support: being able to rename a method and have the appropriate changes propagated through the project is priceless.

"integrated testing (one-button JUnit execution and nice IDE reporting, as well as interactive test-fail integration)"

One-button execution after you've written the tests and made them into the ant build. Yes, at least 1 hour until you can click the button...

Nope. You just need to write the test class and ask Eclipse to run it as a JUnit test case. No need for Ant. Actually I never bothered to learn Ant until a few months ago, and I've been using TDD in Eclipse since 2002.

"and completely configurable windows (you can dock them anywhere in any combination you prefer, and arrange them in sets via perspectives)."

Can i just hide them all and have the editor open?

Yes. You can maximize any view (not only source) and hide all the others. I use it often.

"integrated SVN (I can look at my project tree and see exactly which files are dirty, what version they are, who last updated them, and when they were last updated, without typing a single command),"

Yes, just make sure that view is always open. Otherwise, yes, you'll need a few commands to bring it on.

Actually you don't need it. You can either have the view hidden (if you have other maximized) or out of range in a tab of views. I have 11 views in my regular Eclipse workspace, at most four open at any given time. Six views are tabbed in the bottom (I see only one of those any given time), two views tabbed in the right, source-code gets the rest of the workspace and two views are minimized and open only when they're needed (like drop-down menus).

An IDE (Eclipse in particular) isn't just a glorified text editor with a gui builder and lots of wizards. It maximizes the amount of context sensitive, useful, information about any given piece of code: from syntax coloring (actually it can use semantic info too: I have abstract method invocations in a different color from regular ones) to smart tooltips (javadocs and signature of methods in focus), each feature gives better context to the code, reducing time needed to understand it.

### What's an IDE

People have spent lots of time developing IDE-like features for Vim and Emacs. I've always found the results rather clumsy, though, just because at their heart they are both console programs. And what I mean by that is that whatever GUI functionality they have is hampered by the fact that, for the most part, they have to be compatible with the console versions of these programs.

### Clumsiness because of text

I agree I've found the attempts at IDEs within Emacs (I haven't tried Vim) to be clumsy most of the time - but to me, the problem isn't the non-GUIness, it's the fact that the poor thing is trying to use regular-expressions over flat text to extract meaning from code files. IDEs like Eclipse (or Smalltalk, for that matter) have a proper code database to work from. If Emacs parsed code, you'd have a much stronger base to work from. I guess my position is that it's not the user-interface at fault here, it's the underlying model of the code/program.

### code parsing in Emacs

In comment-11696, Tony Garnock-Jones wrote:

If Emacs parsed code, you'd have a much stronger base to work from.

There's no reason why it can't. AFAIK, JDEE (Java Development Environment for Emacs) uses Semantic which "is a program for Emacs which includes, at its core, a lexer, and a compiler compiler (bovinator)."

James Clark's nXML contains a full-blown XML parser (and validator) implemented entirely in Emacs Lisp. (The latest version [gzipped tarball, 433K] has about 17,000 lines worth of *.el files.)

### Indeed, but it's hardly a rea

Indeed, but it's hardly a reason to promote it as a criterion for choosing one programming language over another.

...

Java has closures, continuations are not needed in web apps, and open classes, mixins and operator overloading are not needed anyway...

It's all syntax on top of machine code. Why do we need classes, interfaces, garbage collection?

But I cannot see how a team can manage a big Ruby web app with vi or emacs only: IDEs are necessary for any language.

IDEs are nice, but hardly a requirement to program in a language.

### IDEs

IDEs are nice, but hardly a requirement to program in a language.

Tell that to my students... If their lives were at stake, they couldn't compile a Java program without a working installation of BlueJ -- at least, I think it's BlueJ, but it might be some other IDE. Similarly, at the time, I couldn't have compiled a Turbo Pascal program without the Turbo Vision IDE.

My point being that, for most people outside of the PL community, the IDE *is* part of the language.

### Delphi

In the case of Delphi, the IDE literally *is* part of the language. That's because language researchers have overlooked a very popular yet theoretically uninteresting compilation phase: "design time". This phase occurs before compile time and runtime, but is a critical phase in quickly building GUI apps. The reason many academic language designers look with disdain on language features such as properties is that many of them have never been forced to build extensive GUIs, and they lack an appreciation of how much work it is, and how sweet a little bit of syntactic sugar like properties can make it.

The interesting fact is that design time is a metaprogramming phase. The GUI builder generates code from UI actions. And from a parsing perspective, two-way design time tools can be quite challenging. It's somewhat remarkable that tools from Delphi to C++Builder to WindowBuilder have been created with virtually no attention from academia, despite the interesting theoretical challenges involved.

Anyway, my point is that you simply cannot say that you are "developing in Delphi" unless you have the full functionality of the IDE available, which makes the IDE a de facto part of the language.

### Not so surprising

It's somewhat remarkable that tools from Delphi to C++Builder to WindowBuilder have been created with virtually no attention from academia, despite the interesting theoretical challenges involved.

Actually, I believe that academia was quite interested. Just, not the PL community, but rather the interface community.

What kind of "theoretical challenges" would you consider involved in, say, Delphi development?

On a related note, I half-remember witnessing a demonstration of an experimental multi-stage programming IDE in which the IDE interactively completed the program by adding the appropriate structure whenever it could guess it. I don't remember the details behind this, but it really looked interesting.

(edited:) Of course, the name of the project was Epigram.

### Code Generation and Parsing

Delphi isn't quite as interesting as C++Builder and Java, because all the form-building code hides in a DFM, which has its own language. However, it still has two-way editing, which essentially makes the form builder an integrated compiler/decompiler. It "compiles to" the GUI representation, and "decompiles to" the DFM representation. I think this would be theoretically interesting because most language systems are only designed to be transformed in one direction. Yet two-way GUI builders have to have bidirectional translation with good performance.

When you start talking about C++Builder and Java, it gets much more interesting, because the GUI building code occurs in the source itself, and only hardwired property values end up in an external file. So now to make it two-way, the IDE has to parse a source file with other, non-GUI building code intermixed and build a GUI representation for it, even if the code is not in the form in which it generated it. In a way, the IDE has to act as a limited integrated compiler/VM without getting confused by the code that isn't directly relevant to building the GUI.

If this doesn't sound hard, consider that some IDEs (like NetBeans) go the easy route and simply don't let the user modify the generated code. Considering how easy it is to confuse most two-way GUI builders, it's apparent that the systems are somewhat brittle and could benefit from more rigorous design principles. I think some formalization of the concept of design-time would help. However, since I'm not aware of any theory of design-time programming, I can't really even guess what clever people would come up with.
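The two-way ("compile to GUI, decompile to text") idea above can be sketched in miniature. Below is a toy Python serializer for one direction only: a widget tree rendered to a DFM-like textual form. The tuple shape and property syntax are invented for illustration; the inverse parse (text back to tree) is the "decompile" direction that makes real GUI builders hard.

```python
# Toy sketch of one direction of two-way editing: serializing a widget
# tree to a DFM-like textual form. Everything here is invented for
# illustration; real IDEs handle far more (events, inheritance, layout).

def to_dfm(widget, indent=0):
    """Serialize a (name, properties, children) tuple to DFM-like text."""
    name, props, children = widget
    pad = "  " * indent
    lines = [pad + "object " + name]
    for key in sorted(props):
        lines.append(pad + "  " + key + " = " + repr(props[key]))
    for child in children:
        lines.append(to_dfm(child, indent + 1))
    lines.append(pad + "end")
    return "\n".join(lines)

form = ("Form1", {"Caption": "Hello"}, [("Button1", {"Left": 8}, [])])
print(to_dfm(form))
```

A real two-way builder would also need the parser for this text, plus a way to re-merge user edits—which is exactly the brittleness discussed above.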

### Scary

OK, it looks not only hard, but downright suicidal. I mean, I can imagine doing this in Smalltalk or Python (not that I would be able to do it myself), thanks to the run-time reflection of those languages.

In C++? No way I'd even try.

### David, I'd argue that Delphi'

David, I'd argue that Delphi's use of the DFM file makes it _more_ interesting...Delphi source code is/was uncluttered by tons of GUI-related information, poorly translated into Object Pascal. The two-way editing would place an appropriate variable declaration into your class (along with inheritance and so forth), but beyond that your pascal code contained only your own logic.

Delphi's DFM files did, ten years ago, what XML-based user interface files are struggling to do today -- encapsulate much of the clutter and mismatch between a language for human coding and a language for layout and connection. They are not the same.

### Sounds like resource files on the Mac

Delphi's DFM files did, ten years ago, what XML-based user interface files are struggling to do today -- encapsulate much of the clutter and mismatch between a language for human coding and a language for layout and connection. They are not the same.

I'm not familiar with DFM, but the description reminds me of the resource files used in the classic Mac OS for GUI description (among other things). There was both a visual editor and a domain-specific language, Rez, for generating them.

In NextStep and Mac OS X, the Interface Builder creates "nib" files which are essentially serialized GUI objects. Supposedly, this method is less buggy because the visual GUI editor creates data instead of code.

### strong integration

Yes, Delphi users are very tied to the IDE. It was made that way with this very purpose. You could be crazy enough to generate all the GUI programmatically and just apply the compiler via the command line, but it's not worth the trouble. Why? Because ObjectPascal is not expressive enough to handle nice use cases of programmatic GUI creation for custom data, like templates or something.

I'm talking GUI creation along the lines of Tcl/Tk, where it's incredibly easy to build custom GUIs for repetitive data. It's all during run-time, but far more flexible.

It's not impossible to do in Java or ObjectPascal, just not worth it, because that's why they're selling you the IDE and because the languages alone won't help.

### Really ?

I personally prefer hand-coding the GUI, whether in the language itself or in some DSL. Of course, for rapid prototyping, using a GUI designer is nice, but not much more so than a good drawing program, in my experience.

### There is a certain minimum that we need...

...to become really productive. After a certain point, additional features do not offer any real increase in productivity.

It is like throwing programmers at a project: after a certain point, adding more programmers will do the project more harm than good.

### Multiple Assignment

Multiple assignment and tuple unpacking in Python makes it simple for a function to return multiple values.

Instead of declaring a function like this:

 void divmod(int num, int denom, int* div, int* mod)

you can do this in Python:

 def divmod(num, denom):
     return num // denom, num % denom

 # example usage, which is arguably clearer
 my_div, my_mod = divmod(25, 10)

### 3 line snippet

I get a little tired of "examples" posted that use the worst possible, most verbose Java, and then categorically state that the dynamic language is just oh so much better. If I have to read one more time that the "main" declaration is soul-destroying overhead, I'm going to throw up. :)
int x1 = 0, x2 = 1;
for (int i = 0; i < 20; i++)
System.out.println(i % 2 == 1 ? (x1 += x2) : (x2 += x1));

Yeah, I know, not quite as readable. But for Joe Schmo, that multiple assignment business isn't exactly inherently obvious either -- he's used to reading the comma as something else.

But all of this is beside the point. Scala (and, I'm sure, other languages) does something that no dynamic language I'm aware of does -- automatic production of views (translation of one type to another). Sure, everybody does coercion of int to double, but Scala allows you to go from Person to Address automatically, without having Person inherit or implement Address. Views are a means of specifying common navigational patterns in a rigorous way, such that the compiler can automatically get you from "here" to "there".
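For readers unfamiliar with the idea, here is a rough Python sketch of the concept only: registered conversion functions let code obtain an Address from a Person without inheritance. This is not Scala's implicit-conversion machinery (which the compiler applies automatically); every name below is invented.

```python
# Concept sketch of a "view": a registry of conversion functions that
# lets code move from one type to another without inheritance.
# NOT Scala's actual mechanism -- all names are invented.

views = {}

def view(src, dst):
    """Decorator registering a conversion function from src to dst."""
    def register(f):
        views[(src, dst)] = f
        return f
    return register

class Address:
    def __init__(self, street):
        self.street = street

class Person:
    def __init__(self, name, address):
        self.name, self.address = name, address

@view(Person, Address)
def person_to_address(p):
    return p.address

def as_(obj, dst):
    """Apply the registered view to get from type(obj) to dst."""
    return views[(type(obj), dst)](obj)

p = Person("Ann", Address("Main St"))
print(as_(p, Address).street)  # Main St
```

In Scala the compiler inserts the conversion for you whenever a Person appears where an Address is expected; here the lookup is explicit.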

Hey, there are parts of Java that irritate the heck out of me. In general, anywhere you have to create type-specific methods that are cookie-cutter copies (for ANY reason, including performance), you're dealing with a failure of the language and/or compiler.

As far as I'm concerned programming languages in general just seem to be remarkably insensitive to the shape of the data they operate on (with the exception of the APLs of the world). Consider:

x = y func z

If func is +, and y and z are numbers, then this makes lots of sense. If y and z are vectors of numbers, APL will make sense of it, but Java and our new super-languages (like Ruby) will scratch their heads in amazement, dumbfounded at the concept. What if func is any arbitrary function? There aren't all that many common possibilities for combining the two -- foldl, foldr, each-left, each-right, etc. Language designers should make a choice about defaults for these, then provide simplified syntax for getting to the other ones.

> 1 2 3 + 4 5 6
5 7 9
> 1 2 3 /+ 4 5 6
1 2 3
> 1 2 3 +\: 4 5 6
(5 6 7
6 7 8
7 8 9)
> 1 + 3
4
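The APL-style combination rules shown above can be approximated in Python. In this sketch, `each` is the default elementwise combination and `each_left` the variant that pairs each left element with the whole right vector; both names are invented.

```python
# Python approximations of the APL-style combination rules shown above.
# "each" and "each_left" are invented names for this sketch.

def each(f, xs, ys):
    """Combine equal-length vectors elementwise (APL's sensible default)."""
    return [f(x, y) for x, y in zip(xs, ys)]

def each_left(f, xs, ys):
    """For each x on the left, combine against every y on the right."""
    return [[f(x, y) for y in ys] for x in xs]

print(each(lambda x, y: x + y, [1, 2, 3], [4, 5, 6]))
# [5, 7, 9]
print(each_left(lambda x, y: x + y, [1, 2, 3], [4, 5, 6]))
# [[5, 6, 7], [6, 7, 8], [7, 8, 9]]
```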

Long story short -- the big missing piece in the languages of today is the notion of function modifier, with which the programmer can tell the compiler to use something other than a sensible default for combining functions and arguments of various shapes. There's no richness for selection and shape-sensitive combination of functions beyond polymorphism and type matching; something that lets me write this:
1 2 3 (+;*;-) 4 5 6
> ((5 7 9)(4 10 18)(-3 -3 -3))
1 2 3 (+ -> (puts "added:"; puts); * -> (puts "multiplied:"; puts)) 4 5 6
added:(5 7 9) multiplied:(4 10 18)

It amazes me that modern programming languages essentially say nothing about how to combine collections. Or maybe there is something, and an LtU reader will conveniently provide links :)
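The hypothetical `1 2 3 (+;*;-) 4 5 6` syntax above can at least be emulated as a library function. Here is a minimal Python sketch; `apply_each` is an invented name, not a standard function.

```python
# Sketch of the function-list modifier wished for above:
# 1 2 3 (+;*;-) 4 5 6 becomes apply_each over a list of operators.
import operator

def apply_each(fs, xs, ys):
    """Apply each function in fs elementwise across xs and ys."""
    return [[f(x, y) for x, y in zip(xs, ys)] for f in fs]

print(apply_each([operator.add, operator.mul, operator.sub],
                 [1, 2, 3], [4, 5, 6]))
# [[5, 7, 9], [4, 10, 18], [-3, -3, -3]]
```

What the post asks for, of course, is this behavior as syntax with a compiler-chosen default, not a hand-rolled helper.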

### zipWith?

Maybe not exactly what you are looking for, but the zipWith family might be close...

main = do
  print $ unzip3 $ zipWith (\x y -> (x+y, x*y, x-y)) [1,2,3] [4,5,6]
  let (added, mult) = unzip $ zipWith (\x y -> (x+y, x*y)) [1,2,3] [4,5,6]
  putStrLn $ "added: " ++ show added ++ " multiplied: " ++ show mult


...and of course, depending on your needs, you could always roll your own...

main = print $ parallel_apply [(+),(*),(-)] [1,2,3] [4,5,6]

parallel_apply [] _ _ = []
parallel_apply (f:fs) xs ys = (zipWith f xs ys) : (parallel_apply fs xs ys)

### What's modern?

I think the basic function modifiers are just that: fold, map, scan, fst, snd, or cartesian product with >>= or a list comprehension. I'd use those directly in Haskell. For that matter, what about list comprehensions? Python has those, and that makes it easy to do most of what you describe. I agree, more languages should do this. Even list comps don't make it easy to do useful things like:

 swing :: (((a -> b) -> b) -> c -> d) -> c -> a -> d
 swing f = flip (f . flip ($))

More often written as swing = (.).(.), this combinator switches the place of the data and the function.

Personally, I'd like to find a comprehensive collection of basic traversal patterns like map, fold, scan, fst, and swing. Any suggestions?

### swing?

I'm still pretty confused as to what swing does, but at least one of your definitions is wrong:

Prelude> let swing = (.).(.)
Prelude> :t swing
swing :: (b -> c) -> (a -> a1 -> b) -> a -> a1 -> c
Prelude> let swing f = flip (f . flip ($))
Prelude> :t swing
swing :: (((a -> b1) -> b1) -> b -> c) -> b -> a -> c


I don't know which is wrong though, since I don't know what it's supposed to do.

### swing, by Cale Gibbard

Cale Gibbard

A useful little higher order function. Some examples of use:

swing map :: forall a b. [a -> b] -> a -> [b]
swing any :: forall a. [a -> Bool] -> a -> Bool
swing foldr :: forall a b. b -> a -> [a -> b -> b] -> b
swing zipWith :: forall a b c. [a -> b -> c] -> a -> [b] -> [c]
swing find :: forall a. [a -> Bool] -> a -> Maybe (a -> Bool)
"swing find" applies each of the predicates to the given value,
returning the first predicate which succeeds, if any
swing partition :: forall a. [a -> Bool] -> a -> ([a -> Bool], [a -> Bool])


As you can tell, Cale Gibbard discovered swing. I'd like to hear about any other cute but little known combinators like this one.

### VarArgs

I definitely agree that APL's ability to deal sensibly with larger chunks of data is a good thing, but it's by no means unique to APL. In lisp for instance you can do (+ 1 2 4 8) and get 15. The trick is that (in lisp) "+" is a function defined to operate over any number of numbers. This feature is supported in many languages by allowing functions to be defined for arbitrary numbers of arguments--something which you could do even in C (using varargs)--but which seems to have fallen out of favor in many recent languages.
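The variadic `+` described above—Lisp's `(+ 1 2 4 8)`—can be sketched in Python with `*args`:

```python
# Variadic addition, like Lisp's (+ 1 2 4 8), using Python's *args.

def plus(*nums):
    """Sum any number of arguments, like Lisp's variadic +."""
    total = 0
    for n in nums:
        total += n
    return total

print(plus(1, 2, 4, 8))  # 15
```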

### "modern" language snippets.

In J (http://www.jsoftware.com) one can do:

1 2 3 (+;*;-) 4 5 6
+-----+-------+--------+
|5 7 9|4 10 18|_3 _3 _3|
+-----+-------+--------+

This has been possible since 1991.

### CLU (from MIT) has had multiple value return for a long time

Circa 1978 -- it was influenced by Lisp and SIMULA. I developed DEC VAX applications in it for years before I learned C.

http://plg.uwaterloo.ca/~rgesteve/cforall/multiple.html

### I feel the urge...

To bring up this quote (from the LtU quotes page, of course):

I have reaffirmed a long-standing and strongly held view: Language comparisons are rarely meaningful and even less often fair. A good comparison of major programming languages requires more effort than most people are willing to spend, experience in a wide range of application areas, a rigid maintenance of a detached and impartial point of view, and a sense of fairness. -- Bjarne Stroustrup, The Design and Evolution of C++

### Mr. Stroustrup, meet Mr. Godwin

Indeed. And yet, somehow, LtU seems to suffer from the PLT version of Godwin's law:

As an LtU thread grows longer, the probability of a language pissing match occurring approaches 1. -- with apologies to Mike Godwin

### Actually this phenomenon is q

Actually this phenomenon is quite recent, and very unfortunate.

### More likely ST vs DT

As an LtU thread grows longer, the probability of a static typing vs. dynamic typing debate occurring approaches 1. -- with apologies to Mike Godwin and Allan McInnes

### Yokomizo's corollary to McInnes's reframing of Godwin's law

True :-) I was tempted to mention that as well. But I was trying to stay on point with respect to Ehud's original comment. Yokomizo's corollary is perhaps the more general observation though: in addition to occurring by themselves, static-vs-dynamic typing debates often seem to be the ultimate end-point of many of the language pissing matches.

As Ehud mentioned language pissing matches are a recent fad in LtU, while ST vs. DT are one of our most important traditions :P

P.S.: Such an honor, I'm part of the Godwin Law's meme =)

:)

### Meme growth

In any forum, as any thread grows longer, the probability of someone coining a local equivalent of Godwin's Law approaches 1. :-)

### Midnight haiku

As any thread grows, its focus blurs.

As any idea gets generalized, its depth dwindles.

Off-topics erupt.

### Unfortunately the winner of a

Unfortunately the winner of any language pissing contest is the language which actually gets used, in which spirit I offer Read's Corollary to Greenspun's Tenth Rule of Programming:

Any sufficiently complicated Common Lisp or Scheme program will eventually be rewritten in C++, Java, or Python.

### i'm currently writing code in

I'm currently writing code in Java that implements the controller in an MVC setup with a web page as the view. It supports arbitrary structures, imposing no constraints except that they follow Java bean conventions (which allow you to address contents via a tree of accessor functions, so model.contacts.addresses[2].street is the street of the third address in "contacts"). New elements (a new address, say) are constructed dynamically, with the type inferred from introspection, and constraints on exactly what operations are permitted on particular components are provided via mixins, added with AspectJ.
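The bean-path navigation described above can be sketched in Python with introspection. The `resolve` helper and the sample classes below are invented for illustration, not the poster's actual code.

```python
# Hedged sketch of bean-path navigation: resolving a dotted accessor
# path like "contacts.addresses[2].street" against nested objects via
# introspection. All names here are invented for illustration.
import re

def resolve(obj, path):
    """Follow each "name" or "name[index]" segment of the path."""
    for name, index in re.findall(r"(\w+)(?:\[(\d+)\])?", path):
        obj = getattr(obj, name)
        if index:
            obj = obj[int(index)]
    return obj

class Address:
    def __init__(self, street):
        self.street = street

class Contacts:
    def __init__(self, addresses):
        self.addresses = addresses

class Model:
    def __init__(self):
        self.contacts = Contacts(
            [Address("First St"), Address("Second St"), Address("Third St")])

print(resolve(Model(), "contacts.addresses[2].street"))  # Third St
```

Java bean property access works analogously, via `getContacts()`-style accessors discovered through reflection rather than `getattr`.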

the code is pretty neat, understandable, and reasonably concise.
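For concreteness, here is a minimal sketch of the kind of reflective bean-path resolution described above; the names and bean types are illustrative, not the poster's actual code:

```java
import java.lang.reflect.Method;

public class BeanPath {
    // Resolve a dotted path like "address.street" against a bean graph
    // by looking up the conventional getX() accessor at each step via
    // reflection and invoking it on the current object.
    public static Object resolve(Object root, String path) throws Exception {
        Object current = root;
        for (String part : path.split("\\.")) {
            String getter = "get"
                    + Character.toUpperCase(part.charAt(0))
                    + part.substring(1);
            Method m = current.getClass().getMethod(getter);
            current = m.invoke(current);
        }
        return current;
    }

    // Illustrative bean types following the getX() convention.
    public static class Address {
        public String getStreet() { return "Main St"; }
    }

    public static class Contact {
        public Address getAddress() { return new Address(); }
    }

    public static void main(String[] args) throws Exception {
        System.out.println(resolve(new Contact(), "address.street"));
    }
}
```

Note that nothing here is checked statically: a misspelled path segment or a getter returning an unexpected type only fails at run time, which is the trade-off discussed below.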

so i'm left wondering what a "dynamic" language would do to improve things.

ok, i'm not really wondering. some improvements are obvious. it would be nice if the whole system were more elegant/compact (you have to learn quite a lot to do this - there's a lack of consistency in places). there are some rough edges i've glossed over. but a lot of what a "dynamic language" buys you is already possible.

incidentally, if people are tired of some of the sniping at java made around here at times, i just found this site which looks quite interesting.

### Just for fun...

so i'm left wondering what a "dynamic" language would do to improve things.

This is definitely a fair point. I have a couple of comments, not really because I disagree, but just as food for thought...

First, given the approach you've taken, one might as easily ask, "What is Java buying you?" It appears that you've gone far enough down the reflective road to Neverland that the Java type system isn't going to provide much safety (or are all those bean conventions and aspect-oriented mixins statically checked?), and you mentioned a lack of consistency and a few rough edges... So what's the argument in favor of Java here?

Second, there are a few pretty basic assumptions in the JVM about how you'll use reflection, run-time class generation, etc... For instance, I learned just a few weeks ago that Java's memory model places class definitions in the "permanent" GC generation, which has a whole host of implications related to performance tuning (and even correctness) of programs that generate large numbers of novel classes at run-time. I'm not sure whether your application would be affected by that, and there are ways to tune this, but the point is that the JVM is designed and optimized for a particular usage pattern, and applications like yours are pretty near the boundaries of that pattern, if not beyond.
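For what it's worth, here is a minimal sketch (not from the article) of one common way novel classes come into existence at run time on the JVM, via java.lang.reflect.Proxy:

```java
import java.lang.reflect.Proxy;

public class ProxyDemo {
    public interface Greeter {
        String greet(String name);
    }

    // Proxy.newProxyInstance may generate a brand-new class at run
    // time for the given interface set. On HotSpot JVMs of this era,
    // generated class definitions land in the "permanent" GC
    // generation, which is why programs minting many novel classes
    // need special GC tuning (and can hit its limits).
    public static Greeter makeGreeter() {
        return (Greeter) Proxy.newProxyInstance(
                ProxyDemo.class.getClassLoader(),
                new Class<?>[] { Greeter.class },
                (proxy, method, args) -> "hello, " + args[0]);
    }

    public static void main(String[] args) {
        System.out.println(makeGreeter().greet("world"));
    }
}
```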

I don't want to become (or appear to become) Mr. Dynamic here, because that's not really how I feel. But like Anton (if I may flatter myself) I do tend to argue on whichever side is necessary to maintain a balanced perspective. (I hope that I don't end up merely arguing on whichever side is necessary to maintain an endless debate.)

### Well...

Andrew: sniping at java made around here at times..

Matt: I hope that I don't end up merely arguing on whichever side is necessary to maintain an endless debate.

Personally I feel that LtU is dying, and the reason is that we have become a debating society. The current thread is one of the worst examples, to my mind.

Clarification: Lest it be misunderstood, I didn't quote Andrew and Matt because their posts are problematic. They simply point out something that I feel is problematic.

### I agree, back to why things are the way they are?

Andrew made a relevant comment once that I treasure:

[discussion of the merits of lazy, eager, macros snipped]

The idea isn't to blunder into a language looking for evidence to support a bunch of preconceived misconceptions. the idea is to understand why things are the way they are.

macros have advantages and disadvantages. graham lists them in that same chapter. understanding those lets you choose the right tool for the job. that's why it's a good book. if you're not interested in choosing the right tool, but only in proving that something is "better" or "worse" than something else,
then why waste time reading books? the world is complex and the more you understand the less certain things become. if you prefer simple black and white then it's better not to learn...

(maybe i should add that i don't use lisp;
that normally i find it's lisp users who are annoyingly dismissive of other languages; that i, too, have criticised things before understanding them at times;
also it's monday morning and i'm still on my first cup of coffee)

-- andrew cooke

In place of macros, feel free to substitute typing (dynamic, static, leavened, or unleavened), procedural, functional, mutable, immutable, or anything else.
Everything has good and bad points. I'm here to understand 'why things are the way they are' rather than proving better or worse. I really want to learn new stuff. I don't want to snipe at anything or anybody. Can we get back to our regularly scheduled learning about why things are the way they are?

### i'm not creating classes dyna

i'm not creating classes dynamically, just writing generic code, so the worry about GC is not a problem (although that's just luck - i didn't have a clue that limitation existed).

on the other question, about what the java type system buys me - what i'd like to say is happening is that the untyped decisions are well defined, isolated, and pushed up to the configuration layer (in "untyped" xml). the hope being that this is sufficiently high level that the types are fairly obvious (it's the "business" layer, and hopefully an operation like "validateNewUser" is obviously applicable to a user rather than a calendar date, say).

however: (1) that's a rather retrospective justification, and i'm not convinced i've achieved it; (2) it would be even better if the business logic were strongly typed too!

assuming i ever finish this thing, i'd like to write it up - i feel i've learnt a lot and there's not much out there (that i can find) that addresses building relatively complex web apps (ok, there's absolutely piles of stuff on exactly that, but little that seems to talk about things i'm interested in). and if that ever happens, this is something i'd really like to look back on and assess more carefully.

certainly i am seeing the occasional runtime type error. unfortunately.

### kind of lost the thread there

kind of lost the thread there. so i'm not sure what this has to do with "dynamic" languages (but then what are they etc etc). as ever, what i'd prefer isn't removing types, but better types. this sort of code could be fully type checked and the configuration could be a nice, typed, dsl.

in my dreams.

thinking more (as you can see, i'm just hurrying along trying to get this done, and haven't thought much about why i'm doing what i'm doing) a lot of this is driven by lack of type verification in xml configs. in other words those untyped "moments" that are pushed up to the business/config/whatever layer could be typed there too.

and the blame for that (apart from me or "java") is that i rely heavily on the spring toolkit.

and why does spring rely on xml? perhaps because java isn't the kind of "dynamic" language in which you want to configure/script your application (and xml is?! ;o)

well, that was fun. back to coding (this is my pension plan, if i'm lucky).

### A Response to Beyond Java By Bruce Eckel

A response by Bruce Eckel raises a few points, and a few misconceptions. I haven't read the book, so I can't comment on his comments. However, there are a few things of interest to LtU readers.

Let's get the big error out of the way first: Python 2.5 coroutines do not allow straightforward continuations, by my reading of the PEP. The reason is that coroutines save only the latest continuation, and hence you can't resume prior continuations, which is necessary, for example, to deal with the pesky back button.

He also mischaracterises Martin Fowler's argument about humane interfaces. This, I think, is of interest to LtU as it is about a core issue of language design. Martin's argument (and here I hope to avoid messing it up myself) is that interfaces should be designed to make the common cases easy. This is in direct contrast to the usual orthogonal interface, where nothing is provided that can be achieved another way. Having programmed a lot in an orthogonal language (Scheme) I can say it makes for a nice object of study but sucks for actually getting work done (which is why I use a humane implementation of said language). So I agree that interfaces, within reason, should make the common cases easy. Thoughts? Oh, and if you respond, please address the issue not the example, which most respondents to Martin's post have failed to do.
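To keep the issue concrete without rehashing the example, here is a hedged sketch of the contrast in Java, using a hypothetical `last` helper in the spirit of Fowler's Ruby `Array#last` illustration (the names are mine, not from either post):

```java
import java.util.List;

public class Humane {
    // Orthogonal/minimal style: the caller composes primitives by
    // hand every time the common case comes up.
    static <T> T lastTheHardWay(List<T> xs) {
        return xs.get(xs.size() - 1);
    }

    // "Humane" style: the common case gets its own named operation,
    // even though it adds nothing you couldn't do another way.
    static <T> T last(List<T> xs) {
        if (xs.isEmpty()) {
            throw new IllegalArgumentException("last of empty list");
        }
        return xs.get(xs.size() - 1);
    }

    public static void main(String[] args) {
        System.out.println(last(List.of("a", "b", "c")));
    }
}
```

The semantic content is identical; the design question is whether the named common case earns its place in the interface.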

### Metahumaneness

Humane interfaces start getting in the way as soon as you start using the language for metaprogramming. More than once I cursed "Java's minimal interface" and longed for something really minimalistic. I shudder to imagine something more than three times as complex.

Maybe it's time to bring up the topic of kernel language (either as in AKL, or as explained in CTM book)? So if you have to cover all functionality in your metaprogram, you use the minimalistic kernel. If you are programming, um, manually - you use all the syntactic flavourings and seasonings you fancy.

Hmm, sounds almost like macros in Scheme :-)

### Eightball says...

I have a friend who does a lot of Java work -- he never considers Java, not even with the libraries, as a language. To him, the language starts with Eclipse, intellisense and automated refactoring. In a sense, raw Java is the kernel language, and Eclipse is both the syntactic sugar and the UI to use it.

### Humane interface on top of orthogonal interface

So I agree that interfaces, within reason, should make the common cases easy. Thoughts?

An orthogonal interface is nice if you have to implement the library, or if you want to define its semantics. For easier digestion you can put some sugar coating on the orthogonal library. Now, since this is library design, there's still room for endless discussions about what goes where, but I think the separation is useful.

### the issue not the example

If the example doesn't stand up, how much of an issue really exists?

### Eckel's defense of Python

That's what his article over at Artima is really about. There's a video of Bruce speaking at Berkeley about Java 5, and he absolutely trashes it. And it's no secret that he's a Python evangelist these days.

### Do you know anything concrete

Do you know anything concrete about the video, or is it just gossip? It is of course clear that Eckel is critical of Java generics, and he has pursued an in-depth inquiry into this matter over the last two years, but this is hardly new. His blog has documented his struggle with Java 5 for quite a long time.

### The video is not a myth

It's out there. I've watched it. Check his web site or, as always, google is your friend.