Critiques

Design Patterns 15 Years Later: An Interview with Erich Gamma, Richard Helm, and Ralph Johnson

Larry O'Brien recently interviewed three of the Gang of Four about their seminal work on patterns. Larry teased the interview's readers for a while, but he eventually asked the pressing question about patterns that most language designers ask and debate ;) Here it is:

Larry: GoF came out relatively early in the ascent of OOP as the mainstream paradigm and, for better or worse, "patterns" seem to be associated with OO approaches. You even hear functional and dynamic advocates boasting that their languages "don't need" patterns. How do you respond to that?

Erich: Just as an aside, it is also easy to forget that we wrote design patterns before there was Java or C#.

Ralph: Some of those languages don't need some of the patterns in Design Patterns because their languages provide alternative ways of solving the problems. Our patterns are for languages between C++ and Smalltalk, which includes just about everything called "object-oriented," but they certainly are not for every programming language. I don't think anybody actually says that programmers in other languages don't need patterns; they just have a different set of patterns.

Erich: Design patterns eventually emerge for any language. Design déjà-vu is language neutral. While these experiences are not always captured as patterns, they do exist. The design principles for Erlang come to mind.

Larry: Where would a person go to learn about patterns for dynamic and functional languages? Who's making good contributions?

Ralph: If by "dynamic" you mean dynamic object-oriented languages like Smalltalk, Ruby or Python, then our patterns are applicable. Functional languages require different patterns, but I don't know who is working on them.
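Ralph's point that languages substitute features for patterns is easy to make concrete: with first-class functions, much of the Strategy pattern collapses into passing a function. Here is a minimal C++ sketch of the contrast (the names are illustrative, not from the interview):

    #include <cstdio>
    #include <functional>
    #include <iostream>
    #include <string>

    // Classic GoF Strategy: the varying behavior lives in a class hierarchy.
    struct Formatter {
        virtual ~Formatter() = default;
        virtual std::string format(int x) const = 0;
    };
    struct HexFormatter : Formatter {
        std::string format(int x) const override {
            char buf[16];
            std::snprintf(buf, sizeof buf, "0x%x", x);
            return buf;
        }
    };
    void printAll(const int* xs, int n, const Formatter& f) {
        for (int i = 0; i < n; ++i) std::cout << f.format(xs[i]) << '\n';
    }

    // With first-class functions, the "pattern" is just a parameter.
    void printAllFn(const int* xs, int n,
                    const std::function<std::string(int)>& format) {
        for (int i = 0; i < n; ++i) std::cout << format(xs[i]) << '\n';
    }

    int main() {
        int xs[] = {10, 255};
        printAll(xs, 2, HexFormatter{});
        printAllFn(xs, 2, [](int x) { return std::to_string(x); });
    }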

Note: At the end of the interview, Erich says that they tried refactoring the patterns into new categories in 2005. The draft breakdown he provides (accidentally?) drops Memento, Chain of Responsibility, Bridge, Adapter, and Observer.

As I said above, these are just notes in a draft state. Doing a refactoring without test cases is always dangerous...

UPDATE: The Gang of Four have an accompanying article for the interview that they wrote as a group. See A Look Back: Why We Wrote Design Patterns: Elements of Reusable Object-Oriented Software.

Phosphorous, The Popular Lisp

Joseph F. Miklojcik III, Phosphorous, The Popular Lisp.

We present Phosphorous; a programming language that draws on the power and elegance of traditional Lisps such as Common Lisp and Scheme, yet which brings those languages into the 21st century by ruthless application of our “popular is better” philosophy into all possible areas of programming language design.

Introduces the concept of the Gosling Tarpit, and presents a novel method for both breaking lexical scope (needed for popularity) and maintaining one's reputation as a language designer.

(via Chris Neukirchen)

A Computer-Generated Proof that P=NP

Doron Zeilberger announced yesterday that he has proven that P=NP.

Using 3000 hours of CPU time on a CRAY machine, we settle the notorious P vs. NP problem in the affirmative, by presenting a “polynomial” time algorithm for the NP-complete subset sum problem.

The paper is available here and his 98th Opinion is offered as commentary.

Scaling Type Inference

Coding Horror is a popular programming blog. A recent post concerns type inference in C#:

C# ... offers implicitly typed local variables. ... It's not dynamic typing, per se; C# is still very much a statically typed language. It's more of a compiler trick, a baby step toward a world of Static Typing Where Possible, and Dynamic Typing When Needed.

... I use implicit variable typing whenever and wherever it makes my code more concise. Anything that removes redundancy from our code should be aggressively pursued -- up to and including switching languages.

You might even say implicit variable typing is a gateway drug to more dynamically typed languages. And that's a good thing.

I think this post is interesting for a number of reasons, and the link to LtU is just the start. To begin with, the author seems confused about what “implicitly typed local variables” are, conflating local type inference (which they are) with dynamic typing (which they are not); many commenters suffer from the same confusion. Other commenters rightly note that the inferred type is not always the type the programmer wants (particularly important in the presence of subtyping), and that type inference can harm readability. I'm reminded of a recent discussion on the PLT Scheme mailing list on the merits of local and global type inference. The consensus there seems to be that while local type inference is useful, global inference is not.
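To make the distinction concrete, here is a sketch using C++'s auto, which plays the same role as C#'s var (the example is mine, not from the post): the type is inferred once, at compile time, and the inference picks the most specific type, which is not always the one you meant.

    #include <memory>

    struct Shape  { virtual ~Shape() = default; };
    struct Circle : Shape {};

    int main() {
        auto n = 42;     // n is statically an int: inference, not dynamic typing
        // n = "hello";  // rejected at compile time; n's type is fixed

        // The inferred type is the most specific one, which matters under
        // subtyping: c is a unique_ptr<Circle>, not a unique_ptr<Shape>.
        auto c = std::make_unique<Circle>();
        std::unique_ptr<Shape> s = std::make_unique<Circle>();  // explicit widening
        (void)c; (void)s;
    }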

So, wise people, what is the future of type inference? How useful is it really, especially when we look at type systems that go beyond what H-M can handle? How are we going to get working programmers to use it, and understand it? Do we need better tool support? Do we have any hope of better education for the average programmer?

Program Verification: The Very Idea

James H. Fetzer's Program Verification: The Very Idea (1988) is one of the two most frequently cited position papers on the subject of program verification. The other one is Social Processes and Proofs by De Millo, Lipton, and Perlis (1979), previously discussed on LtU. Fetzer's paper generated a lot of heated discussion, both in the subsequent issues of CACM and on Usenet.

It's not clear to me what all the fuss is about. Fetzer's main thesis seems pretty uncontroversial:

The notion of program verification appears to trade upon an equivocation. Algorithms, as logical structures, are appropriate subjects for deductive verification. Programs, as causal models of those structures, are not. The success of program verification as a generally applicable and completely reliable method for guaranteeing program performance is not even a theoretical possibility.

(See also part I, part II, and part III.)
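One way to make Fetzer's distinction concrete: a proof about the abstract algorithm can be impeccable while the executing program, a causal system with fixed-width arithmetic, a compiler, and hardware underneath it, still misbehaves. The classic binary-search midpoint bug is a convenient illustration (the sketch is mine, not Fetzer's):

    #include <climits>

    // Verified on paper over unbounded integers: lo <= mid <= hi always holds.
    // On a real machine with fixed-width int, lo + hi can overflow, and the
    // proof about the abstract algorithm says nothing about that behavior.
    int midpoint_naive(int lo, int hi) {
        return (lo + hi) / 2;      // undefined behavior when lo + hi > INT_MAX
    }

    // The causal artifact has to be repaired separately from the abstract proof:
    int midpoint_safe(int lo, int hi) {
        return lo + (hi - lo) / 2; // no overflow when 0 <= lo <= hi
    }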

April 1st special: The War of the Worlds

Conrad Barski has posted a sneak peek from his upcoming Lisp textbook/comic: Land of Lisp.

The first slides may seem unrelated, but boy does the message sting when you reach the ending...

FPers will be quick to note, of course, that this being April Fools' Day the whole thing is a joke and we can all go back to Haskell...

Social Processes and Proofs of Theorems and Programs

A paper that was mentioned in the discussion forum, by Richard A. De Millo, Richard J. Lipton, and Alan J. Perlis, 1979.
It is argued that formal verifications of programs, no matter how obtained, will not play the same key role in the development of computer science and software engineering as proofs do in mathematics. Furthermore, the absence of continuity, the inevitability of change, and the complexity of specification of significantly many real programs make the formal verification process difficult to justify and manage. It is felt that ease of formal verification should not dominate program language design.

Good Ideas, Through the Looking Glass

Niklaus Wirth. Good Ideas, Through the Looking Glass, IEEE Computer, Jan. 2006, pp. 56-68.

An entire potpourri of ideas is listed from the past decades of Computer Science and Computer Technology. Widely acclaimed at their time, many have lost in splendor and brilliance under today’s critical scrutiny. We try to find reasons. Some of the ideas are almost forgotten. But we believe that they are worth recalling, not the least because one must try to learn from the past, be it for the sake of progress, intellectual stimulation, or fun.

A personal look at some ideas, mostly from the field of programming languages. Some of Wirth's objections are amusing, some infuriating - and some I agree with...

LtU readers will obviously go directly to sections 4 (Programming Language Features) and 6 (Programming Paradigms). Here are a few choice quotes:

It has become fashionable to regard notation as a secondary issue depending purely on personal taste. This may partly be true; yet the choice of notation should not be considered an arbitrary matter. It has consequences, and it reveals the character of a language. [Wirth goes on to discuss = vs. == in C...]
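The consequence Wirth has in mind is the classic C-family slip where an intended comparison silently becomes an assignment; a small sketch (mine, not Wirth's):

    #include <iostream>

    int main() {
        int x = 0;
        // Meant: if (x == 1). Because = is assignment and an assignment is an
        // expression, this compiles, sets x to 1, and the branch is always taken.
        if (x = 1) {
            std::cout << "always taken\n";
        }
    }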

Enough has been said and written about this non-feature [goto] to convince almost everyone that it is a primary example of a bad idea. The designer of Pascal retained the goto statement (as well as the if statement without closing end symbol). Apparently he lacked the courage to break with convention and made wrong concessions to traditionalists. But that was in 1968. By now, almost everybody has understood the problem, but apparently not the designers of the latest commercial programming languages, such as C#.

The concept that languages serve to communicate between humans had been completely blended out, as apparently everyone could now define his own language on the fly. The high hopes, however, were soon damped by the difficulties encountered when trying to specify what these private constructions should mean. As a consequence, the intriguing idea of extensible languages faded away rather quickly.

LtU readers are also going to "enjoy" what Wirth has to say about functional programming...

(Thanks Tristram)

Machine Obstructed Proof

From the 1st informal Workshop on Mechanizing Metatheory at ICFP '06 comes Nick Benton's "Machine Obstructed Proof: How many months can it take to verify 30 assembly instructions?". It is a one-page paper, but it seems deserving of some notice.

Nick Benton offers a critique of Coq from the standpoint of an inexperienced user, although I am not sure I would really categorize Benton as "inexperienced". Some interesting quotes:

  • "...I have rarely felt as stupid and frustrated as I did during my first few weeks using Coq."
  • "Tactical theorem proving is like an extreme form of aspect-oriented programming. This is not A Good Thing...."
  • "Just having intermediate stages of the work in a computerized form...proved a major benefit."
  • "Automated proving is not just a slightly more fussy version of paper proving and neither...is it really like programming."
  • "...but the payoff really came the second time I used Coq: I was able to prove some elementary but delicate results...in just a day or so."

[On edit: moved to a story from a forum post. Sorry. - TM]

Is "post OO" just over?

While studying the conference program of the upcoming OOPSLA 2006, I discovered, under the category "essay", an author who has some quite critical things to say about AOP:

Aspect-oriented programming is discussed as a promising new technology. Like object-oriented programming, it is beginning to pervade all areas of software engineering. With its growing popularity, practitioners and academics alike are beginning to wonder whether they should start looking into it, or otherwise risk having missed an important development. The author of this essay finds that much of aspect-oriented programming's success seems to be based on the conception that it improves both modularity and the structure of code, while in fact, it actually works against the primary purposes of the two, namely independent development and understandability of programs. Not seeing any way of fixing this situation, he thinks the success of aspect-oriented programming to be paradoxical.

This is not just another internet rant about the latest PL hype: the author, Friedrich Steimann, has done interesting work on AOP before. In particular, his latest paper, on typed AOP:

AOP and the antinomy of the liar

but also his earlier award-winning critical review of AOP:

Domain models are aspect free
