Critiques

The Three Laws of Programming Language Design

Joe Armstrong (of Erlang fame), while reviewing Elixir (a Ruby-like language that compiles to the Erlang virtual machine), states his Three Laws of Programming Language Design.

  • What you get right nobody mentions.
  • What you get wrong, people bitch about.
  • What is difficult to understand you have to explain to people over and over again.

Some languages get some things so right that nobody ever bothers to mention them: they are right, they are beautiful, they are easy to understand.

The wrong stuff is a bitch. You boobed, but you are forgiven if the good stuff outweighs the bad. This is the stuff you want to remove later, but you can't, because of backwards compatibility and because some nitwit has written a zillion lines of code using all the bad stuff.

The difficult-to-understand stuff is a real bummer. You have to explain it over and over again until you're sick, and some people never get it; you have to write hundreds of mails and thousands of words explaining over and over again what this stuff means and why it is so. For a language designer, or author, this is a pain in the bottom.

Oleg: An argument against call/cc

Oleg provides various arguments against including call/cc as a language feature.

We argue against call/cc as a core language feature, as the distinguished control operation to implement natively, relegating all others to libraries. The primitive call/cc is a bad abstraction -- in various meanings of 'bad' shown below -- and its capture of the continuation of the whole program is not practically useful. The only reward for the hard work to capture the whole continuation efficiently is more hard work to get around the capture of the whole continuation. Both the users and the implementors are better served with a set of well-chosen control primitives of various degrees of generality with well thought-out interactions.
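
To make the flavor of the debate concrete, here is a minimal Haskell sketch (my illustration, not code from Oleg's article) of the classic use of callCC from Control.Monad.Cont: early exit. Note that the Cont monad's capture is already delimited by runCont, which hints at Oleg's point that what programmers actually want is almost always a delimited capture, not the continuation of the whole program.

    import Control.Monad.Cont (Cont, callCC, runCont)

    -- Early exit from a traversal: 'exit' is the captured continuation;
    -- invoking it abandons the rest of the search.
    findFirst :: (a -> Bool) -> [a] -> Maybe a
    findFirst p xs = runCont (callCC search) id
      where
        search exit = do
          mapM_ (\x -> if p x then exit (Just x) else pure ()) xs
          pure Nothing

    -- findFirst even [1, 3, 4, 5]  ==>  Just 4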

Google's "The Future of JavaScript" internal memo leaked

Note: I saw this on Sunday (9/11) but waited for it to go viral before posting it here.

A leaked Google memo, The Future of JavaScript, from November 2010 is being circulated around the Internet, outlining Google's supposed technical strategy for Web programming languages. Google plans to improve JavaScript, while also creating a competitor to JavaScript, Dart (ex-Dash), that it hopes will be the new lingua franca of the Web.

Ironically, I saw this leak via a Google Alert keyword search. It has propagated to at least GitHub, the DZone social network, The Register, and InformationWeek since Sunday.

The Trouble with Erlang

Tony Arcieri, author of Reia, a Ruby-like language for the Erlang BEAM platform, wrote a piece in July, The Trouble with Erlang (or Erlang is a ghetto), bringing together a long laundry list of complaints about Erlang and the concepts behind it, and arguing at the end that Clojure now provides a better basis for parallel programming in practice.

While the complaints include many points about syntax, data types, and the like, the heart of the critique is two-fold: first, that Erlang has terrible problems managing memory and does not scale as advertised, and second, that these failures partly follow from Erlang "hat[ing] state. It especially hates shared state." He points to Goetz and Click's argument in Concurrency Revolution From a Hardware Perspective (2010) that local state is compatible with the Actors model, and he further argues that single assignment (SSA) as it is used in Erlang is less safe than local mutable state.
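
The local-state point is easy to picture outside Erlang as well. Here is a small Haskell sketch (mine, not Tony's) of an actor whose state lives only as a parameter of its message loop: no locks are needed, because no other thread can reach that state.

    import Control.Concurrent

    -- A counter "actor": its state (n) is reachable only from its own
    -- message loop, so updates need no synchronization.
    data Msg = Add Int | Get (MVar Int)

    counter :: Chan Msg -> Int -> IO ()
    counter inbox n = do
      msg <- readChan inbox
      case msg of
        Add k     -> counter inbox (n + k)            -- new local state
        Get reply -> putMVar reply n >> counter inbox n

    main :: IO ()
    main = do
      inbox <- newChan
      _ <- forkIO (counter inbox 0)
      writeChan inbox (Add 2)
      writeChan inbox (Add 3)
      reply <- newEmptyMVar
      writeChan inbox (Get reply)
      print =<< takeMVar reply                        -- prints 5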

Programming and Scaling

Programming and Scaling, a one-hour lecture by Alan Kay at his finest (and that's saying something!)

Some of my favorite quotes:

  • "The biggest problem we have as human beings is that we confuse our beliefs with reality."
  • "We could imagine taking the internet as a model for doing software modules. Why don't people do it?" (~00:17)
  • "One of the mistakes that we made years ago is that we made objects too small." (~00:26)
  • "Knowledge in many cases trumps IQ. [Henry] Ford was powerful because Isaac Newton changed the way we think." (~00:28)
  • "Knowledge is silver. Outlook is gold. IQ is a lead weight." (~00:30)
  • "Whatever we [in computing] do is more like what the Egyptians did. Building pyramids, piling things on top of each other."
  • "The ability to make science and engineering harmonize with each other - there's no greater music." (~00:47)

And there are some other nice ideas in there: "Model-T-Shirt Programming" (software whose definition fits on a T-shirt), and imagining source code sizes in terms of books: 20,000 LOC = a 400-page book; a million LOC = a stack of books one meter high (Windows Vista: a 140m stack of books).
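
Unpacking the book arithmetic (my figuring from the quoted numbers, not from the talk): 20,000 LOC per 400-page book is 50 LOC per page, and a million LOC is 50 books, i.e. a one-meter stack.

    -- Kay's book metaphor, spelled out:
    locPerMeter :: Int
    locPerMeter = 50 * 20000        -- 50 books per meter * 20,000 LOC each

    vistaLoc :: Int
    vistaLoc = 140 * locPerMeter    -- a 140 m stack ~ 140,000,000 LOC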

Note: this is a Flash video; other formats are available.

Memory Models: A Case for Rethinking Parallel Languages and Hardware, CACM, August 2010

Memory Models: A Case for Rethinking Parallel Languages and Hardware by Sarita V. Adve and Hans-J. Boehm

This is a preprint of the published version.

The era of parallel computing for the masses is here, but writing correct parallel programs remains far more difficult than writing sequential programs. Aside from a few domains, most parallel programs are written using a shared-memory approach. The memory model, which specifies the meaning of shared variables, is at the heart of this programming model. Unfortunately, it has involved a tradeoff between programmability and performance, and has arguably been one of the most challenging and contentious areas in both hardware architecture and programming language specification. Recent broad community-scale efforts have finally led to a convergence in this debate, with popular languages such as Java and C++ and most hardware vendors publishing compatible memory model specifications. Although this convergence is a dramatic improvement, it has exposed fundamental shortcomings in current popular languages and systems that prevent achieving the vision of structured and safe parallel programming.

This paper discusses the path to the above convergence, the hard lessons learned, and their implications. A cornerstone of this convergence has been the view that the memory model should be a contract between the programmer and the system - if the programmer writes disciplined (data-race-free) programs, the system will provide high programmability (sequential consistency) and performance. We discuss why this view is the best we can do with current popular languages, and why it is inadequate moving forward. We then discuss research directions that eliminate much of the concern about the memory model, but require rethinking popular parallel languages and hardware. In particular, we argue that parallel languages should not only promote high-level disciplined models, but they should also enforce the discipline. Further, for scalable and efficient performance, hardware should be co-designed to take advantage of and support such disciplined models. The inadequacies of the state-of-the-art and the research agenda we outline have deep implications for the practice, research, and teaching of many computer science sub-disciplines, spanning theory, software, and hardware.
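
The paper's contract concerns languages like Java and C++, but the shape of the bargain is easy to demonstrate in any language with threads. A rough Haskell sketch (mine, not from the paper): the racy loop performs an unsynchronized read-modify-write and can lose updates, while the disciplined loop is data-race-free and therefore behaves predictably.

    import Control.Concurrent
    import Control.Monad (replicateM_)
    import Data.IORef

    -- Two threads bump a shared counter 100,000 times each (use GHC's
    -- threaded runtime). The racy version can lose increments; the
    -- disciplined version always totals 200,000.
    run :: (IORef Int -> IO ()) -> IO Int
    run bump = do
      r    <- newIORef 0
      done <- newEmptyMVar
      let worker = replicateM_ 100000 (bump r) >> putMVar done ()
      _ <- forkIO worker
      _ <- forkIO worker
      takeMVar done >> takeMVar done
      readIORef r

    racy, disciplined :: IO Int
    racy        = run (\r -> readIORef r >>= writeIORef r . (+ 1))
    disciplined = run (\r -> atomicModifyIORef' r (\n -> (n + 1, ())))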

Pure and Declarative Syntax Definition: Paradise Lost and Regained, Onward 2010

Pure and Declarative Syntax Definition: Paradise Lost and Regained by Lennart C. L. Kats, Eelco Visser, and Guido Wachsmuth of TU Delft.

Syntax definitions are pervasive in modern software systems, and serve as the basis for language processing tools like parsers and compilers. Mainstream parser generators pose restrictions on syntax definitions that follow from their implementation algorithm. They hamper evolution, maintainability, and compositionality of syntax definitions. The pureness and declarativity of syntax definitions is lost. We analyze how these problems arise for different aspects of syntax definitions, discuss their consequences for language engineers, and show how the pure and declarative nature of syntax definitions can be regained.

I haven't compared this version with the Onward 2010 version, but they look essentially the same. It seems timely to post this paper, considering the other recent story Yacc is dead. There is not a whole lot to argue against in this paper, since we all "know" the other approaches aren't as elegant, and we only resort to them for specific reasons such as efficiency. Yet this is the first paper I know of that tries to state the argument to software engineers.

For example, the Dragon Book, in every single edition, effectively brushes these topics aside. In particular, the Dragon Book does not even mention scannerless parsing as a technique, and instead only explains the "advantages" of using a scanner. Unfortunately, the authors of this paper don't consider other design proposals either, such as Van Wyk's context-aware scanners from GPCE 2007. It is examples like these that made me wish the paper were a bit more robust in its analysis; the examples seem focused on the authors' previous work.

If you are not familiar with the authors' previous work in this area, the paper covers it in the references. It includes Martin Bravenboer's work on modular Eclipse IDE support for AspectJ.
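
For readers who have not seen the scannerless style, Haskell parser combinators give a cheap approximation (my sketch, unrelated to SDF's actual implementation): the grammar is written straight down to the character level, with no separate lexer, so lexical and context-free syntax live in one formalism and compose freely.

    import Data.Char (isAlpha, isAlphaNum, isDigit)
    import Text.ParserCombinators.ReadP

    data Expr = Var String | Num Int | Add Expr Expr
      deriving Show

    -- No scanner: identifiers, numbers, and layout are all part of one
    -- character-level grammar.
    token :: ReadP a -> ReadP a
    token p = p <* skipSpaces

    atom :: ReadP Expr
    atom = token (Var <$> ((:) <$> satisfy isAlpha <*> munch isAlphaNum))
       +++ token (Num . read <$> munch1 isDigit)

    expr :: ReadP Expr
    expr = chainl1 atom (Add <$ token (char '+'))

    -- readP_to_S (skipSpaces *> expr <* eof) "x + 42"
    --   ==>  [(Add (Var "x") (Num 42), "")]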

Joe Duffy: A (brief) retrospective on transactional memory

A (brief) retrospective on transactional memory, by Joe Duffy, January 3rd, 2010. Although this is a blog post, don't expect to read it all on your lunch break...

The STM.NET incubator project was canceled on May 11, 2010, after beginning public life on July 27, 2009 at DevLabs. In this blog post, written four months prior to its cancellation, Joe Duffy discusses the practical engineering challenges around implementing Software Transactional Memory in .NET. Note: he starts off with a disclaimer that he was not engaged in the STM.NET project past its initial working-group phase.

In short, Joe argues, "Throughout, it became abundantly clear that TM, much like generics, was a systemic and platform-wide technology shift. It didn’t require type theory, but the road ahead sure wasn’t going to be easy." The whole blog post deals with the many implementation challenges that platform-wide support for STM would pose in .NET, including which options were considered. He does not mention Maurice Herlihy's SXM library approach, but refers to Tim Harris's work several times.

There was plenty here that surprised me, especially when you compare Concurrent Haskell's STM implementation to STM.NET's design decisions and the interesting debates the team had. In Concurrent Haskell, issues Joe raises, like making Console.WriteLine transactional, are delegated to the type system by the very nature of the STM monad and its TVars, preventing programmers from writing such wishy-washy code. To be honest, this is why I didn't understand Joe's "it didn't require type theory" gambit, since some of the design concerns are mediated in Concurrent Haskell via type theory. On the other hand, the pragmatics Joe discusses, and the platform-wide integration with the CLR they were shooting for, remind me of The Transactional Memory / Garbage Collection Analogy. Joe also wrote a briefer follow-up post, More thoughts on transactional memory, where he talks more about Barbara Liskov's Argus.
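
For contrast, here is what that type-system discipline looks like in Concurrent Haskell (a standard STM example of mine, not Joe's code): an atomically block may contain only STM actions, so the compiler rejects any attempt to print, or perform other I/O, mid-transaction.

    import Control.Concurrent.STM

    -- Transfer between two accounts. Adding putStrLn "moved!" inside
    -- this transaction would simply not type-check: IO is not STM.
    transfer :: TVar Int -> TVar Int -> Int -> STM ()
    transfer from to n = do
      modifyTVar' from (subtract n)
      modifyTVar' to   (+ n)

    main :: IO ()
    main = do
      a <- newTVarIO 100
      b <- newTVarIO 0
      atomically (transfer a b 25)
      balances <- atomically ((,) <$> readTVar a <*> readTVar b)
      print balances   -- (75,25)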

Sapir-Whorf 70 years on

Many people have looked at programming languages through the Sapir-Whorf lens, so it's not uncommon to find people making PL claims using that hypothesis. Also not surprisingly, the topic keeps reappearing here on LtU.

This week's NY Times magazine has an article titled Does Your Language Shape How You Think? by Guy Deutscher, which starts as a retrospective on Whorf and then goes into what new research has shown.

Some 50 years ago, the renowned linguist Roman Jakobson pointed out a crucial fact about differences between languages in a pithy maxim: “Languages differ essentially in what they must convey and not in what they may convey.” This maxim offers us the key to unlocking the real force of the mother tongue: if different languages influence our minds in different ways, this is not because of what our language allows us to think but rather because of what it habitually obliges us to think about.

...

When your language routinely obliges you to specify certain types of information, it forces you to be attentive to certain details in the world and to certain aspects of experience that speakers of other languages may not be required to think about all the time. And since such habits of speech are cultivated from the earliest age, it is only natural that they can settle into habits of mind that go beyond language itself, affecting your experiences, perceptions, associations, feelings, memories and orientation in the world.

Bart De Smet on .NET 4's System.Interactive library

Microsoft employee Bart De Smet, who has a widely trafficked blog, has been writing a lot in the past few months about a new library being designed by his group at Microsoft. Here is a whole truckload of blog post links, in chronological order, which appears to be how Bart intended folks to read them:

Dec 26, 2009: More LINQ with System.Interactive – The Ultimate Imperative
Dec 27, 2009: More LINQ with System.Interactive – Exceptional Exception Handling
Dec 28, 2009: More LINQ with System.Interactive – Sequences under construction
Dec 29, 2009: More LINQ with System.Interactive – Exploiting the code = data relationship
Dec 30, 2009: More LINQ with System.Interactive – More combinators for your Swiss Army Knife
Jan 01, 2010: The Essence of LINQ – MinLINQ
Jan 07, 2010: More LINQ with System.Interactive – Functional fun and taming side-effects

I don't usually read blogs, but I thought this was a pretty cogent series of posts. Also, judging by how interested LtU and the surrounding blogosphere were in Erik Meijer's presentation on the Rx framework at the JVM Language Summit 2009, I figured people would like this as well.
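
For readers coming from the functional side, the "essence" Bart distills in MinLINQ lines up closely with the list monad. The analogy below is mine, not from his posts: Select, Where, and SelectMany correspond to fmap, filter, and (>>=) on lists, and C# query syntax plays roughly the role of do-notation.

    import Control.Monad (guard)

    -- Roughly: from x in Range(1, 3)
    --          from y in Range(x, 4 - x)
    --          where (x + y) % 2 == 0
    --          select Tuple.Create(x, y)
    pairs :: [(Int, Int)]
    pairs = do
      x <- [1 .. 3]
      y <- [x .. 3]
      guard (even (x + y))
      pure (x, y)
    -- pairs == [(1,1),(1,3),(2,2),(3,3)]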
