General

How Does Our Language Shape The Way We Think?

Seems like it's been a while since we last grated on our linguistic experts. In How Does Our Language Shape The Way We Think?, Lera Boroditsky reopens the age-old discussion:

Such a priori arguments about whether or not language shapes thought have gone in circles for centuries, with some arguing that it's impossible for language to shape thought and others arguing that it's impossible for language not to shape thought. Recently my group and others have figured out ways to empirically test some of the key questions in this ancient debate, with fascinating results.
Since this is the Programming Languages weblog, issues surrounding natural languages are somewhat tangential. Unlike the linguists, though, we generally accept that programming language syntax and semantics do have a significant effect on the design and construction of programs. But as in linguistics, one would be hard pressed to isolate the language from its community (culture). My take is that a large measure of the benefit of looking at new PLs comes from being exposed to different communities, not just from learning the details of a language.

Peter Landin

I was just forwarded a message that Peter Landin passed away yesterday.

From: Edmund Robinson
Date: 4 June 2009 09:10:11 GMT+00:00
Subject: Peter Landin

I am very sorry to inform you that Peter Landin died yesterday of natural causes.

For those members who are several generations away from Peter's early contributions, he was one of the major figures in the UK at the time that Computer Science was beginning to establish itself as a discipline. Some of his papers from 40 years ago are essential reading for any serious student of programming languages as still the simplest and clearest exposition of ideas that remain fundamental. The ideas in his papers were truly original and beautiful, but Peter never had a simplistic approach to scientific progress, and would scoff at the idea of individual personal contribution. Some of his own greatest contributions to the field were as part of a golden nexus of work on programming languages in the UK in the late 60's and early 70's, containing Dana Scott and Christopher Strachey and others as well as Peter. The ideas they developed through their discussions truly lifted the study of programming languages to another level, and are now part of the bedrock of the subject.

Landin was one of the founders of our field, and did a lot of work of lasting value. We've discussed his papers here many times, even though some of them were written decades ago. His work cast a long shadow, or, better put, illuminated wide vistas.

Computing Needs Time

Edward A. Lee, Computing Needs Time, Communications of the ACM, Volume 52, Issue 5 (May 2009).

The foundations of computing, rooted in Turing, Church, and von Neumann, are about the transformation of data, not about physical dynamics. This paper argues that we need to rethink the core abstractions if we really want to integrate computing with physical processes. In particular, I focus on a key aspect of physical processes that is almost entirely absent in computing, the passage of time. This is not just about “real-time systems,” which accept the foundations and retrofit them with temporal properties. Although that technology has much to contribute, I will argue that it cannot solve the problem alone because it is built on flawed foundations.

The section of most direct relevance to LtU is probably section 5.2 on programming languages, which opens by saying:

Programming languages provide an abstraction layer above the ISA. If the ISA is to expose selected temporal properties, and programmers wish to exploit this, then one approach would be to reflect these in the languages.

Also potentially of interest to the LtU readership is section 5.4 on formal methods, which closes by asserting that

...type systems are formal methods that have had enormous impact. What is needed is time systems with the power of type systems.
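
To make the "time systems" analogy concrete, here is a minimal OCaml sketch, mine rather than anything from the paper, of the weakest way a language could reflect temporal properties: a runtime check that a computation stays within a stated budget. Lee's argument is that a real time system would establish such bounds statically, the way a type system rules out type errors before the program runs; the names within_budget and Deadline_missed are purely illustrative.

(* A toy runtime approximation of a language-level timing annotation.
   Requires the unix library for Unix.gettimeofday. *)

exception Deadline_missed of float  (* seconds over budget *)

(* [within_budget seconds f] runs [f ()] and fails if it overruns. *)
let within_budget seconds f =
  let start = Unix.gettimeofday () in
  let result = f () in
  let elapsed = Unix.gettimeofday () -. start in
  if elapsed > seconds then raise (Deadline_missed (elapsed -. seconds));
  result

(* Usage: a control-loop step that must finish within 10 ms. *)
let step () =
  within_budget 0.010 (fun () -> (* read sensors, compute, actuate *) ())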

Note: The "Tagged Signal" meta-model of computation mentioned in section 3 of the paper was previously discussed on LtU here.

Questions Five Ways

I think one of the better ideas I've had on this blog is my Questions Five Ways series. For each post, I'll ask a guiding question of five leading hackers, some from the Ruby community and some from outside it.

So far the questions have been about concurrency, code reading, and static code analysis & testing. I understand Pat is interested in hearing suggestions for future topics.

I find the discussion about concurrency interesting. Naturally, we have been urging people to look at Erlang for quite a while, and Haskell parallelism is also a frequent topic here. It is nice to see how these things are becoming more mainstream. It also means it is about time we moved on to new things...

Code reading is, of course, near and dear to me.

Semantics of Memory Management for Polymorphic Languages

In Semantics of Memory Management for Polymorphic Languages (1997), Greg Morrisett and Robert Harper

...present a static and dynamic semantics for an abstract machine that evaluates expressions of a polymorphic programming language. Unlike traditional semantics, our abstract machine exposes many important issues of memory management, such as value sharing and control representation. We prove the soundness of the static semantics with respect to the dynamic semantics using traditional techniques. We then show how these same techniques may be used to establish the soundness of various memory management strategies, including type-based, tag-free garbage collection; tail-call elimination; and environment strengthening.

This should keep the formal semantics LtUers happy for a little while. But is all the machinery necessary? Is there an easier way to prove that garbage can be thrown out?
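
For a taste of the kind of reasoning involved, here is a small OCaml sketch, mine and not the paper's abstract machine, of the core reachability argument behind throwing garbage out: if evaluation only ever follows pointers reachable from the roots, then dropping every other heap cell cannot change the result. Morrisett and Harper make this precise for a typed machine; the cell type and function names below are illustrative only.

(* A heap of cells addressed by integer locations. *)
module IntSet = Set.Make (Int)
module IntMap = Map.Make (Int)

type cell = Int of int | Pair of int * int

(* Locations reachable from a set of roots in a given heap. *)
let reachable (heap : cell IntMap.t) (roots : IntSet.t) : IntSet.t =
  let rec go seen frontier =
    match frontier with
    | [] -> seen
    | loc :: rest when IntSet.mem loc seen -> go seen rest
    | loc :: rest ->
      let seen = IntSet.add loc seen in
      (match IntMap.find_opt loc heap with
       | Some (Pair (a, b)) -> go seen (a :: b :: rest)
       | Some (Int _) | None -> go seen rest)
  in
  go IntSet.empty (IntSet.elements roots)

(* "Throwing out the garbage": keep only the reachable cells. *)
let collect heap roots =
  let live = reachable heap roots in
  IntMap.filter (fun loc _ -> IntSet.mem loc live) heap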

Jonathan Shapiro Wraps Up BitC

In an email to the BitC developer mailing list, Jonathan Shapiro announced that he is wrapping up development on BitC.

Some of you will have noticed that I have been conspicuously silent over the
last three or four weeks. I have spent much of that time airborne, or in
interviews at Google, Microsoft, and DARPA.

After a fair bit of soul-searching, I have decided to accept a fairly senior
position at Microsoft associated with the Midori project. The current plan
has me starting there at the beginning of August.

This means, among other issues, that we will be wrapping up the BitC
project. While I will be trying hard to get all of the planned features for
the initial release completed before I depart, that may not turn out to be
possible. I have asked Microsoft if we can keep the various web sites alive
for archival access and the mailing list, but I should also ask if there is
anyone out there who would be interested to assume more active stewardship
of the BitC project. I emphasize that unless management at Microsoft
concludes otherwise, I will no longer be able to participate actively in
these discussions. Also, in the event that MS does not permit archival
maintenance, would somebody be willing to take over hosting the content?

I have also asked MS for permission to publish papers about BitC on my own
time. They granted this permission to Swaroop, and I see no reason that they
should decline this, but my position there is a bit more sensitive and they
may see issues that I do not.

In the meantime, I need to get back to packing and hacking.

Best regards,

Jonathan

Best of luck shap! And best of luck BitC!

PLOT: Programming Language for Old Timers

PLOT: Programming Language for Old Timers by David Moon, 2006-2008.

Programming Language for Old Timers (PLOT) is a new dialect of Lisp designed by Dave Moon in February 2006, and thoroughly revised and simplified November 2007 and March 2008. I have been developing PLOT as a hobby, with the idea of for once having a programming language which does everything the right way. You know it is right when both simplicity and power are maximized, while at the same time confusion and the need for kludges are minimized.

Open access at MIT and Harvard

MIT has recently adopted an open access policy, which means that all MIT faculty grant MIT the right to make their scholarly publications available on an open access basis, with the possibility of a waiver on a case-by-case basis.

Story via John Baez. This follows a similar initiative at Harvard. Both announcements are non-retrospective.

Barbara Liskov Wins Turing Award

News flash: Barbara Liskov Wins Turing Award. The full citation:

Barbara Liskov has led important developments in computing by creating and implementing programming languages, operating systems, and innovative systems designs that have advanced the state of the art of data abstraction, modularity, fault tolerance, persistence, and distributed computing systems.

The Venus operating system was an early example of principled operating system design. The CLU programming language was one of the earliest and most complete programming languages based on modules formed from abstract data types and incorporating unique intertwining of both early and late binding mechanisms. ARGUS extended many of the CLU ideas to distributed programming, and incorporated the first versions of nested transactions to maintain predictable consistencies. Other advances include solutions elegantly combining theory and pragmatics in the areas of decentralized information flow, replicated storage and caching of persistent objects, and modular upgrading of distributed systems. Her contributions have been incorporated into the practice of programming, thereby influencing many of the most important systems used today: for programming, specification, systems design, and distributed architectures.

Here is a DDJ interview, in which Liskov mentions CLU and data abstraction as the accomplishment she is most proud of.

And here are searches of the LtU archives for Liskov and CLU.
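
For readers who never saw CLU, the data-abstraction idea in the citation boils down to the following, sketched here in OCaml rather than CLU syntax (the module and operation names are mine, chosen to echo CLU's classic set example): clients see an abstract type and its operations, never the representation, so the implementation is free to change.

(* Roughly what a CLU cluster provided: an abstract type whose
   representation is hidden behind its operations. *)
module Int_set : sig
  type t
  val empty : t
  val insert : int -> t -> t
  val member : int -> t -> bool
end = struct
  (* The representation is invisible to clients; a sorted list here,
     but it could be replaced by a tree without touching any caller. *)
  type t = int list
  let empty = []
  let rec insert x = function
    | [] -> [ x ]
    | y :: ys when x < y -> x :: y :: ys
    | y :: ys when x = y -> y :: ys
    | y :: ys -> y :: insert x ys
  let member x s = List.mem x s
end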

Languages and security: a short reading list

Ivan Krstić, former director of security architecture at One Laptop per Child and all-around computer security guru, has a few humorous thoughts on the current intersection between security and programming language design in Languages and security: a short reading list.

If I had to grossly overgeneralize, I’d say people looking at language security fall in roughly three schools of thought:

1. The "My name is Correctness, king of kings" people say that security problems are merely one manifestation of incorrectness, which is dissonance between what the program is supposed to do and what its implementation actually does. This tends to be the group led by mathematicians, and you can recognize them because their solutions revolve around proofs and the writing and (automatic) verification thereof.

2. The "If you don’t use a bazooka, you can’t blow things up" people say that security problems are a byproduct of exposing insufficiently intelligent or well-trained programmers to dangerous language features that don’t come with a safety interlock. You can identify these guys because they tend to make new languages that no one uses, and frequently describe them as "like popular language X but safer".

3. The "We need to change how we fundamentally build software" people say that security problems are the result of having insufficiently fine-grained methods for delegating individual bits of authority to individual parts of a running program, which traditionally results in all parts of a program having all the authority, which means the attack surface becomes a Cartesian product of every part of the program and every bit of authority which the program uses. You can spot these guys because they tend to throw around the phrase "object-capability model".

Now, while I'm already grossly overgeneralizing, I think the first group is almost useless, the second group is almost irrelevant, and the third group is absolutely horrible at explaining what the hell they’re talking about.

Tongue in cheek? Absolutely, but probably not that far off when it comes to the languages that the mainstream uses today (except, arguably, for the quibble that #2 is applied to some extent in all of the most popular "managed" language runtimes).
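
Since the third school is, by Krstić's own account, the worst at explaining itself, here is a small OCaml sketch of the object-capability style. It only illustrates the discipline (OCaml itself does nothing to confine ambient authority), and the names append_cap, log_cap_for, and run_plugin are mine, not from any real library.

(* Authority as a value: a capability to append to one particular log. *)
type append_cap = { append : string -> unit }

(* The trusted edge of the program holds real authority and hands out
   narrow slices of it. *)
let log_cap_for (path : string) : append_cap =
  { append = (fun line ->
      let oc = open_out_gen [Open_append; Open_creat] 0o644 path in
      output_string oc (line ^ "\n");
      close_out oc) }

(* Less-trusted code receives only the capability it needs; it was never
   given a way to name any other file, so its share of the attack surface
   is just this one operation. *)
let run_plugin (log : append_cap) =
  log.append "plugin started"

let () = run_plugin (log_cap_for "plugin.log")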

As the name of the article suggests, it has some good links for further study into current lines of research.

Which directions are likely to be the most fruitful in the coming years? And what other directions are being missed?
