Exceptional syntax

A nice paper apropos of tail calls and exceptions:
Nick Benton and Andrew Kennedy. 2001. Exceptional syntax. Journal of Functional Programming 11(4): 395-410.

From the points of view of programming pragmatics, rewriting and operational semantics, the syntactic construct used for exception handling in ML-like programming languages, and in much theoretical work on exceptions, has subtly undesirable features. We propose and discuss a more well-behaved construct.
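To see the problem concretely: with the usual try ... handle construct, the handler scopes over the whole protected expression, so it also catches exceptions raised by the success continuation, and it destroys tail calls inside the protected body. Here is a minimal OCaml sketch; the functions lookup, process, and default_value are hypothetical stand-ins, not from the paper:

    (* Hypothetical helpers, just for illustration. *)
    let lookup key table = List.assoc key table  (* raises Not_found on a missing key *)
    let process v = v * 2
    let default_value = 0

    (* Problematic: the handler also catches Not_found raised by process,
       and the call to process is not a tail call, because the handler
       frame stays live around it. *)
    let f_bad key table =
      try process (lookup key table)
      with Not_found -> default_value

    (* The standard workaround: reify success or failure as an option,
       then branch. Now only lookup is protected, and process v is a
       tail call again, at the cost of an intermediate value. *)
    let f_good key table =
      match (try Some (lookup key table) with Not_found -> None) with
      | Some v -> process v
      | None -> default_value

Benton and Kennedy's construct lets you write the second version directly, binding the successful result and routing failures to handlers, without materializing the intermediate option value.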

The undergraduate language course: what to do?

Thinking about teaching our undergraduate PL course next semester, I've been digging up lots of suddenly interesting readings. The teaching-about-PL project links to a wealth of information, and Wadler's and Felleisen et al.'s critiques of SICP were enlightening as well.

Most recently I stumbled upon Joseph Bergin's writings on teaching, in particular "The undergraduate language course: what to do?" (SIGPLAN Notices 36(6): 20-22, 2001), which describes a taxonomy of course approaches: "historical", "smorgasbord", "interpreter", and "principles". Also relevant is a recent article, "In Computer Science, A Growing Gender Gap: Women Shunning a Field Once Seen as Welcoming" by Marcella Bombardieri (Boston Globe, 2005-12-18).

(I'm sorry I couldn't find Wadler's and Bergin's articles freely available online.)

Insights on teaching computer programming

I have been teaching programming for nearly ten years now, to university students from the first to the fourth year, to both CS majors and non-CS majors (all in technical fields). I have tried to distill results from programming language research into these courses (and wrote a book in the process, CTM). Recently, I had two insights:

1. The best year in which to teach a programming course based on programming concepts is the second year. In the first year, students are not mature enough. In the third and later years, students get conservative. In the second year, they have enough maturity to understand the concepts and enough openness to appreciate them. At UCL, we have been doing this for two years now for all engineering students (more than 300 in Fall 2005), preceded by a first-year course based on Java, and it works remarkably well (better than I expected).

2. Students can be taught programming semantics in the second year. This can succeed if: (1) the semantics requires very little mathematical baggage, just sets and functions, (2) the semantics is factored so it can be taught incrementally, and (3) the semantics is simple and uncluttered so that students can work out a program's execution with paper and pencil. The semantics of CTM is one example that fits these conditions. The students consider the semantics the hardest part of the course. I think it's because they've never been exposed to this kind of formalism, so they are a bit unsettled by it. But I am convinced that students in any technical field should be taught programming language semantics at least once in their careers. It should be as basic as algebra or calculus.
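For concreteness, here is a minimal OCaml sketch of the kind of definitional interpreter such a semantics supports. This is an assumed toy expression language, not CTM's actual kernel language; the point is that its rules use nothing but sets and functions, and its executions can be traced by hand:

    (* A toy expression language, not CTM's kernel language. *)
    type exp =
      | Int of int
      | Var of string
      | Add of exp * exp
      | Let of string * exp * exp   (* let x = e1 in e2 *)

    (* An environment is just a function from names to values. *)
    type env = string -> int

    let empty_env : env = fun x -> failwith ("unbound variable: " ^ x)
    let extend (env : env) x v : env = fun y -> if y = x then v else env y

    (* One match arm per semantic rule; each step is mechanical enough
       to carry out with paper and pencil. *)
    let rec eval (e : exp) (env : env) : int =
      match e with
      | Int n -> n
      | Var x -> env x
      | Add (e1, e2) -> eval e1 env + eval e2 env
      | Let (x, e1, e2) -> eval e2 (extend env x (eval e1 env))

    (* let x = 1 + 2 in x + x  evaluates to 6 *)
    let _ = eval (Let ("x", Add (Int 1, Int 2), Add (Var "x", Var "x"))) empty_env

Working out an evaluation on paper is just unfolding these function applications, which is exactly the incremental, low-baggage exercise described above.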

What do you think? I would appreciate hearing your experience and comments. Here are the course notes of the latest version of my course (they are in French; there are English course notes too but they are not as nice for the second year).

Understanding the Dynamic Nature of a Program

What is the latest thinking on how to deal with the fact that the dynamic nature of a program can be so terribly different from the static source code representation? Surely, we want to be able to better represent the dynamism of our systems for many reasons, not the least of which is debugging.

(Miscellaneous, random, uneducated sampling follows.) There are various systems for examining the traces of dynamic systems, or for debugging in general, all of which are useful work. Can we go further? How about a system that lets me understand dynamism from the start of development, through the creation of linear source code, and on to debugging? Maybe visual programming can help (or does it fall victim to the old adage, "now they have two problems")? Visula takes a swing by giving a 2D view of time and data. Maybe the best we can do is use models to force ourselves to understand our own systems and avoid getting into bad places.
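As one concrete (and assumed, not tied to any particular system) example of what "examining the trace" can mean, here is a minimal OCaml sketch that instruments a function so every call and return is logged, making the dynamic call tree, invisible in the static source, explicit:

    (* Minimal sketch: log every call and return of fib, so the dynamic
       call tree becomes a linear trace we can inspect afterwards. *)
    let trace : string list ref = ref []
    let log fmt = Printf.ksprintf (fun s -> trace := s :: !trace) fmt

    let rec fib n =
      log "call fib %d" n;
      let r = if n < 2 then n else fib (n - 1) + fib (n - 2) in
      log "fib %d = %d" n r;
      r

    let () =
      ignore (fib 4);
      List.iter print_endline (List.rev !trace)

Even this crude log shows a shape of execution (the exponential fan-out of calls) that the five lines of static source give no hint of.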

Hasn't this issue been around since folks invented computers? What approaches already exist? What does and doesn't work? What is tenable in the near future?