Lisp-Stat does not seem to be in good health lately.

The Journal of Statistical Software http://www.jstatsoft.org/ has a Special Volume devoted to the topic: "Lisp-Stat, Past, Present and Future".

In the world of statistics, it appears that XLISP-STAT http://www.stat.uiowa.edu/~luke/xls/xlsinfo/xlsinfo.html has lost out to the S family of languages: S / R / S-plus:

In fact, the S languages are not statistical per se; instead they provide an environment within which many classical and modern statistical techniques have been implemented.

An article giving an excellent overview of the special volume is: "The Health of Lisp-Stat" http://www.jstatsoft.org/v13/i10/v13i10.pdf

Some of the articles describe the declining user base of the language due to defections, whilst other articles describe active projects using XLisp-Stat, often leveraging the power of the language, in particular for producing dynamic graphics.

The S family of languages, originally developed at Bell Labs, has much to recommend it. S is an expression language with functional and class features. However, as Luke Tierney, the original creator and main developer of XLisp-Stat (and now an R developer), explains in "Some Notes on the Past and Future of Lisp-Stat" http://www.jstatsoft.org/v13/i09/v13i09.pdf:

"While R and Lisp are internally very similar, in places where they differ the design choices of Lisp are in many cases superior."

Recovering resources in the pi-calculus

Although limits of resources such as memory or disk usage are one of the key problems of many communicating applications, most process algebras fail to take this aspect of mobile and concurrent systems into account. In order to study this problem, we introduce the Controlled pi-calculus, an extension of the pi-calculus with a notion of recovery of unused resources with an explicit (parametrized) garbage collection and dead-process elimination. We discuss the definition of garbage collection and dead-process elimination for concurrent, communicating applications, and provide a type-based technique for statically proving resource bounds. Selected examples are presented and show the potential of the Controlled pi-calculus.

Games for Logic and Programming Languages: ETAPS'05

In the past decade game semantics has emerged as a new and successful paradigm in the field of semantics of logics and programming languages. Game semantics made its breakthrough in computer science in the early 90s, providing an innovative set of methods and techniques for the analysis of logical systems. Subsequently, game-semantic techniques led to the development of the first syntax-independent fully-abstract models for a variety of programming languages, ranging from the purely functional to languages with non-functional features such as control, references or concurrency. There are also emerging connections between game semantics and other semantic theories, notably theories of concurrency such as the pi-calculus, and traditional tree-based semantics of lambda calculi. In addition to semantic analysis, an algorithmic approach to game semantics has recently been developed, with a view to applications in computer assisted verification and program analysis.

The aim of the workshop is to provide an opportunity for interaction with other ETAPS'05 events and to become a major meeting point in the research area of game semantics and its applications.

In case you are in Edinburgh on April 2-3, here is the workshop program.

Fold Must Fold!

The Fate Of LAMBDA in PLT Scheme v300
or
Lambda the Ultimate Design Flaw

About 30 years ago, Scheme had FILTER and MAP courtesy of Lisp hackers who missed them from their past experience. To this collection, Scheme added a lexically-scoped, properly-functioning LAMBDA. But, despite the PR value of anything with Guy Steele's name associated with it, we think these features should be cut from PLT Scheme v300.

We think dropping FILTER and MAP is pretty uncontroversial; (filter P S) is almost always written clearer as a DO loop (plus the LAMBDA is slower than the loop). Even more so for (map F S). In all cases, writing the equivalent imperative program is clearly beneficial.

Why drop LAMBDA? Most Scheme users are unfamiliar with Alonzo Church (indeed, they don't even know that he was related to Guy Steele), so the name is confusing; also, there is a widespread misunderstanding that LAMBDA can do things that a nested function can't -- we still recall Dan Friedman's Aha! after we showed him that there was no difference! (However, he appears to have since lapsed in his ways.) Even with a better name, we think having the two choices side-by-side just requires programmers to think about their program; not having the choice streamlines the thought process, and Scheme is designed from the ground up to, as much as possible, keep programmers from thinking at all.

So now FOLD. This is actually the one we've always hated most, because, apart from a few examples involving + or *, almost every time we see a FOLD call with a non-trivial function argument, we have to grab pen and paper and imagine the *result* of a function flowing back in as the *argument* to a function. Plus, there are *more* arguments coming in on the side! This is all absurdly complicated. Because almost all the examples of FOLD we found in practice could be written as a simple loop with an accumulator, this style should be preferred, perhaps with us providing a simple helper function to abstract away the boilerplate code. At any rate, FOLD must fold.

--The PLT Scheme Team

This from the PLT Scheme mailing list. Clearly Guido's visionary thinking is having repercussions outside the world of Python. I, for one, welcome this change that can only benefit the Scheme language.
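
For anyone who wants to see the (perfectly real) equivalence the post leans on, here is a minimal sketch of the same sum written both ways. It is in Python rather than Scheme, to match the Guido reference, and the names are mine:

    # A fold: the result of each call flows back in as the accumulator argument.
    from functools import reduce
    from operator import add

    xs = [1, 2, 3, 4, 5]
    total_fold = reduce(add, xs, 0)

    # The "simple loop with an accumulator" style the post prefers.
    total_loop = 0
    for x in xs:
        total_loop = total_loop + x

    assert total_fold == total_loop == 15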

Starlog

Starlog is a pure-logic programming language designed to overcome some of the problems inherent in traditional approaches to logic programming.

Starlog was designed as an alternative approach to programming in pure logic. Many of the failings of other logic programming languages can be overcome using the bottom-up evaluation technique. Bottom-up evaluation avoids problems associated with cyclic relations, and allows side-effects to be performed without compromising the declarative semantics of a program.
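
No Starlog code is shown here, but the bottom-up idea itself is easy to sketch. The following Python fragment (my own illustration, not Starlog) computes the transitive closure of a cyclic edge relation by repeatedly deriving new facts until a fixpoint is reached, exactly the kind of cyclic relation on which top-down, Prolog-style evaluation can loop forever:

    # Facts of the edge/2 relation; note the cycle 1 -> 2 -> 3 -> 1.
    edges = {(1, 2), (2, 3), (3, 1)}

    def transitive_closure(edge):
        # path(X, Y) :- edge(X, Y).
        # path(X, Z) :- path(X, Y), edge(Y, Z).
        path = set(edge)
        while True:
            new = {(x, z) for (x, y) in path for (y2, z) in edge if y == y2} - path
            if not new:       # fixpoint: no new facts can be derived
                return path
            path |= new       # add the newly derived facts and iterate

    print(sorted(transitive_closure(edges)))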

Using Starlog, we advocate a data-structure-free programming style. That is, predicates in Starlog programs do not contain compound terms as their arguments. This greatly simplifies logic programs and their implementations, lowers the learning curve for new programmers, and forces the programmer to think of all program constructs as relations.

Python metaprogramming with decorators

This recipe for using decorators to load data structures caught my eye (thanks to the Daily Python URL) as an interesting metaprogramming technique.

Decorators are function transformers that can be applied declaratively to normal Python methods and functions using the much-disputed decorator syntax (see this previous LtU thread). This recipe uses decorators that return the supplied function unmodified but, as a side effect, register it in a dictionary of handlers for various value conversion operations. The decorator code itself is called when the module is loaded, so the dictionary is populated "on demand" at run-time. This makes an interesting contrast with compile-time metaprogramming techniques such as C++ template metaprogramming and the use of code generators and preprocessors.
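
As a rough illustration of the idiom (with hypothetical names, not the recipe's actual code): the decorator returns the function unchanged but records it in a module-level dispatch table as it runs.

    # Sketch of the registration idiom described above.
    handlers = {}

    def handles(kind):
        """Decorator factory: register the decorated function under 'kind'."""
        def register(func):
            handlers[kind] = func   # side effect: populate the dispatch table
            return func             # return the function unmodified
        return register

    @handles('int')
    def convert_int(value):
        return int(value)

    @handles('float')
    def convert_float(value):
        return float(value)

    # The table is filled as the module is loaded, so lookups are purely data-driven:
    print(handlers['int']('42'), handlers['float']('3.14'))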

I do not believe that C# attributes could easily be used in a similar way, as unlike Python decorators these receive no information about the entities they are attached to when they are instantiated.

How to remove a dynamic prompt: static and dynamic delimited continuation operators are equally expressible

The report (by Oleg Kiselyov) shows that the delimited continuation operators shift, control, shift0, etc. are macro-expressible in terms of each other. The report thus confirms the result first established by Chung-chieh Shan in Shift to Control. The operators shift, control, control0, and shift0 are members of a single parameterized family, and the standard CPS is sufficient to express their denotational semantics.

The report uses a more uniform method and it formally proves that 'control' implemented via 'shift' indeed has its standard reduction semantics. It is common knowledge that first-class continuations are quite tricky -- and delimited continuations are trickier still. Therefore, a formal proof is a necessity.

On the practical side, the report shows the simplest known Scheme implementations of control, shift0 and control0 (similar to `cupto'). The method in the report lets us design 700 more delimited control operators, which compose stack fragments in arbitrary ways.
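
As a small taste of what "standard CPS is sufficient" means in practice, here is a toy sketch in Python (my own, covering only the shift/reset flavour, not the whole shift/control family treated in the report) in which a computation is a function from a continuation to an answer:

    def ret(v):
        return lambda k: k(v)                  # lift a value into CPS

    def bind(m, f):
        return lambda k: m(lambda v: f(v)(k))  # sequence two computations

    def reset(m):
        return ret(m(lambda v: v))             # delimit: run m with the identity continuation

    def shift(f):
        # capture the continuation up to the nearest reset and pass it to f
        return lambda k: f(lambda v: ret(k(v)))(lambda v: v)

    def run(m):
        return m(lambda v: v)

    # 1 + reset(2 * shift(lambda k: k(k(3))))  ==>  1 + 2 * (2 * 3)  ==>  13
    prog = bind(
        reset(bind(shift(lambda c: bind(c(3), c)),
                   lambda x: ret(2 * x))),
        lambda y: ret(1 + y))

    print(run(prog))  # 13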

I love this sort of thing, and since section 4 includes Scheme code, you can try to skip the theory if you find it intimidating.

I know this stuff can look a bit hairy. If there's interest, I hope Oleg would agree to help people new to this sort of material in understanding sections 2 and 3. But you have to ask nicely...

Book: The Standard ML Basis Library

SML is conservative by nature, erring on the side of rigor. The Basis Library has been a long time in the making, so I guess it's not surprising that the publication of The Standard ML Basis Library by Gansner & Reppy last year didn't raise much fanfare. Since I'm slowly plodding my way through CTM in Alice-ML, I figured it couldn't hurt to acquire the book.

Aside from about 100 pages, the book is a hard copy of the manual pages that are available online. (To put a positive spin on the matter, we should probably be happy that their arduous work is freely available.) I do prefer a hard-copy reference, especially for a subject that is not likely to be outdated anytime soon. The book provides some additional background info and example usage here and there, but it doesn't have a strong narrative, meandering through what was probably a number of decision points along the way. It's been helpful in spots, but then I'm still trying to grok the language as a whole.

Python: Generator methods

This recipe enables the use of the yield statement within a method by decorating that method with a wrapper for a generator object.

Say what you will about Python, it does provide us with clever tricks such as this one.
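
I have not reproduced the recipe here; as a hypothetical sketch of the general idea (the names and details are mine, not the recipe's), a decorator can hide the generator object so that each call to the method simply resumes it:

    import functools

    def generator_method(func):
        """Hypothetical decorator: repeated calls resume one underlying generator."""
        @functools.wraps(func)
        def wrapper(self, *args, **kwargs):
            # lazily create and cache a single generator per instance
            key = '_gen_' + func.__name__
            if key not in self.__dict__:
                self.__dict__[key] = func(self, *args, **kwargs)
            return next(self.__dict__[key])
        return wrapper

    class Counter:
        @generator_method
        def step(self):
            n = 0
            while True:
                yield n
                n += 1

    c = Counter()
    print(c.step(), c.step(), c.step())  # 0 1 2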

UCPy: Reverse Engineering Python

Interesting paper from PyCon 2003 (so yes, it's old news - but new on Lambda, as far as I can see): UCPy: Reverse-Engineering Python.

The project partly entails replacing the "CISC-style" Python VM with a much smaller "RISC-style" VM. The authors' comments on this decision are worth considering in the light of recent discussions about the design of the Parrot VM.