LtU Forum

Garbage collecting computations

Imagine a process starting two computations, and as soon as one of them returns, abandoning the other.

One way to implement this is to require the client process to explicitly request cancellation of the second computation. I see this as similar to manual management of memory, with all the benefits and drawbacks.
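
For concreteness, here is a minimal sketch of this explicitly managed style using Haskell's async library (my own example, not from the post); the library's race combinator packages up exactly this pattern.

import Control.Concurrent.Async (async, cancel, waitEither)

-- Run two computations, take whichever finishes first, and explicitly
-- cancel the loser. compA and compB are placeholders for the two
-- computations mentioned above.
firstOf :: IO a -> IO a -> IO a
firstOf compA compB = do
  a <- async compA
  b <- async compB
  r <- waitEither a b
  case r of
    Left  x -> cancel b >> return x
    Right y -> cancel a >> return y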

Another way is to use something similar to a garbage collector - the client just forgets about the second computation, and it will (eventually) be cancelled. I like to think about this as a GC because (as with memory GC) there are no semantic guarantees, just a pragmatic one - the infrastructure has the option of reducing resource consumption, but does not guarantee it will use it (e.g., when resources are abundant - plenty of free physical memory or idle processors).

As with memory GC, there might be ways for the process to detect the infrastructure's decision - in the case of memory, by finalizer code being called; in the case of computations, by detecting the effects of the computation.

What is not so similar is the nature of references. In the case of memory, clients refer to memory blocks. In the case of computations, the direction is reversed - the computation refers to the continuation supplied by the client. That looks like a fatal blow...

I vaguely remember reading about something like that in the context of speculative computations, but was unable to find it again. Any ideas where to look?

How widespread are in-house DSLs?

A student asked me this question, and apart from saying that quite a large percentage of large organizations use in-house DSLs, I couldn't give any details, nor am I aware of any research.

So if anyone came across a survey or research report that gives useful (hopefully current) information about DSL use, I'd be glad to hear about it.

Denotational semantics of a DSL?

I am really at a loss here... I was just trying to understand whether one can define, more or less formally, what a DSL is.

Is it the semantics of DSLs that differentiates them from general-purpose PLs? Or is it only pragmatics ("you can do the same in any GP PL, but it's easier/faster/cheaper to do in this DSL")?

In other words, can we define as DSLs those PLs whose semantic domains are, uh, domain-specific? By a semantic domain here I mean the range (or codomain) of a semantic function, the domain of which is a syntactic domain... All these puns are not really intended :-(
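
To make the question concrete, here is a toy illustration of the terminology (a mini-DSL I just made up, purely for the sake of the example): the syntactic domain is a tiny pricing-rule language, and the semantic domain is functions from quantities to prices - which one could call a domain-specific semantic domain.

-- Toy syntax of a hypothetical pricing DSL.
data Rule
  = Flat Double              -- fixed price per unit
  | Bulk Int Double Double   -- threshold, unit price below it, unit price at or above it

-- Its denotational semantics: a semantic function from the syntactic
-- domain Rule into the semantic domain Int -> Double (quantity to total price).
denote :: Rule -> (Int -> Double)
denote (Flat p)       n = fromIntegral n * p
denote (Bulk t lo hi) n
  | n < t     = fromIntegral n * lo
  | otherwise = fromIntegral n * hi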

...

After several unsuccessful attempts to pursue this path, I came to another "definition" - DSLs are the PLs with the pragmatics massively dominating the semantics :-)

PS: I would be very interested in seeing any references to papers on the denotational semantics of some DSL.

Turing Extender Language (TXL)

TXL has been mentioned briefly before on LtU. There is a recent paper in the LDTA 2004 proceedings about the motivation behind TXL, in which the language designers iteratively modified their grammar to suit the intuitive expectations of their users.

Turing uses an asterisk (*) to denote the upper bound of a parameter array (as in array 1..* of int). Users therefore began to write s(3..*) to mean the substring from position 3 to the end of the string, s(1..*-1) to mean the substring from the first position to the second last, s(*-1..*) to mean the substring consisting of the last two characters, and so on. As these forms evolved, the language was modified to adapt to the users’ expectations.
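
As a gloss on those forms, here is roughly what they compute, written as Haskell one-liners (my own illustration, using 1-based positions as Turing does):

-- s(3..*)   : from position 3 to the end of the string
fromThird :: String -> String
fromThird s = drop 2 s

-- s(1..*-1) : from the first position to the second-last
allButLast :: String -> String
allButLast s = take (length s - 1) s

-- s(*-1..*) : the last two characters
lastTwo :: String -> String
lastTwo s = drop (length s - 2) s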

The approach above does sound fruitful: if we want to achieve higher programmer productivity, rapid iterative design of the tools we use has a real chance of making an impact, rather than the usual path of waiting tens of years before a language becomes a productive medium.

Incidentally, TXL allows more than rapid prototyping of the Turing language itself. Here is an example of how one can override the Pascal grammar.

% Trivial coalesced addition dialect of Pascal
% Based on standard Pascal grammar
include "Pascal.Grm"

% Overrides to allow new statement forms
redefine statement
        ...
    |   [reference] += [expression]
end redefine

% Transform new forms to old
rule main
    replace [statement]
        V [reference] += E [expression]
    by
        V := V + (E)
end rule

The designers of TXL chose Lisp as the model for the underlying semantics; the language uses functional programming with full backtracking for both the parser and the transformer.

Links

You can see slides from the Links meeting here and commentary and pictures here. (Thanks to Ethan Aubin for already starting a thread under the former, and to Ehud Lamm for inviting me to guest blog.)

Ethan Aubin writes:

So why do we need a new language? What cannot be accomplished with existing frameworks? There is a slide following this asking why you can't do this in Haskell or ML, but I don't know why they (or even Java/PHP/etc.) aren't enough.
Let me try to answer this. Links is aimed at doing certain specific things.
  • (a) Generate optimized SQL and XQuery directly from the source code -- you don't need to learn another language, or work out how to partition your program across the three tiers. This idea is stolen from Kleisli. You need to build it into the compiler, so Haskell, Ocaml, or PLT Scheme won't do.
  • (b) Generate Javascript to run in the browser directly from the source code -- you don't need to learn another language, or work out how to partition your program across the three tiers. We're hoping to provide a better way to write AJAX style programs. Again, you need compiler support -- vanilla Haskell, Ocaml, or PLT Scheme can't do this.
  • (c) Generate scalable web interfaces -- where scalable means all state is kept in the client (in hidden fields), not in the server. PLT Scheme and WASH provide the right interface, but they are not scalable in this sense, precisely because making them scalable involves fiddling with the compiler. (Felleisen and company have pointed out the right way to do this, applying the well-known CPS and defunctionalization transformations.)
So that's my argument for a new language.
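
As an aside on point (c), here is a minimal sketch of the defunctionalization idea in Haskell rather than Links (my own illustration, not the Links implementation). Instead of keeping a live continuation on the server between requests, "what to do next" is represented as first-order data that can be serialized into a hidden field and resumed on the next request:

-- Hypothetical two-step interaction: ask for an age, then confirm.
-- Show/Read stand in for whatever real (de)serialization the server would use.
data Resume
  = AskAge  String       -- we asked for the age; remember the name
  | Confirm String Int   -- we asked for confirmation; remember name and age
  deriving (Show, Read)

-- One request/response step: given the state recovered from the hidden
-- field and the new form input, produce the next page and the next state.
step :: Resume -> String -> (String, Maybe Resume)
step (AskAge name) input =
  ("Is " ++ name ++ " really " ++ input ++ " years old?",
   Just (Confirm name (read input)))
step (Confirm name age) _ =
  ("Recorded: " ++ name ++ " is " ++ show age ++ ".", Nothing)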

Is it a good enough argument? Is this enough of an advantage to get folk to move from PHP, Perl, Python? Not clear. I suspect if it is good enough, a major motivating factor is not going to be anything deep, but simply the fact that being able to write everything down in one language instead of three or four will make people's brains hurt less.

Ethan Aubin also writes:

Wadler goes into the FP success stories: Kleisli, XDuce, PLT Scheme (Continuations on the Web), Erlang. If you take the benefits of these individually, you've got a language which solves the 3-tier problem better than what we have now, but I don't think it meets the criteria of "permitting its users to do something that cannot be done in any other way". So, I'd like to ask all the Perl/PHP/ASP/Pythonistas on LtU: what is the killer app that your language cannot handle?
I'd love to see answers to this question!

Premonoidal categories and notions of computation

I am currently working through Premonoidal categories and notions of computation.
Eugenio Moggi, in (Moggi 1991), advocated the use of monads, equivalently Kleisli triples, to model what he called notions of computation.
[...]
Here, we reformulate his theory. We take the base category and the category providing the denotational semantics of the extended language as primitive, and add extra structure and conditions to those and the inclusion functor of the first into the second. This more flexible and somewhat more general framework allows us to model sequential composition of programs directly by sequential composition in our extended category, rather than by a sometimes complex construction involving a monad.
As my knowledge of monads and CT is very limited, it's pretty tough... Does anybody have any opinion on the value of this paper and the importance of premonoidal categories to CS?

Thanks!
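
For concreteness, here is a tiny instance of the monadic setup the abstract refers to - my own toy example, not from the paper. In Moggi's approach, a notion of computation T turns a program from A to B into an arrow A -> T B, and sequential composition becomes Kleisli composition rather than ordinary function composition; the paper's point is that its premonoidal framework models sequential composition directly instead.

import Control.Monad ((>=>))

-- "Computation that may fail" (the Maybe monad) as one notion of computation.
safeDiv :: Int -> Int -> Maybe Int
safeDiv _ 0 = Nothing
safeDiv x y = Just (x `div` y)

safePred :: Int -> Maybe Int
safePred n
  | n > 0     = Just (n - 1)
  | otherwise = Nothing

-- Kleisli composition sequences the two possibly-failing programs.
halveThenPred :: Int -> Maybe Int
halveThenPred = (`safeDiv` 2) >=> safePred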

Links Slides

The speakers at the Links meeting at ETAPS have posted slides from their talks. To me, Xavier Leroy's slides seem especially interesting, but there is something for everyone: transactions, XML, concurrency, types, object-orientation, etc.

JPred -- predicate dispatch for Java

JPred extends Java to include predicate dispatch.

This is quite cool -- predicate dispatch allows partial implementation of methods in abstract classes, so that the base class can provide common argument checking (nulls, negative values, etc.), and it simplifies the maintenance of the jumbo if/case statements used for dispatching on the type of an object (e.g. event dispatch) or on field values.
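
JPred's actual syntax is shown in the paper; as a rough functional analogue of the idea (my own sketch in Haskell, not JPred code), think of branches guarded by predicates over the argument's values, not just its type:

-- A rough analogue of predicate dispatch: which branch runs depends on
-- predicates over the argument's fields as well as on its constructor.
data Event = KeyPress Char | MouseClick Int Int

handle :: Event -> String
handle (KeyPress c)
  | c == 'q'         = "quit"
  | otherwise        = "insert " ++ [c]
handle (MouseClick x y)
  | x < 0 || y < 0   = "ignore out-of-range click"
  | otherwise        = "click at " ++ show (x, y)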

As a language design non-expert, predicate dispatch reminds me of template matching in XPath, which I have found to be incredibly useful for handling special cases like "Chapter 1 shouldn't have a blank page in front of it" without messing up a working template for the more general case. I'd expect that JPred will provide a similar capability.

JPred is implemented using Polyglot, and compiles JPred code into vanilla Java. Todd Millstein, the creator, used JPred to re-implement a complex chunk of event handling code and eliminated several gaps in the original dispatch code.

I'm not able to appreciate the subtler parts of this work, I suspect, but the basic idea is simple enough that I could explain it to my 14-year-old son, and he understood its value.

Why is erlang.org down?

Does anybody know why the site is down?

Also, where can I find active lists/forums on Erlang?

Thanks,
Enrico

Lisp-Stat does not seem to be in good health lately.

The Journal of Statistical Software http://www.jstatsoft.org/ has a Special Volume devoted to the topic: "Lisp-Stat, Past, Present and Future".

In the world of statistics, it appears that XLISP-STAT http://www.stat.uiowa.edu/~luke/xls/xlsinfo/xlsinfo.html has lost out to the S family of languages: S / R / S-plus:

In fact, the S languages are not statistical per se; instead they provide an environment within which many classical and modern statistical techniques have been implemented.

An article giving an excellent overview of the special volume is: "The Health of Lisp-Stat" http://www.jstatsoft.org/v13/i10/v13i10.pdf

Some of the articles describe the declining user base of the language due to defections, whilst other articles describe active projects using XLisp-Stat, often leveraging the power of the language, in particular for producing dynamic graphics.

The S family of languages, originally developed at Bell Labs, has much to recommend it. S is an expression language with functional and class features. However, as Luke Tierney, the original creator and main developer of XLisp-Stat (and now an R developer), explains in "Some Notes on the Past and Future of Lisp-Stat" http://www.jstatsoft.org/v13/i09/v13i09.pdf,

"While R and Lisp are internally very similar, in places where they differ the design choices of Lisp are in many cases superior."
