
Elegant Method of binding a function variable

I'm trying to bring myself up to speed on LISP/Scheme and am having one of those days when my brain cell has migrated to another dimension.

I have the following code (given in Scheme):

(begin
  (define x 5)
  (define (incx n) (+ x n))
  (display (incx 2)) (newline)  ; prints 7
  (define x 4)                  ; rebinds the top-level x
  (display (incx 2)) (newline)) ; prints 6

The first call prints 7, the second 6.

What I want is for the x in incx to be permanently bound to the first definition (5), so that I get 7 from both calls to incx.

I can do it by copying x to another name (_x, say) and using that instead, as sketched below, but I would like a more elegant method, particularly one that doesn't involve creating an alternative name.
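For concreteness, here is that workaround sketched in Scheme (the name _x is arbitrary): a copy of x is taken at definition time, and incx closes over the copy rather than over x itself.

(define x 5)
(define _x x)                   ; snapshot the current value of x
(define (incx n) (+ _x n))      ; close over the snapshot, not x
(display (incx 2)) (newline)    ; 7
(define x 4)                    ; rebinding x no longer affects incx
(display (incx 2)) (newline)    ; still 7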

All ideas gratefully received.

Stuart

Jakarta Commons Monad, er, Chain

From Jakarta Commons Chain:
A popular technique for organizing the execution of complex processing flows is the "Chain of Responsibility" pattern, as described (among many other places) in the classic "Gang of Four" design patterns book. Although the fundamental API contracts required to implement this design pattern are extremely simple, it is useful to have a base API that facilitates using the pattern and (more importantly) encourages composition of command implementations from multiple diverse sources. Towards that end, the Chain API models a computation as a series of "commands" that can be combined into a "chain".
Is it just me, or has the Java world discovered (an ad-hoc, informally-specified, bug-ridden, slow implementation of) monads?
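To make the analogy concrete, here is a minimal sketch in Scheme, my own toy rather than the actual Commons Chain API: a "command" maps a context to an updated context, or to #f to signal failure, and a "chain" threads a context through a list of commands, short-circuiting on failure; i.e., Kleisli-style composition in an ad-hoc Maybe monad.

(define (chain . commands)
  (lambda (context)
    (let loop ((cmds commands) (ctx context))
      (cond ((not ctx) #f)        ; a command failed: short-circuit
            ((null? cmds) ctx)    ; every command succeeded
            (else (loop (cdr cmds) ((car cmds) ctx)))))))

;; Two hypothetical commands over an association-list context:
(define (add-user ctx) (cons '(user . "guest") ctx))
(define (require-user ctx) (if (assq 'user ctx) ctx #f))

((chain add-user require-user) '())  ; => ((user . "guest"))
((chain require-user) '())           ; => #f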

On a more serious note, it would be interesting to hear the opinions of the developers.

a + b * c in Brian Meek's "The static semantics file"

Although my previous post, translational vs. denotational semantics, failed to stimulate any discussion, I hope this one will.

Consider the following excerpt from Brian Meek's well-written The static semantics file.

[...]
term = factor | term, multiplying op, factor;
and
term = factor, [multiplying op, factor];
apparently mean the same (square brackets = zero or more repetitions). However, the recursive version better expresses the associativity from left to right. [...]

[...] The order of evaluation of an expression like a + b * c is undoubtedly semantic (linked to code generation, see later) but the use of recursion above only hints at the semantics (to use a phrase from a later electronic correspondent); it is descriptive, not prescriptive. The difference is that the expression can be generated by either rule (or by one implying right to left associativity). [...] The semantics gets put in at code generation and that can be done from any of the equivalent definitions. [...] [I]t appears to me to be the nearest approach among the examples put forward to anything which could be termed "static semantics"; it is perhaps a borderline case.

This seems a bit "off" to me. First of all, how "a + b * c" is parsed is a question of operator precedence, not operator associativity (associativity is what settles an expression like "a - b - c"). Second, and more importantly, it seems to me that parsing is clearly a syntactic issue, unless you view the semantics of a language as being defined with respect to its concrete rather than its abstract syntax.

Such a view can be reconciled with the more traditional "semantics is defined with respect to abstract syntax" view in the following manner. Let language L's "broad" semantic function be defined as the composition of L's parser with L's "narrow" semantic function. Another (perhaps weirder) way to look at it is to view the parser as defining a language in its own right, with the abstract syntactic domain of L being the parser's semantic domain. To put this all in Haskell-ish notation,

parser :: ConSynDom -> AbsSynDom
narrow :: AbsSynDom -> SemDom
broad  :: ConSynDom -> SemDom
broad  = narrow . parser
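To make this concrete, here is a toy instance in Scheme (my own sketch, not from Meek): the concrete syntactic domain is a flat token list, the abstract syntactic domain is an s-expression tree, and the semantic domain is numbers. It is the parser that decides that 1 + 2 * 3 means 1 + (2 * 3); narrow merely evaluates whatever tree it is handed.

;; parser :: ConSynDom -> AbsSynDom
;; A tiny parser for + and * over number tokens; precedence is
;; decided here, on the syntactic side of the line.
(define (parser tokens)
  (define (parse-term toks)                ; term = factor {* factor}
    (let loop ((tree (car toks)) (rest (cdr toks)))
      (if (and (pair? rest) (eq? (car rest) '*))
          (loop (list '* tree (cadr rest)) (cddr rest))
          (cons tree rest))))
  (define (parse-expr toks)                ; expr = term {+ term}
    (let loop ((result (parse-term toks)))
      (let ((tree (car result)) (rest (cdr result)))
        (if (and (pair? rest) (eq? (car rest) '+))
            (let ((next (parse-term (cdr rest))))
              (loop (cons (list '+ tree (car next)) (cdr next))))
            tree))))
  (parse-expr tokens))

;; narrow :: AbsSynDom -> SemDom
(define (narrow tree)
  (if (number? tree)
      tree
      (let ((op (if (eq? (car tree) '+) + *)))
        (op (narrow (cadr tree)) (narrow (caddr tree))))))

;; broad = narrow . parser
(define (broad tokens) (narrow (parser tokens)))

(parser '(1 + 2 * 3))   ; => (+ 1 (* 2 3))
(broad  '(1 + 2 * 3))   ; => 7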

Another way of looking at this is that syntax can be viewed as merely discriminating between valid and invalid sentences, or it can be viewed as also including the parsing of the valid ones. The former, "boolean" role for syntax pushes parsing issues somewhere else: perhaps into pragmatics, perhaps into semantics, or perhaps into some ill-defined area called "static semantics." This last option is of course what Meek is writing about.

So, if all we're looking for from a grammar is whether it can generate a sentence or not (the "boolean" role), then, as Meek mentions, it need not deal with ambiguities arising from issues like operator precedence and associativity. Maybe this "boolean" role is what Meek calls a "descriptive" role: the grammar merely describes what valid sentences look like. But I'm having trouble seeing how a "prescriptive" role includes parsing. Would this mean that an unambiguous grammar, i.e. one capable of parsing, is "prescriptive" in the sense that it tells you where you "should" have put parentheses? That seems like a stretch.

Anyway, my final take is this: I think I agree with Meek in the big picture. Different languages draw the syntax/semantics line in different places, but it is always there, and there is nothing useful in between called "static semantics." One of the ways in which you can move the line so that the semantics is bigger is to define the syntax ambiguously with regard to operator precedence or associativity. Then these issues must be defined by the semantics. In the extreme, you could define the syntax of a traditional textual language merely by the character set its programs are expected to be in, leaving everything else to semantics. But in practice this is not a useful way to define a language.