... or how is the title of this paper pronounced?
We make the notion of scope in the lambda-calculus explicit. To that end, the syntax of the lambda-calculus is extended with an end-of-scope operator [adbmal], matching the usual opening of a scope due to lambda. Accordingly, beta-reduction is extended to the set of scoped lambda-terms by performing minimal scope extrusion before performing replication as usual. We show confluence of the resulting scoped beta-reduction. Confluence of beta-reduction for the ordinary lambda-calculus is obtained as a corollary, by extruding scopes maximally before forgetting them altogether. Only in this final forgetful step, alpha-equivalence is needed. All our proofs have been verified in Coq.
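For readers who want to play along, here is a toy sketch (mine, not the paper's Coq development) of scoped lambda-terms in Python: lambda opens a scope, adbmal closes it, and a substitution that reaches an adbmal for its own variable is simply discarded, since the variable is guaranteed not to occur beyond that point. This deliberately ignores the paper's scope-extrusion rules and alpha-renaming; all names are assumed distinct.

```python
from dataclasses import dataclass


@dataclass
class Var:
    name: str


@dataclass
class Lam:          # lambda x. body -- opens the scope of x
    name: str
    body: object


@dataclass
class Adbmal:       # adbmal x. body -- ends the scope of x: x is not free in body
    name: str
    body: object


@dataclass
class App:
    fun: object
    arg: object


def subst(term, x, val):
    """Capture-naive substitution term[x := val]."""
    if isinstance(term, Var):
        return val if term.name == x else term
    if isinstance(term, Lam):
        if term.name == x:              # inner lambda shadows x
            return term
        return Lam(term.name, subst(term.body, x, val))
    if isinstance(term, Adbmal):
        if term.name == x:              # scope of x has ended:
            return term.body            # drop both substitution and marker
        return Adbmal(term.name, subst(term.body, x, val))
    if isinstance(term, App):
        return App(subst(term.fun, x, val), subst(term.arg, x, val))
    raise TypeError(term)


def beta(term):
    """One beta step at the root, if the term is a redex."""
    if isinstance(term, App) and isinstance(term.fun, Lam):
        return subst(term.fun.body, term.fun.name, term.arg)
    return term
```

For instance, `beta(App(Lam("x", Adbmal("x", Var("y"))), Var("z")))` yields `Var("y")`: the argument is thrown away, because the scope of `x` was already closed.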
While playing with my own lambda-machine (a derivative of CEK, in Java) I decided that I would like to control scope better, so I went looking and found this paper.

See also Lambdascope previously mentioned on LtU.

Comment viewing options

Select your preferred way to display the comments and click "Save settings" to activate your changes.

Ambdal indeed

... or how is the title of this paper pronounced?

You guessed it right, see
Dimitri's pages.

Side conditions or explicit "descoping"?

I wonder why this approach of explicit scope control is not more widely used, as to me it is clearly more user-friendly than side conditions on freeness of variables (see page 13), which just complicate every other paper on calculi. Maybe the missing bibliography would show that people do work in this direction.

Also, is it just me, or does providing graphical examples improve the reading experience (at least for rewriting of lambda-expressions)? Well, I guess it depends on the background of the reader... Note that I don't want the entire paper to be composed of pictures; I know where to find Dilbert when I want to :)

Sounds like garbage-collection

This adbmal operator looks very much like "delete" (\daleth) in my pi-calculus with garbage-collection.

Thanks for the reference. Looks like hours of fun ahead playing with unbinders in different calculi.

For a quick reference

David is referring to Recovering resources in the pi-calculus.

Full disclosure: David wrote that paper and I linked it, so we are biased :)

Dynamic vs. static... oh, never mind

At first glance I would say that daleth has a distinct dynamic flavor, while adbmal feels very static (lexical, so to speak - now if Frank is reading this he will accuse me of making unscientific statements :) ).

It would be interesting to explore their relatedness, though. Can you share your findings, please?


While I am at it (unscientific statements), I would add that CINNI - A Calculus of Indexed Names and Named Indices - and explicit substitutions might be relevant as well.

Discontinuous scope

I have yet to understand how deep the analogy with adbmal goes, but Jaakko Hintikka and Gabriel Sandu identified the continuous scope of a quantifier as a major mistake of Frege/Russell/Whitehead, and suggested fixing it in their independence-friendly (IF) logic. See A revolution in logic? for more details. I feel the paper explains to me why I have subconsciously been gravitating towards game semantics recently.
[on edit: aha, was already discussed on LtU

I would argue that to me the notation in the paper was indeed intuitive, but perhaps because I already have read several papers on game semantics :) ]


Last time this paper was discussed, I must have been too busy to actually read it, or maybe too turned off by the generally pretty low opinions.

Having read it, I see why it was criticized, but frankly I find the adversarial style pretty entertaining, and clearly there actually is something to IF FO logic. I agree that claims that the notation is "intuitively obvious" are a bit of a stretch (and the incredibly ambiguous parenthesized notation is far, far worse). But if you're willing to read to the third page, an interesting thing happens in equations 7 and 8 with the interleaved quantifiers. Either of the independence annotations can be eliminated, but not both, and the interlocked pattern that prevents lifting in this case really reminds me of something, but I can't quite put my finger on it.

In any case, I'm really not equipped to judge the claim that IF logic is, in some sense, what we should have been using all along. But I think the idea that this exposes an underlying non-compositionality in first-order logic with respect to quantifier dependence warrants careful thought, and his analysis of the axiom of choice is interesting, if not thoroughly compelling.

But what really grabbed me in this paper is the comments about Goedel's incompleteness theorem and mathematical intuition. The authors appear to contradict themselves, first saying "[It] is not a ... limitation to what can be done by means of logic or mathematics. It is a limitation to what can be done by means of computers." This seems to tread a little close to mysticism for my taste, but within two paragraphs the authors embark on one of the most thorough debunkings of the idea of "mathematical intuition" that I've ever seen. Every attempt I've seen to use Goedel's proof to demonstrate an intrinsic human superiority over computers relies on just such a mystical notion of intuition, so I find these two points of view a little hard to reconcile. Maybe I should take a look at On Goedel...

I suppose some of the flaws in this paper might be inevitable when you try to cover this much ground in ten pages. Anyway, I'd like to learn more about IF logic, so in that sense I suppose the paper is a success.

Not a contradiction at all, I don't think

You elided the subject of that quote, which I think is key. What they say is:

[T]he inevitable incompleteness that is found in logic and in the foundations of mathematics is not a symptom of any intrinsic limitation to what can be done by means of logic or mathematics. It is a limitation to what can be done by means of computers.

I don't see this as saying anything in particular about mathematicians and I suspect that the authors would agree with your critique of pseudomystical misappropriations of Goedel's proof. As you point out, they go on to take a hatchet to the notion of mathematical intuition.

I see it rather as a counter to the equally cryptopopulist Douglas Hofstadter (and ilk), and sentiments such as this one from Hofstadter's profile of Goedel in Time magazine:

KURT GODEL He turned the lens of mathematics on itself and hit upon his famous "incompleteness theorem"--driving a stake through the heart of formalism. (quoted from here)

There is something very creepy about regarding formalism as akin to a vampire, and it seems worthwhile refuting such drivel, which Hintikka and Sandu do with panache:

Because we can formulate descriptively complete axiomatic theories, the task of a mathematical or scientific theorist can still be thought of as a derivation of logical consequences from the axioms. And such a derivation can still be considered as a purely logical proof. The requirement that characterizes its different steps is still the same as of old, viz. truth preservation. All the former rules concerning such truth preservation remain valid, and new ones can in principle always be discovered model-theoretically. Each legitimate step in a proof is still an instance of a purely formal, truth-preserving pattern of inference. (Emphasis added)

So formalism is still very much alive in Hintikka's view, and furthermore has no need to cower from exposure to sunlight.

Still not sure...

I agree with everything you say, and certainly we agree on the main thrust of Hintikka and Sandu's argument. But I'm still a bit confused...

Regardless of the subject of the original quote, they still imply a distinction between "what can be done by means of logic or mathematics" and "what can be done by means of computers." It is precisely this distinction that I'm concerned about. What does this distinction look like? If they're defining "what can be done by means of computers" as trivial recursive enumeration of theorems in a formal system, then yes, I'd say we've established a limitation on what can be done by means of that. But I think it's important to distinguish between that and what can, in principle, be done by means of computers. If they're suggesting a more fundamental formal limit to the power of computation, then I'm a little suspicious. In other words, precisely which aspects of "logic or mathematics" cannot be done by means of computers, and why not?

I guess I just find a fundamental distinction between "all of mathematics" and "what computers can do" a little bit problematic, and in the absence of any attempt to describe or justify that distinction, I find it tantamount to invoking mathematical intuition.

(I've heard some rather implausible arguments about quantum effects in the human brain, and while I don't find them too convincing, at least it's a real attempt to characterize and explain the allegedly "special" nature of cognition.)

No appeal to intuitionism

There are uncountably many real numbers. However, both the set of Turing machines and the set of English statements are countable. Let's say that a real number R is computable if there is a Turing machine which, given as input an integer k, will terminate having output a number R' such that |R' - R| < 1/2^k.

Clearly, there are uncountably many real numbers which are neither computable nor describable. Every computable number is describable, by virtue of the fact that every Turing machine is describable (as a number, for example), so that I can unambiguously refer to "the number computed by the Turing machine x". However, there are real numbers which are describable and not computable. One of them is the sum of 1/2^k over all k such that k is the number of a Turing machine that halts.

So English is more descriptive than a Turing machine. That is simply a statement of what can be done by means of computers, no mysticism or intuitionism involved.

Exactly what do you mean by s

Exactly what do you mean by saying that the real numbers are uncountable but that Turing machines or English statements are not?

That the first set is not enumerable and that the other two are

Any English statement can be expressed as a finite string of a finite alphabet. Hence it is enumerable. A similar result holds for Turing machines, as Goedel points out.
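The enumerability claim is easy to make concrete: list strings by length, so every finite string over a finite alphabet appears at some finite position. A minimal sketch in Python:

```python
from itertools import count, product


def all_strings(alphabet):
    """Enumerate every finite string over a finite alphabet, shortest
    first; each string shows up at a finite position, so the set is
    countable."""
    for n in count(0):
        for letters in product(alphabet, repeat=n):
            yield "".join(letters)


# the first few strings over the alphabet {a, b}
gen = all_strings("ab")
first = [next(gen) for _ in range(7)]
# first == ['', 'a', 'b', 'aa', 'ab', 'ba', 'bb']
```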

Cantor's diagonalization demonstrates that the set of real numbers is not enumerable.
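The diagonal construction itself fits in one line: given any enumeration of infinite binary sequences, flip the i-th digit of the i-th sequence, producing a sequence that the enumeration misses. A sketch (the toy enumeration `enum` is just for illustration):

```python
def diagonal(row):
    """Cantor's diagonal: row(i)(j) is digit j of the i-th enumerated
    binary sequence; the returned sequence differs from sequence i at
    position i, so it appears nowhere in the enumeration."""
    return lambda n: 1 - row(n)(n)


# a toy enumeration: sequence i is constantly i mod 2
enum = lambda i: (lambda j: i % 2)
d = diagonal(enum)
# d disagrees with every enumerated sequence on the diagonal
```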

mm, yeah I seem to be confusi

mm, yeah I seem to be confusing real numbers with decimal numbers.

(And btw, that holds for written English statements; spoken English statements might not be enumerable, depending on the exact nature of some quantum effects.)

understanding computers

more descriptive than a Turing machine. That is simply a statement of what can be done by means of computers

The given definition of computable real numbers is a fine one. However, it places the computer in an extremely limited role. The output of a given Turing machine is allowed meaning only as long as it is a nicely convergent sequence of real numbers.

In practice, computers communicate with humans using both formal logic and English.
When Hintikka and Sandu say,

It is a limitation to what can be done by means of computers.

they must be using "computers" in the limited sense (including what Matt calls "trivial recursive enumeration of theorems").
That lack of clarity is overshadowed by their defense of formalism, which is right on target.

the limited sense was deliberate

I wasn't trying to make any sort of profound statement with my example; simply trying to find a meaning for "a limitation to what can be done by means of computers" which did not lead to an assertion of intuitionism. (Cf. Anatol Rapoport's advice in "Fights, Games, and Debates": when faced with a disagreement, start by trying to find a definition of terms which allows you to agree; if you can do so, it reduces the argument to a question of definition.)

I suspect that Hintikka and Sandu were not aiming at profundity in the original quote either; I read them as saying that the fact that a "trivial recursive enumeration of theorems" will not generate every true statement does not imply the death of formalism, by means of silver stakes or otherwise.

I think we all agree, anyway (modulo some redefinition of terms :)

In my lambda experiment I dem

In my lambda experiment I demonstrate that the rule and variable system of the language machine in effect contains the lambda calculus.

In this context it is worth noting that the variable scope rules of the language machine were in effect used directly to obtain the effects of beta-substitution. The treatment of currying is described here (adapted from an earlier posting).

The variable scope rules of the language machine relate reference scope directly to the left- and right- side nesting structure of rule applications as shown in the lm-diagram - in effect to the 'ghost of the parse tree'.

Curiously, the simplicity and efficiency of the whole system of variable binding and reference in the language machine arose directly from the requirement that variable bindings be created going forward at minimal cost, and undone (if necessary) at no cost (except for garbage collection) in case of a backtrack, which is sometimes required to recover from partial ambiguity.
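This is not the language machine's actual code, but the standard way to get cheap forward binding and near-free undo on backtrack is a trail: record each binding as it is made, and on backtrack pop the trail back to a saved mark. A hypothetical sketch in Python:

```python
class Bindings:
    """Trail-based variable bindings: bind cheaply going forward,
    undo in O(1) per binding on backtrack. A generic sketch of the
    technique, not the language machine's implementation."""

    def __init__(self):
        self.env = {}       # name -> stack of values (innermost last)
        self.trail = []     # names bound since the start

    def bind(self, name, value):
        self.env.setdefault(name, []).append(value)
        self.trail.append(name)

    def lookup(self, name):
        return self.env[name][-1]

    def mark(self):
        """Save a point to backtrack to."""
        return len(self.trail)

    def undo_to(self, mark):
        """Backtrack: pop every binding made since the mark."""
        while len(self.trail) > mark:
            self.env[self.trail.pop()].pop()


# usage: bind, mark, shadow, then backtrack
b = Bindings()
b.bind("x", 1)
m = b.mark()
b.bind("x", 2)      # shadows the earlier binding
b.undo_to(m)        # backtrack restores the old binding of x
```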

It turned out that the trick for efficiency in creating and destroying variable bindings and for efficient reference was to stand in the twigs and look back.

The point is that a completely direct and natural approach to implementing unrestricted recognition grammars augmented by variable bindings produces a system for variable scope that almost implements itself - it falls like manna from heaven.