Programming in Lambda Calculus

Many texts on untyped lambda calculus are heavily loaded with math.

I have tried a different approach, looking at lambda calculus from a programming point of view. The results are posted in a little paper on this website.

As a special add-on, I show how to systematically construct algebraic datatypes in lambda calculus.
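To give a taste of the idea, here is the classic Church-style encoding of algebraic datatypes, written with Python lambdas so it actually runs. This is a minimal sketch, not necessarily the exact construction in the paper, and the names (`nothing`, `just`, `nil`, `cons`) are only illustrative:

```python
# Church-style encoding: a value of a datatype with constructors C1..Ck
# becomes a function taking one handler per constructor; "pattern
# matching" is then just function application.

# data Maybe a = Nothing | Just a
nothing = lambda if_nothing, if_just: if_nothing
just = lambda x: (lambda if_nothing, if_just: if_just(x))

def show(m):
    # Supply one branch per constructor, in constructor order.
    return m("Nothing", lambda x: f"Just {x}")

print(show(nothing))   # Nothing
print(show(just(42)))  # Just 42

# data List a = Nil | Cons a (List a), encoded as its own right fold:
nil = lambda if_cons, if_nil: if_nil
cons = lambda h, t: (lambda if_cons, if_nil: if_cons(h, t(if_cons, if_nil)))

xs = cons(1, cons(2, cons(3, nil)))
print(xs(lambda h, acc: h + acc, 0))  # 6: the list folded with + from 0
```

The same recipe scales to any sum-of-products type: one handler argument per constructor, with recursive positions already folded.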

Comments, hints, etc. are welcome.

History

This sounds like a fun topic, and I look forward to reading it properly (just haven't had time yet). So I'm nit-picking, I s'pose; I wouldn't exactly have said Church created the lambda calculus for the purpose of exploring the limits of computability and decidability. I fear I've got too many details rattling around in my head, putting me at a disadvantage; I don't know what to say briefly.

Fwiw. Church created a formal logic, which he published in 1931, in line with Hilbert's proposal to use meta-mathematics to study formal systems in order to understand better what was going wrong with the foundations of mathematics at the time. Hilbert's technique was (of course) *hugely* successful, in that Gödel essentially used it to show it's not possible to do the particular thing Hilbert was hoping for (to use a smaller formal logic to prove consistency of a larger superset of itself).

Church had, afaict, two main purposes with his 1931 logic. His declared purpose had to do with the basic axioms that were causing trouble at the time: general sentiment was that the problem was coming from the Law of the Excluded Middle (basically: if a proposition is provably not false, then it's true), but Hilbert felt the Law of the Excluded Middle is just too useful to give up without a fight, hence his desire to explore what was going wrong and understand it better. If you carefully examine the proof of Russell's Paradox, though, it also depends very heavily on another axiom, reductio ad absurdum (proof by contradiction). So Church was, officially, exploring what would happen if, instead of dropping the Law of the Excluded Middle, you only tweak it a little, and mainly weaken reductio ad absurdum.

I reckon he was also doing something else: when you have a chance to build a system like that, officially for one purpose, you're likely to take advantage of the opportunity to try out whatever other things you've had in mind. (I did that when I was constructing vau calculi for my dissertation, which either makes me more able to recognize the syndrome, or more likely to read it into what Church was doing.) Church's particular obsession seems to have been variable binding. In his 1931 paper, he spent just a few words mentioning reductio ad absurdum, ostensibly the whole point of the exercise, before launching into a massive rant about how terrible unbound variables are. He had this lovely idea, to have only one way to bind variables, that would therefore define what a variable must mean; his one way to bind variables was called lambda (for reasons which are lost to history), and he was able to make universal and existential quantification higher-order operators, effectively allowing them to piggy-back off the variable binding of lambda.

In true Hilbertian fashion, though, when you studied a formal system, it had /no meaning/ during your study; so Church remarked in his 1931 paper that if his formal system didn't turn out to be useful as a logic, maybe it would be useful as something other than a logic. In fact, a paradox/antinomy was pretty quickly found in his logic, and by 1935 they'd decided to try to back off far enough to establish that a smaller subset of his logic was consistent. The subset they chose simply omitted all the operators except lambda; the usual generalized notion of consistency, for formal systems that didn't even have a negation operator, was that not all terms are equal to each other, so Church and his student J. Barkley Rosser proved that each term in the system (Church's logic minus all non-lambda operators) is reducible to at most one irreducible term, a property which implies consistency. Their theorem is now called the Church-Rosser theorem.
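That confluence property can be made concrete with a toy reducer. A minimal sketch in Python (the tuple term encoding and the two strategies, `step_normal` and `step_arg_first`, are my own illustration, not from Church and Rosser's proof): two deliberately different reduction orders applied to the same term end at the same normal form.

```python
# Terms are tuples: ('var', x), ('lam', x, body), ('app', fun, arg).

def subst(t, x, v):
    """Substitute v for free variable x in t. Capture-avoidance is
    elided: the demo only substitutes closed terms, where it is safe."""
    if t[0] == 'var':
        return v if t[1] == x else t
    if t[0] == 'lam':
        return t if t[1] == x else ('lam', t[1], subst(t[2], x, v))
    return ('app', subst(t[1], x, v), subst(t[2], x, v))

def step_normal(t):
    """Contract the leftmost-outermost redex; None means t is normal."""
    if t[0] == 'lam':
        b = step_normal(t[2])
        return None if b is None else ('lam', t[1], b)
    if t[0] == 'app':
        f, a = t[1], t[2]
        if f[0] == 'lam':                      # outer redex first
            return subst(f[2], f[1], a)
        r = step_normal(f)
        if r is not None:
            return ('app', r, a)
        r = step_normal(a)
        return None if r is None else ('app', f, r)
    return None

def step_arg_first(t):
    """A different path: reduce inside the argument before the outer redex."""
    if t[0] == 'lam':
        b = step_arg_first(t[2])
        return None if b is None else ('lam', t[1], b)
    if t[0] == 'app':
        f, a = t[1], t[2]
        r = step_arg_first(a)                  # argument first
        if r is not None:
            return ('app', f, r)
        if f[0] == 'lam':
            return subst(f[2], f[1], a)
        r = step_arg_first(f)
        return None if r is None else ('app', r, a)
    return None

def normalize(t, step):
    while (n := step(t)) is not None:
        t = n
    return t

I = ('lam', 'w', ('var', 'w'))                  # λw.w
K2 = ('lam', 'x', ('lam', 'y', ('var', 'y')))   # λx.λy.y, ignores its argument
term = ('app', K2, ('app', I, I))               # (λx.λy.y) ((λw.w) (λw.w))

print(normalize(term, step_normal))     # ('lam', 'y', ('var', 'y')), one step
print(normalize(term, step_arg_first))  # same normal form, longer path
```

Here `step_normal` discards the argument immediately, while `step_arg_first` normalizes the argument before touching the outer redex; Church-Rosser is the guarantee that such divergent paths can never arrive at two different normal forms.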

Thanks for the hint

Thanks for the hint. Maybe I was a bit sloppy in saying that Church's purpose in inventing the lambda calculus was to explore the limits of computability.

As you mentioned, Church did a lot of work in formal logic and not only in computability.

But I assume we agree that the major achievement of Church's lambda calculus was to demonstrate the existence of noncomputable and undecidable problems. Church's lambda calculus and Turing's automatic machine addressed the "Entscheidungsproblem" at nearly the same time (Church a little earlier) by very different means, and the two formalisms were soon proved to be equal in expressive power.

The literature on computability and decidability focuses much more on Turing machines than on lambda calculus. That's a pity, because in my opinion lambda calculus is much more beautiful than Turing machines.

Elegance

As I understand it, he started in logic (I love this reminiscence by a former student) and, yes, certainly turned very much toward computation.

My take these days on lambda-calculus and Turing machines: of the three main models of computation of the 1930s, proving general recursive functions equi-powerful with lambda-calculus left Church convinced of the full computational generality of both, but didn't really convince Gödel; whereas proving them equi-powerful with Turing machines did convince Gödel. I find this deeply insightful about Turing machines. They're nuts-and-bolts; you can see what they're doing, and believe in it.

Since about 1960 (if not earlier) people had wanted to use lambda-calculus to define the semantics of programming languages, and when Plotkin finally cracked that problem in the mid-seventies, he did it by setting up correspondence theorems between a lambda-style calculus (which has the mathematical elegance you want for reasoning about the semantics) and a deterministic operational semantics (mathematically ugly but, like the Turing machine, manifestly describing the very thing whose semantics you want to study).