
Church’s Thesis and Functional Programming

David Turner gives a condensed summary of the lambda calculus and functional programming in this paper on Church’s Thesis and Functional Programming.

The lambda-calculus, which Church developed during the period of convergence from which the Thesis emerged, has influenced almost every aspect of the development of programming and programming languages. It is the basis of functional programming, which after a long infancy is entering adulthood as a practical alternative to traditional ad-hoc imperative programming languages. Many important ideas in mainstream programming languages—recursion, procedures as parameters, linked lists and trees, garbage collectors—came by cross fertilization from functional programming. Moreover the main schools of both operational and denotational semantics are lambda-calculus based and amount to using functional programming to explain other programming systems.

The original project from whose wreckage by paradox lambda-calculus survived, to unify logic with an account of computable functions, appears to have been reborn in unexpected form, via the propositions-as-types paradigm.

Many of the PLT topics mentioned on LtU are covered: PLT in 20 pages or less. I'm still hoping for someone to publish a pop-PLT book that takes something like these 20 pages and turns them into an 800-page novel. In the meantime, this is a nice roadmap for PLT that helps provide the connections between such things as Curry-Howard, Coq, and System F. (About as close to a cheat sheet for LtU as I've come across.)
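For readers new to the propositions-as-types paradigm Turner alludes to, here is a minimal Haskell sketch (our own illustration, not from the paper): under Curry-Howard, implication is read as the function arrow and conjunction as the pair type, so a well-typed program is a proof of the proposition its type expresses.

```haskell
-- A minimal sketch of propositions-as-types (names are ours, for
-- illustration): propositions are read as types, and a proof of a
-- proposition is a program inhabiting the corresponding type.

-- Implication is the function arrow, conjunction is the pair type,
-- disjunction is Either. A proof that A /\ B implies B /\ A:
andComm :: (a, b) -> (b, a)
andComm (x, y) = (y, x)

-- A proof that A \/ B implies B \/ A:
orComm :: Either a b -> Either b a
orComm (Left x)  = Right x
orComm (Right y) = Left y

main :: IO ()
main = print (andComm (1 :: Int, "QED"))
```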

Light Logics and Optimal Reduction

Not sure this isn't a tad too technical for the front page, but posting has been light, so...

Baillot, Coppola & Dal Lago have an arXived preprint, "Light Logics and Optimal Reduction: Completeness and Complexity":

Typing of lambda-terms in Elementary and Light Affine Logic (EAL, LAL, resp.) has been studied for two different reasons: on the one hand the evaluation of typed terms using LAL (EAL, resp.) proof-nets admits a guaranteed polynomial (elementary, resp.) bound; on the other hand these terms can also be evaluated by optimal reduction using the abstract version of Lamping's algorithm. The first reduction is global while the second one is local and asynchronous. We prove that for LAL (EAL, resp.) typed terms, Lamping's abstract algorithm also admits a polynomial (elementary, resp.) bound. We also show its soundness and completeness (for EAL and LAL with type fixpoints), by using a simple geometry of interaction model (context semantics).

We've 'known' that Lamping's algorithm implements Lévy-optimality since Lamping 'proved' it in 1989, but unfortunately his argument has holes, and it has proven hard to fix them, despite there being a fairly accessible geometric intuition supporting soundness and completeness. Harry Mairson has done worthy missionary work advertising the importance of this gap and working towards filling it. I haven't yet worked my way through this proof, but it looks like it belongs to a very attractive class of argument, namely those that attempt to forge a Curry-Howard-like correspondence between Lamping graphs and proof nets of weakened linear logics. Recommended reading for LtUers who enjoy a challenge!
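For readers wondering what is at stake in "optimality": naive beta reduction can copy a redex and so repeat work, and even call-by-need sharing stops at lambdas. A toy Haskell sketch of this motivation (ours, far short of Lamping's actual graph machinery):

```haskell
-- A toy illustration (ours, not Lamping's algorithm) of the work
-- duplication that optimal reduction is designed to avoid. Forcing
-- a traced thunk prints a message, so we can count reductions.

import Debug.Trace (trace)

-- Call-by-name behaviour: the argument of (\x -> x + x) is copied
-- syntactically, so the redex is reduced twice.
byName :: Int
byName = trace "reducing e" (2 + 3) + trace "reducing e" (2 + 3)

-- Call-by-need behaviour: one shared thunk, reduced at most once.
byNeed :: Int
byNeed = let e = trace "reducing e" (2 + 3) in e + e

main :: IO ()
main = do
  print byName   -- "reducing e" appears twice
  print byNeed   -- "reducing e" appears once

-- Lévy-optimal reduction goes further still: Lamping's graphs share
-- redexes even when they occur under a lambda, which call-by-need
-- cannot do.
```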

Why is there not a PL with a mathematical type system?

Hello dear LtU community,

I want to know why there is no programming language whose type system is modelled after, or "simulates", the mathematical world.

Let me briefly explain what I mean by a "mathematical type system":
A type system which has a very basic type "set"; built on this type, the more complex type "relation"; and built on that in turn, the type "function".
This type system would then have, instead of "integer"s or "real"s, the types "natural number", "irrational number" and "complex number".
The type system would be made complete by the operations on these types:
"set" would have: union, intersection, relative complement, and also subset and power set (and some more);
"relation" would have: the operations from relational algebra (and some more);
"function" would have: composition, preimage and inverse (and some more);
the number types would get the algebraic operations on them.
(Maybe it is presumptuous, but I think with this type system you would have some 80-90% of mathematics "captured" in your PL.)
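One can in fact prototype a fair part of this description in an existing language. Below is a rough Haskell sketch (all names ours, a toy rather than a real proposal) of the set/relation/function hierarchy just described; the closing comment hints at where the idea runs into trouble, which is a large part of the answer to the question.

```haskell
-- A rough sketch (our names) of the hierarchy described above:
-- sets as membership predicates, relations and functions on top.

newtype Set a = Set (a -> Bool)        -- a set is its membership test

member :: a -> Set a -> Bool
member x (Set p) = p x

union, intersect, complementIn :: Set a -> Set a -> Set a
union        (Set p) (Set q) = Set (\x -> p x || q x)
intersect    (Set p) (Set q) = Set (\x -> p x && q x)
complementIn (Set p) (Set q) = Set (\x -> p x && not (q x))  -- relative complement

newtype Rel a b = Rel (a -> b -> Bool)  -- a relation is a predicate on pairs

converse :: Rel a b -> Rel b a
converse (Rel r) = Rel (\y x -> r x y)

-- Mathematical-function operations for ordinary Haskell functions:
preimage :: (a -> b) -> Set b -> Set a
preimage f (Set p) = Set (p . f)

compose :: (b -> c) -> (a -> b) -> a -> c
compose g f = g . f

evens, small :: Set Int
evens = Set even
small = Set (< 10)

main :: IO ()
main = print (map (`member` (evens `intersect` small)) [2, 4, 11, 3])
-- [True,True,False,False]

-- The catch: with sets-as-predicates, subset, set equality, and
-- "is this relation actually a function?" are undecidable in
-- general, and power set needs a decidable membership test on
-- Set (Set a). This is one reason real languages give you
-- "integer" rather than arbitrary mathematical sets.
```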

Like anyone who looks at the scientific world, I see that the world is modelled using mathematics. Natural scientists use it, engineers use it, and there is, for example, mathematical finance. But we programmers don't use mathematics. We translate mathematical objects like "natural numbers" into "integers", and we don't have a type named "function" as it is defined in mathematics. Our functions don't have derivatives. Programmers don't have set theory at their fingertips, nor anything like analysis.

I can imagine some advantages if we had such a type system:
a) everyone in the world could learn programming by learning mathematics;
b) anyone who can come up with a mathematical model for their field or their needs could just write a program;
c) I think it would be far easier to let the programmer "draw" a model and have the computer calculate the "source".

Yet we don't have such a type system, even though intelligent and experienced PL developers have given us all the widely known PLs (C/C++, Haskell, assembler and so on). And this is the reason for my question: what is the reason that no one has come up with this kind of type system? What is it that I am overlooking; what makes it impossible to build such a type system into a PL?

regards
Gueven

Modular Verification of Assembly Code with Stack-Based Control Abstractions

by Xinyu Feng, Zhong Shao, Alexander Vaynberg, Sen Xiang, Zhaozhong Ni

Published in Proc. 2006 ACM SIGPLAN Conference on Programming Language Design and Implementation (PLDI'06), Ottawa, Canada, pages 401-414, June 2006. ©2006 ACM.

Abstract

Runtime stacks are critical components of any modern software—they are used to implement powerful control structures such as function call/return, stack cutting and unwinding, coroutines, and thread context switch. Stack operations, however, are very hard to reason about: there are no known formal specifications for certifying C-style setjmp/longjmp, stack cutting and unwinding, or weak continuations (in C--). In many proof-carrying code (PCC) systems, return code pointers and exception handlers are treated as general first-class functions (as in continuation-passing style) even though both should have more limited scopes.

In this paper we show that stack-based control abstractions follow a much simpler pattern than general first-class code pointers. We present a simple but flexible Hoare-style framework for modular verification of assembly code with all kinds of stack-based control abstractions, including function call/return, tail call, setjmp/longjmp, weak continuation, stack cutting, stack unwinding, multi-return function call, coroutines, and thread context switch. Instead of presenting a specific logic for each control structure, we develop all reasoning systems as instances of a generic framework. This allows program modules and their proofs developed in different PCC systems to be linked together. Our system is fully mechanized. We give the complete soundness proof and a full verification of several examples in the Coq proof assistant.

Of interest to those working on verification and the formal theory of stacks. Anyone else interested in seeing a new category created for assembly and intermediate languages?
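To make the abstract's central observation concrete: a return code pointer really does behave like a continuation, just one with a sharply limited scope. A hedged Haskell sketch (ours, using the mtl package's Control.Monad.Cont, and far removed from the paper's assembly-level setting):

```haskell
-- An illustration (ours, in Haskell rather than assembly) of the
-- abstract's point: a return code pointer behaves like a
-- continuation, but one with a very limited scope: it is invoked
-- at most once, and only to resume the caller.

import Control.Monad.Cont  -- from the mtl package

-- Continuation-passing style: `ret` plays the role of the return
-- address that PCC systems type as a general first-class function.
addCPS :: Int -> Int -> (Int -> r) -> r
addCPS x y ret = ret (x + y)

-- callCC gives a disciplined analogue of setjmp/longjmp: `exit`
-- abandons the rest of the loop, like cutting the stack back.
findFirst :: (Int -> Bool) -> [Int] -> Cont r (Maybe Int)
findFirst p xs = callCC $ \exit -> do
  mapM_ (\x -> if p x then exit (Just x) else return ()) xs
  return Nothing

main :: IO ()
main = do
  print (addCPS 2 3 id)                          -- 5
  print (runCont (findFirst even [1,3,4,5]) id)  -- Just 4
```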