
The Experimental Effectiveness of Mathematical Proof

The aim of this paper is twofold. First, it is an attempt to answer Eugene Wigner's famous essay on the unreasonable effectiveness of mathematics in the natural sciences [25]. We will argue that mathematics is not only reasonably effective, but also objectively effective in a sense that can be given a precise meaning. For that—and this is the second aim of this paper—we shall reconsider some aspects of Popper's epistemology [23] in the light of recent advances in proof theory [8, 20], in order to clarify the interaction between pure mathematical reasoning (in the sense of a formal system) and the use of empirical hypotheses (in the sense of the natural sciences).

The technical contribution of this paper is the proof-theoretic analysis of the problem (already raised in [23]) of the experimental modus tollens, which deals with the combination of a formal proof of the implication U ⇒ V with an experimental falsification of V to obtain an experimental falsification of U, in the case where the formulæ U and V express empirical theories in a sense close to Popper's. We propose a practical solution to this problem based on Krivine's theory of classical realizability [20], and describe a simple procedure to extract, from a formal proof of U ⇒ V (formalized in classical second-order arithmetic) and a falsifying instance of V, a computer program that performs a finite sequence of tests on the empirical theory U until it finds (in finite time) a falsifying instance of U.
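To make the shape of the extracted program concrete, here is a deliberately naive sketch in OCaml. The types and names are mine, and it models only the observable behaviour the abstract describes: an empirical theory as a countable family of pass/fail experiments, and the extracted program as a search that halts on a falsifying one. The interesting part of Miquel's construction, in which the classical realizer of U ⇒ V drives which tests are run, is elided entirely; blind enumeration stands in for it.

    (* A toy model of the empirical setting, not Krivine's machine: a
       theory in Popper's sense is a countable family of experiments,
       each of which passes or fails. *)
    type theory = { experiment : int -> bool }

    (* Given a sound proof of U => V and a falsifying instance of V,
       the theory U must have a failing experiment, so this search
       terminates.  The real extraction lets the proof term choose the
       tests; plain enumeration loses that guidance but keeps the
       "falsification in finite time" behaviour visible. *)
    let falsify_u (u : theory) : int =
      let rec search i = if u.experiment i then search (i + 1) else i in
      search 0

    (* Example: U claims every natural number is below 1000; the
       search halts at the first counterexample. *)
    let () =
      let u = { experiment = (fun n -> n < 1000) } in
      Printf.printf "U falsified by experiment %d\n" (falsify_u u)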

I thought I had already posted this, but apparently not.

Consider this paper the main gauntlet thrown down to those who insist that mathematical logic, the Curry-Howard Isomorphism, etc. might be fine for "algorithmic code" (as if there were any other kind) but are somehow inapplicable the moment a system interacts with the "real" or "outside" world (as if software weren't real).

Update: the author is Alexandre Miquel, and the citation is "Chapter of the book Anachronismes logiques, to appear in the collection Logique, Langage, Sciences, Philosophie, from the Publications de la Sorbonne. Eds.: Myriam Quatrini and Samuel Tronçon, 2010."

Foundations of Inference

Foundations of Inference, by Kevin H. Knuth and John Skilling, arXiv:1008.4831v1 [math.PR]:

We present a foundation for inference that unites and significantly extends the approaches of Kolmogorov and Cox. Our approach is based on quantifying finite lattices of logical statements in a way that satisfies general lattice symmetries. With other applications in mind, our derivations assume minimal symmetries, relying on neither complementarity nor continuity nor differentiability. Each relevant symmetry corresponds to an axiom of quantification, and these axioms are used to derive a unique set of rules governing quantification of the lattice. These rules form the familiar probability calculus. We also derive a unique quantification of divergence and information. Taken together, these results form a simple and clear foundation for the quantification of inference.
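To spell out the punchline: the "familiar probability calculus" the abstract refers to consists of the standard sum and product rules (with Bayes' theorem as a corollary), and, if I read the paper correctly, the unique divergence is the Kullback-Leibler form. In conventional notation (mine, not the paper's):

    \begin{align*}
      p(x \lor y \mid t) &= p(x \mid t) + p(y \mid t) - p(x \land y \mid t) && \text{(sum rule)} \\
      p(x \land y \mid t) &= p(x \mid t)\, p(y \mid x \land t) && \text{(product rule)} \\
      p(x \mid y \land t) &= \frac{p(y \mid x \land t)\, p(x \mid t)}{p(y \mid t)} && \text{(Bayes' theorem)} \\
      D(p \,\|\, q) &= \sum_i p_i \log \frac{p_i}{q_i} && \text{(divergence)}
    \end{align*}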

For those of us who find ourselves compelled by the view of probability as a generalization of logic that is isomorphic to (algorithmic, as if there were any other kind) information theory, here is some recent i-dotting and t-crossing. The connection to Curry-Howard or, if you prefer, Krivine's classical realizability is something I hope to explore in the near future.

Extensible Programming with First-Class Cases

Extensible Programming with First-Class Cases, by Matthias Blume, Umut A. Acar, and Wonseok Chae:

We present language mechanisms for polymorphic, extensible records and their exact dual, polymorphic sums with extensible first-class cases. These features make it possible to easily extend existing code with new cases. In fact, such extensions do not require any changes to code that adheres to a particular programming style. Using that style, individual extensions can be written independently and later be composed to form larger components. These language mechanisms provide a solution to the expression problem.

We study the proposed mechanisms in the context of an implicitly typed, purely functional language PolyR. We give a type system for the language and provide rules for a 2-phase transformation: first into an explicitly typed λ-calculus with record polymorphism, and finally to efficient index-passing code. The first phase eliminates sums and cases by taking advantage of the duality with records.

We implement a version of PolyR extended with imperative features and pattern matching—we call this language MLPolyR. Programs in MLPolyR require no type annotations—the implementation employs a reconstruction algorithm to infer all types. The compiler generates machine code (currently for PowerPC) and optimizes the representation of sums by eliminating closures generated by the dual construction.
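To see why the duality does the heavy lifting, here is a small sketch of the first phase's encoding, transposed to OCaml rather than PolyR (the names are mine, and OCaml's row-polymorphic objects stand in for the extensible records the paper assumes): a sum value becomes a function expecting a record of handlers, a first-class case becomes such a record, and extending a case is record extension.

    (* Constructors of a tiny expression language, encoded by their
       duals: a value is a function awaiting a record of handlers. *)
    let num n h = h#num n
    let add x y h = h#add x y

    (* A first-class case: one handler per known constructor. *)
    class eval = object (self)
      method num (n : int) = n
      method add x y = x self + y self
    end

    (* The extension the expression problem asks for: a new
       constructor and an extended case, with no edits above. *)
    let mul x y h = h#mul x y

    class eval_with_mul = object (self)
      inherit eval
      method mul x y = x self * y self
    end

    (* 1 + 2 * 3 evaluates to 7 under the extended case. *)
    let () =
      let v = add (num 1) (mul (num 2) (num 3)) (new eval_with_mul) in
      Printf.printf "%d\n" v

Where MLPolyR extends first-class cases directly, the OCaml version has to reach for inheritance; that gap is exactly the extensible-records support credited below for the solution's elegance.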

This is an elegant solution to the expression problem for languages with pattern matching. This paper was posted twice in LtU comments, but it definitely deserves its own story. Previous solutions to the expression problem, like Garrigue's use of recursion and polymorphic variants, are rather more involved because they lack the support for extensible records that makes this solution so elegant.

Extensible records and first-class cases unify the object-oriented and functional paradigms on a deeper level, since they enable first-class messages to be directly encoded. Add a sensible system for dynamics, and I would argue you have most of the power people claim for dynamic languages without sacrificing the safety of static typing.
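As a hypothetical illustration of that last claim, in the same OCaml encoding as above: a message is just a first-class sum value, an object is a case (a record of handlers), and sending is plain application.

    (* Messages are first-class sum values. *)
    let greet name h = h#greet name
    let farewell h = h#farewell

    (* An object is a case: a record of handlers for the messages it
       understands. *)
    let console = object
      method greet name = print_endline ("hello, " ^ name)
      method farewell = print_endline "bye"
    end

    (* Sending a message is application, so messages can be stored,
       passed around, and dispatched later. *)
    let send msg obj = msg obj

    let () = send (greet "world") console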