Foundations of Inference

Foundations of Inference, Kevin H. Knuth, John Skilling, arXiv:1008.4831v1 [math.PR]

We present a foundation for inference that unites and significantly extends the approaches of Kolmogorov and Cox. Our approach is based on quantifying finite lattices of logical statements in a way that satisfies general lattice symmetries. With other applications in mind, our derivations assume minimal symmetries, relying on neither complementarity nor continuity or differentiability. Each relevant symmetry corresponds to an axiom of quantification, and these axioms are used to derive a unique set of rules governing quantification of the lattice. These rules form the familiar probability calculus. We also derive a unique quantification of divergence and information. Taken together these results form a simple and clear foundation for the quantification of inference.
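The abstract's central move, quantifying a lattice of statements so that the valuation respects the lattice structure, can be illustrated with a tiny sketch. This is not code from the paper; it just checks that on the Boolean lattice of subsets of a finite outcome set, any valuation induced by non-negative atom weights obeys the familiar sum rule p(x ∨ y) = p(x) + p(y) − p(x ∧ y), which is the rule the paper derives from symmetry axioms rather than assumes.

```python
# Minimal illustration (my own, not from Knuth & Skilling): probability as a
# valuation on the Boolean lattice of subsets of {a, b, c}.  Join is union,
# meet is intersection, and the sum rule holds for every pair of elements.
from fractions import Fraction
from itertools import chain, combinations

atoms = {"a": Fraction(1, 2), "b": Fraction(1, 3), "c": Fraction(1, 6)}

def p(statement):
    """Valuation of a lattice element, given as a set of atomic outcomes."""
    return sum((atoms[s] for s in statement), Fraction(0))

def powerset(s):
    s = list(s)
    return chain.from_iterable(combinations(s, r) for r in range(len(s) + 1))

# Check the sum rule p(x | y) == p(x) + p(y) - p(x & y) exhaustively.
for x in map(frozenset, powerset(atoms)):
    for y in map(frozenset, powerset(atoms)):
        assert p(x | y) == p(x) + p(y) - p(x & y)
```

Exact rational arithmetic is used so the lattice identity is checked precisely rather than up to floating-point error.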

For those of us who find ourselves compelled by the view of probability as a generalization of logic that is isomorphic to (algorithmic, as if there were any other kind) information theory, here is some recent i-dotting and t-crossing. The connection to Curry-Howard or, if you prefer, Krivine's classical realizability is something I hope to explore in the near future.




What are generalizations of existential and universal quantifiers in their system?

Not sure...

...I know that universal and existential quantification have an interpretation under which this work could be said to generalize them, but then, they have an interpretation under which (Laplacian, Jeffreysian, Coxian, Jaynesian) probability already generalizes them. I can't quite articulate it just now other than to wave my hands a lot and say that they "reduce" in some formal sense to implication, and this work, like Laplace/Jeffreys/Cox/Jaynes before it, provides a "calculus of degrees of implication," if you will.

Algorithmic = discrete?

Wouldn't "algorithmic" here refer to the (finite) discrete approach? Those of us who once worked with analogue computers will be aware of the gap, analogous to the gap between finite, discrete probability theory and the continuous variety with its normal distribution and the like.

Of course modern discrete spectacles make many people deny the infinite amount of information in many measurements (which is not diminished by error estimations). One would need a form of measure theory to make useful statements about such information.


Knuth and Skilling follow Jaynes in rejecting any aspect of measure theory that can't be derived from taking a finite set to a limit. There are simply too many cases in which taking a completed infinity as a reality in itself leads to unsoundness to operate any other way. I would call "the infinite amount of information in many measurements" exactly such a nonsensical concept: there isn't an infinite amount of information in the universe.


Ah, but there is! Chaitin's Omega already contains an infinite amount of information. But of course, through finitely-discrete spectacles the world is finite and discrete. Arguing for the reality of the infinite world despite one's possibly finite knowledge of it is like arguing for the reality of the third dimension, despite all our photographs of the world being two-dimensional.
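For readers who haven't met it, Chaitin's Omega is the halting probability of a universal prefix machine, approximable from below but uncomputable. The following toy sketch (my own construction, not Chaitin's machine) shows the flavor of such lower-bound approximations; here "halting" is decidable by design, whereas for a genuine Omega no finite computation pins down more than finitely many of its bits.

```python
# Toy stand-in for an Omega-style halting probability.  Each length-L
# bitstring is a "program" that loops int(p, 2) times and then halts, so
# "halts within t steps" is trivially decidable here (unlike the real case).
from fractions import Fraction

L = 4  # the 2**L fixed-length programs form a prefix-free set

def halts_within(program, steps):
    return int(program, 2) < steps  # toy semantics: loop count, then halt

def omega_lower_bound(steps):
    programs = [format(i, f"0{L}b") for i in range(2 ** L)]
    return sum(Fraction(1, 2 ** L) for p in programs if halts_within(p, steps))

# Lower bounds are non-decreasing in the step budget; in this toy they
# eventually reach 1 because every toy program halts.  A genuine Omega's
# approximation never "completes" in any verifiable way.
bounds = [omega_lower_bound(t) for t in range(20)]
assert all(a <= b for a, b in zip(bounds, bounds[1:]))
```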

We Want Information! Information! INFORMATION!

Biep: Ah, but there is! Chaitin's Omega already contains an infinite amount of information.

Magically defining the canonical "infinitely" random number as containing an "infinite" amount of information is precisely the sort of semantic legerdemain that Constructivism rightly rejects.

Biep: But of course, through finitely-discrete spectacles the world is finite and discrete.

The hitch is that the "finitely-discrete spectacles" happen to be physics. The fact that you can fantasize about "completed infinities" doesn't make them any less fantasy. Show me an infinity in a laboratory experiment. Then I'll believe that "infinity" isn't just a convenient shorthand for "the result of some limit process."

Biep: Arguing for the reality of the infinite world despite one's possibly finite knowledge of it is like arguing for the reality of the third dimension, despite all our photographs of the world being two-dimensional.

Except, crucially, one can conduct experiments to support the claim that our physical universe has three dimensions. The same cannot be said for any claim that there are such things as completed infinities.

The fact that you can

The fact that you can fantasize about "completed infinities" doesn't make them any less fantasy.

Not so fast. "The modern spectacles" Biep refers to are not those of physics but those of set theory. We are used to approximating continua by point sets, and this is quite problematic for various reasons:

* one can introduce non-standard numbers, e.g. as equivalence classes of sequences of real numbers in the Robinson construction, and repeat the limit-building processes we know from constructing the reals from sequences of rationals. Representing 1D continua as sets of real numbers only serves as an approximation.

* although there may not be enough points, there are also too many sets, as demonstrated by the Banach-Tarski paradox, which led to the introduction of measurable sets.

So we are dealing with a couple of pragmatic compromises.
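The parallel between the two limit-building constructions mentioned above can be made precise. As a sketch (standard textbook material, not from this thread): the reals arise from Cauchy sequences of rationals modulo null sequences, while Robinson's hyperreals arise from arbitrary sequences of reals modulo a non-principal ultrafilter.

```latex
\mathbb{R} \;\cong\; \bigl\{ (q_n) \in \mathbb{Q}^{\mathbb{N}} : (q_n)\ \text{Cauchy} \bigr\} \big/ \sim,
\qquad (q_n) \sim (r_n) \iff \lim_{n\to\infty} (q_n - r_n) = 0
```

```latex
{}^{*}\mathbb{R} \;\cong\; \mathbb{R}^{\mathbb{N}} \big/ \sim_{\mathcal{U}},
\qquad (x_n) \sim_{\mathcal{U}} (y_n) \iff \{\, n : x_n = y_n \,\} \in \mathcal{U}
```

Here \(\mathcal{U}\) is a fixed non-principal ultrafilter on \(\mathbb{N}\); its existence requires (a fragment of) the axiom of choice, which is exactly the kind of pragmatic compromise at issue.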

A fun analogy: the compromises are basically accepted by "working mathematicians" (programmers) but not so by those who work on foundations and believe that they are all basically misguided and their work may ultimately vanish in the dustbin of history (language designers).

Where's the humble programmer?

I'm getting a bit lost. What "compromise" are you talking about that the working mathematician/programmer accepts? I'm only seeing a philosophical discussion between foundationalists here.

Foundations of mathematics

Foundations of mathematics are actually about its language: what is your vocabulary, what are you allowed to talk about, what may be concluded? This is also the reason why the debates are led mainly by mathematicians.

The compromise is one by which we accept to place a few snakes and frogs, and a few lamb-eating lions, into Hilbert's paradise. We speak the language of God, which is surprisingly straightforward, but a few of its words are too powerful and are suspected to cause madness. Therefore it needs to be tamed and tweaked in a few places.

Point Set Topology

dead for a century this year

Poincaré has been dead a century this year. So, well... he was wrong.

I would love to know the context for that quote!

100 years isn't "soon" in the history of mathematics

However, there is some recent progress.

For those who are wondering how all of this relates, I recommend reading the originally-linked paper, then reading about "speculations on lattice theories" in Probability Theory: The Logic of Science. See also HANSEI.

Poincaré quote

He apparently did not say it. I'm assuming that this is a variant translation of Hölder's comment which is usually rendered as “Poincaré at the Rome Congress (1908) went so far as to say ‘Later generations will regard the Mengenlehre as a disease from which one has recovered.’”

See Jeremy Gray, “Did Poincaré say ‘Set theory is a disease’?”, The Mathematical Intelligencer, Volume 13, Number 1, 19-22, DOI: 10.1007/BF03024067.

English translation of Poincaré's address here (pdf).

When you take a look at some

When you take a look at some of Poincaré's articles (in English translation) you'll find that he used set-theoretical terminology as well. So one needs to ask what he actually meant by "Cantorisms".

Keep in mind that the axiomatization of set theory was still in progress in his own lifetime and came to a halt only around 1930, when ZFC took the form it has today, including the foundation axiom, which tames it with no impact on anything Poincaré did. When one looks a little closer at the quoted talk alone, one doesn't find any indication that he had reservations against an "infinity of infinitesimal transformations" or "transcendental functions". So he was certainly not the sort of ideologue who could be claimed for certain constructivist or intuitionist programs that ended up rejecting even the mean value theorem of elementary analysis.


Quite right. I personally am a fan of Aczel's Constructive Zermelo-Fraenkel set theory.

Again, my only point in all of this is that a great way to prevent logical inconsistencies is to insist that your logic map to definite physical processes. When your logic violates the laws of physics, as, e.g., the Banach-Tarski "paradox" rather obviously does, you throw away the logic, not the physics.


But even then, a physical measurement reduces to a fraction on a scale, say a length measurement. There, too, one needs spectacles to state that this fraction is in Q, unless one has proof that space is discrete. Measuring the measurement itself is not going to help, obviously.

And any drawing of conclusions is logic, not physics. Without a reliable logical basis one cannot even begin to do physics, so using the latter to judge the former is a dangerous path, which requires reliable logical - not physical - guidelines.

(And coming back to Chaitin - Omega is a probability, which is a physical concept, depending on empirical laws of large numbers. What physical impossibility would be involved there?)

Sampling uniformly from the

Sampling uniformly from the infinite space of programs is the physical impossibility, isn't it?

Edit: Perhaps more glaringly, so is detecting halting.

I'm not up to date on the

I'm not up to date on the state of the art -- up to what number have theoretical physicists verified the commutativity of addition in the lab?
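In the tongue-in-cheek spirit of the question, here is what such a "laboratory verification" amounts to: an exhaustive check of finitely many cases, which of course establishes nothing about the infinitely many remaining ones. (The budget constant below is made up for illustration.)

```python
# Tongue-in-cheek "empirical verification" of a + b == b + a.  No matter
# how large the budget, this only ever covers finitely many cases, which
# is precisely the finitist's point about completed infinities.
BUDGET = 200  # arbitrary experimental budget: check all pairs 0 <= a, b < BUDGET

def commutativity_verified_up_to(n):
    return all(a + b == b + a for a in range(n) for b in range(n))

assert commutativity_verified_up_to(BUDGET)  # finite evidence, not a proof
```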

Physical proofs and refutations

Perhaps the various applications of quaternions in physics have provided an empirical refutation of the commutativity of arithmetic?
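Quaternion multiplication really is non-commutative, and unlike a completed infinity it fits in a few lines of finite computation. A minimal sketch (my own, using Hamilton's rules i² = j² = k² = ijk = −1, with quaternions as (w, x, y, z) tuples):

```python
# Hamilton product of quaternions represented as (w, x, y, z) tuples,
# i.e. q = w + x*i + y*j + z*k.
def qmul(a, b):
    aw, ax, ay, az = a
    bw, bx, by, bz = b
    return (aw*bw - ax*bx - ay*by - az*bz,
            aw*bx + ax*bw + ay*bz - az*by,
            aw*by - ax*bz + ay*bw + az*bx,
            aw*bz + ax*by - ay*bx + az*bw)

i = (0, 1, 0, 0)
j = (0, 0, 1, 0)
assert qmul(i, j) == (0, 0, 0, 1)    # i * j =  k
assert qmul(j, i) == (0, 0, 0, -1)   # j * i = -k, so multiplication does not commute
```

Of course this refutes commutativity of quaternion *multiplication*, not of arithmetic addition, which is rather the poster's point.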


There is certainly a limit to commutativity, which relies on the idealist (and counter-revolutionary) assumption that terms can actually be commuted. This either requires an ever-growing amount of time, which no one has (we are all very busy!), or we accelerate the commutation and build a commuter which finally becomes so big and massive that it forms a black hole, which leads to a breakdown of the commutation as well.