Gödel, Nagel, minds and machines

Solomon Feferman. Gödel, Nagel, minds and machines. Ernest Nagel Lecture, Columbia University, Sept. 27, 2007.

Just fifty years ago, Ernest Nagel and Kurt Goedel became involved in a serious imbroglio about the possible inclusion of Goedel’s original work on incompleteness in the book, Goedel’s Proof, then being written by Nagel with James R. Newman. What led to the conflict were some unprecedented demands that Goedel made over the use of his material and his involvement in the contents of the book - demands that resulted in an explosive reaction on Nagel’s part. In the end the proposal came to naught. But the story is of interest because of what was basically at issue, namely their provocative related but contrasting views on the possible significance of Goedel’s theorems for minds vs. machines in the development of mathematics.

This is not directly PLT related, and more philosophical than what we usually discuss on LtU, but I think it will be of interest to some members of the community.

While the historical details are interesting, I am not sure I agree with the analysis. It would be interesting to hear what others make of this.

To make this item slightly more relevant to LtU, let me point out that both the LC and category theory are mentioned (although they are really discussed only in the references).


Coinduction

I wonder what would happen to Godel's theorem if the Co-logic proof method were used instead of analyses. As pointed out in the slides, coinduction deals with cycles and self-reference. These are the problems that lead to incompleteness.

Coinduction isn't magical.

Coinduction isn't magical. There are non-productive programs.

OK. So what is a

OK. So what is a non-productive program?

One that keeps reading inputs...

...never producing an output. E.g., a function that adds up all of the elements of an infinite stream.
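A minimal Haskell sketch of the distinction (my own example, not from the slides): the first definition is productive -- any finite prefix of its output can be demanded -- while the second consumes an infinite stream forever and never delivers its result.

-- Productive: you can always demand the next element.
ones :: [Integer]
ones = 1 : ones

-- Non-productive on an infinite input: the total is never delivered.
sumAll :: [Integer] -> Integer
sumAll []       = 0
sumAll (x : xs) = x + sumAll xs

-- take 5 ones == [1,1,1,1,1], but sumAll ones never returns.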

No magic mirror

Not only that, but the halting problem has its dual.

where is my mind

People were just being confused by their words. The paper quotes Nagel, for example (and no, the context doesn't matter), using this noun phrase:


The discovery that there are formally indemonstrable arithmetic truths [....]

No such discovery has ever occurred. None could:

The intuitionists put it more clearly: mathematical truth is that which can be formally demonstrated by construction. By definition, the set of formally indemonstrable arithmetic truths is empty.

There was no a priori mathematical reason to believe that consistency and completeness could be internally proved, and Goedel succeeded in showing that it is actually an incoherent hope that both proofs could be given in a suitably complicated system -- like asking for a circular square or a bright total-darkness. The mathematical truth or falseness of Goedel's constructions isn't a real thing we can't compute; rather, neither the truth nor the falseness of Goedel's constructed sentences actually exists.

It reminds me of primes: A much earlier Hilbert might have learned of the first 1000 primes and wondered "Well, perhaps there are a few more left to find, but then perhaps we can make any integer! So, my challenge is to find any remaining primes and then prove they've all been accounted for." And then along comes Eratosthenes with a mechanical technique: give him any finite list of primes you've got and he'll discover a new one you missed. And he can keep that up, all day. Does that mean we've hit an existential crisis where we realize that the finite human mind can't possibly contain this untamable, unending progression of primes? It seems a silly question until you remember examples like people treating Pythagoras like some kind of Oracle of the Temple.
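For what it's worth, here is that mechanical trick as a few lines of Haskell (my own sketch, in the style of Euclid's proof): given any finite list of primes, multiply them, add one, and take the smallest factor greater than one; that factor is a prime missing from the list.

-- Given a finite list of primes, produce a prime not in the list.
newPrime :: [Integer] -> Integer
newPrime ps = smallestFactor (product ps + 1)
  where
    smallestFactor n = head [d | d <- [2 .. n], n `mod` d == 0]

-- newPrime [2,3,5,7] == 211; newPrime [2,3,5,7,11,13] == 59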

And, this is no different. In the 19th century, "order" and "systemization" were really in a growth phase across the culture, including in maths. Publishing was reaching industrial scales. Popular academic pursuits, across many fields, were producing taxonomies, and reference tables, and new foundational calculi for the various fields. Math turned on itself in the same way: almost a competition to see who could publish the ultimate reference encyclopedia for the ages -- the unity of math was coming to be understood, and understood as a byproduct of the formal ways in which good math was conducted. The illusion of a self-contained text that would sum up all math, as well as prove it had done so soundly, expressed that late-Enlightenment exploration of order.

Turned out to be an illusion. The concept of such an encyclopedic summary of all math is inherently incoherent. So what? That doesn't prove that there's math we can't know. That only proves we had a mistaken idea of what math is.

But, where is my mind? Is it strictly in my neurons? Or does it extend to the post-its on my monitor? Books on my shelves? Does it include the agenda posted down the street, that I haven't yet seen, for the next City Council meeting? Does any of my mind reside in my partner's brain? My mind certainly *feels* different in each of these contexts. For that reason, I think you can't ask the question "What does Goedel's theorem tell us about mind?" without almost immediately having to ask "What does Goedel's theorem tell us about the universe as a whole?" And then we get into cosmological questions... well beyond the scope of maths.

-t

Hail, fellow traveller

The intuitionists put it more clearly: mathematical truth is that which can be formally demonstrated by construction. By definition, the set of formally indemonstrable arithmetic truths is empty.

While I'm in strong philosophical agreement with this position, I've learned the hard way recently that pure mathematicians don't as a group seem to have much sympathy for this position, and I can't say I blame them really, since it has a LOT of unpleasant consequences for huge swathes of pure math.

So instead of being a pathological contrarian, my new personal practice in this regard is to make a strict distinction in my own mind between "mathematics" or "pure mathematics" on the one hand and "computationalism" or "computational mathematics" on the other. Though they share a common heritage, use similar methods and notations, and may inspire one another, the upper bound on possible mathematical objects is MUCH higher for pure mathematics than the computational variety.

As a consequence I would say that Gödel, as a pure mathematician, is perfectly welcome to imagine truths that are "really true" in the absence of any finite proof with a finite construction, and I'll gladly humour him if I want to understand pure math, but I have to simply know that, in order to see what (if any) meaning this has for computationalism, the onus is on me to translate it into the much more strictly circumscribed world of the computational.

(A similar, but reversed, thing holds true in the mapping from math to the even broader world of philosophy)

The practical upshot is that while a Roger Penrose (or a Gödel) is free to believe that the mind partakes of the infinite, the computationalist is strictly required to not believe such things (except perhaps on his Sabbath day, should he observe one).

The original paper also makes a good point: there is no evidence whatsoever that the human mind is consistent. In fact, there is a much better case for the contrary. ;-)

hehe... the least fixed concept enters, stage right

My sense is that the intuitionist v. classical debate is more or less over and the intuitionists won -- but a lot of people still have the conceit that they can safely do math using non-intuitionist methods without producing anything that couldn't be nicely reduced to a construction. In other words, I guess some practitioners consider the difference to not matter much in practice, even if they concede that, at root, it's intuitionistic. (In a lot of subspecialties, that's a reasonable conceit.) At the end of the day, it's all constructivist at heart, though.

A "pure mathematics" person certainly might assert the truth of some sentence by applying the rule of the excluded middle but, it's helpful to think of that as a pseudo-axiom. It's really an axiom schema and every use of it, in effect, adds a new axiom to the system. So there should be a footnote to every proof that uses it, and to every proof that uses those proofs. The footnote should say "The new axiom introduced here might make the system inconsistent." (Of course, in many subspecialties we can find rules for when it is safe to erase those footnotes and replace them with "a new axiom is safely introduced here".)

A "computationalist" (as you put) is actually not obligated to believe in finite mind -- whatever that might mean, exactly. Instead, a computationalist is just regarding the trade in mathematical proofs as a kind of programming: every set of axioms and sentences is a program to generate sentences. The set of truths (provable sentences) conveyed by the system is the least fixed point of that program. The operation of constructive math rules to expand the set of sentences in the system always gives you an equivalent program. The use of the law of the excluded middle changes the program, preserving only the invariant that the new meaning (the new set of output sentences) is a (possibly improper) super-set of the original meaning (possibly the uninteresting null meaning that includes all sentences over your alphabet or WFF grammar).

That's all the fuss is about. People didn't see that property of the law of the excluded middle coming, probably because we seem to have simple boolean logic practically "hardcoded" as a basic faculty. That faculty, if it exists, wouldn't be a bad choice for evolution to make, because *most* of the rules reliably lead to consistent reasoning and only a few cause problems: if we instinctively apply the rules without discrimination, and they all come up about equally often, we come out ahead. But then we're surprised when the intellectual realization first dawns that those intuitions are slightly wrong (another kind of "blind spot").

-t

Least fixed or least finite?

At the end of the day, it's all constructivist at heart, though.

I think the practical importance of computation is starting to have an impact, but I don't think I can agree with this. Perhaps we have different notions of what is at the core of constructivism. For example, there is no computational construction of an uncountable set, so any math that relies on the "reality" of the continuum is not admissible in such a framework. (I believe that there can be constructive replacements for most of these, but that's another story.)

I've been reading Barendregt's LC book, and even he uses non-computational methods to prove that there are uncountably many consistent extensions of theories of the LC.

So I find it hard to believe that there is such a deep willingness to restrict pure math to constructivist limitations.

The use of the law of the excluded middle changes the program

Actually, I think way too much attention is paid to the law of the excluded middle as the crux of constructivism.

I think the core issue of computational constructivism is how one thinks about infinity. The inadmissibility of the law of the excluded middle is actually a by-product of that.

In a computational context, infinity manifests as either mere non-termination (not meaningful) or arbitrary finite approximation (meaningful). Unfortunately, one can't tell these two apart in principle without making some kind of additional interpretation, so by definition infinity introduces indeterminate truth values into a computational system.

If I restrict myself to finite acyclic structures, I can always rig my system (by making it "big enough") so that the law of the excluded middle does hold.

This actually links to the discussion in the other thread about determinism/non-determinism: so long as a transaction is finite, I can always retroactively make it deterministic by making sure it is "big enough", i.e. has enough information.

Furthermore, it is very easy to construct mathematical systems that drop the law of the excluded middle, but which still over-generate mathematical objects from a computational point of view.

constructive infinities

Perhaps we have different notions of what is at the core of constructivism. For example, there is no computational construction of an uncountable set

Probably we have different notions. It depends what you mean by "set".

Actually, I think way too much attention is paid to the law of the excluded middle as the crux of constructivism.

I think the core issue of computational constructivism is how one thinks about infinity. The inadmissibility of the law of the excluded middle is actually a by-product of that.

Infinities are kinds of rules of induction. Consider something like Dedekind cuts. You can constructively prove that there are all these various limits and that they act like "real numbers" (as long as you are strict about what you mean by "act like") but you have to make an unjustified leap to say that these things you made -- these limits -- are the same kinds of things as the rationals we started with and that the rules for rationals consistently apply to the new numbers -- that leap amounts to using excluded middle, via the axiom of choice, to add a new axiom to your system. And you can easily get in trouble.

But you can easily avoid trouble. The famous "On Numbers and Games" is an example of a constructive system containing not only infinities but transfinite constructions! At root, the whole time, it's fairly explicit, we're only talking about finite inductions over a carefully extensible axiom system.

Our "infinities", when properly formed, are just the proper names of some well-known infinite-loop programs.

-t

So close...

It depends what you mean by "set".

Not really. The key word is uncountable: any uncountable thing is by definition uncomputable, i.e. it isn't even finitely approximable.

See, even you, who claims to be a constructivist, can't relinquish non-computational notions of infinity. ;-)

Forced to disagree

Consider this proof, in the constructive Calculus of (Co)Inductive Constructions, that the set of infinite boolean streams is uncountable. This proof was developed and tested in Coq 8.1pl1.

Now, you might convincingly argue that a type as defined in type theory is not the same as a "set", but you've gone a step further to assert that any uncountable "thing" is inherently non-constructive. I don't think that position can be defended.

What uncountable thing is constructed?

Consider this proof, in the constructive Calculus of (Co)Inductive Constructions, that the set of infinite boolean streams is uncountable.

The sensible constructivist interpretation of this proof is that you have proven that you cannot construct the collection of all infinite boolean streams, or even some collection of things that could arbitrarily finitely approximate it.

(BTW, this is essentially the same as a constructivist interpretation of Cantor's diagonalization)

This makes sense: we know that there can only be countably many computable (recursively enumerable) boolean streams.

For a computationalist, "uncountable" is just a special way to say "impossible".

I defined the type of

I defined the type of infinite boolean streams and demonstrated that it cannot be placed in a bijection with the natural numbers. It seems clear to me that the types of constructive type theory are mathematical objects in their own right, ie, "things". If the type describing infinite boolean streams is not in some sense "the collection of all infinite boolean streams", then I'm not sure what would be.

As another poster stated, this discussion hinges on what it means to "be a collection" or to "be a set". I personally think that the types of type theory qualify, but I gather that you think differently (in which case, the discussion is really just a philosophical difference of opinion about the semantics of the phrases being used).

If you require that some grouping of things be recursively enumerable before it can be called a "collection," then obviously the infinite boolean streams don't qualify, and we have to commandeer some other synonym of the word "collection" to describe them. I think you'd be standing quite nearly by yourself if you adopted that position.

A definition is not a construction (or a computation)

If the type describing infinite boolean streams is not in some sense "the collection of all infinite boolean streams", then I'm not sure what would be.

You described the type of an infinite stream, and you have yet to populate it with even one constructed instance, let alone all of them.

I can define in great detail Santa Claus, a purple unicorn, and an Oracle for the halting problem, but this doesn't mean I can produce one.

If you could show me even one single completed infinite stream, I'd be impressed enough to grant your point.

I think you'd be standing quite nearly by yourself if you adopted that position.

As a claim about computation? On the contrary, I think mine is the orthodox position. As a claim about mathematics, you're right, but my whole point is that they are very different things.

constructivist orthodoxy

Marc, I think I figured you out!

Your take on the orthodox view is that it is strictly about terminating programs, perhaps composed together to form other terminating programs.

But wait! Provably, even, we can make use of programs that are not terminating but that do keep producing more and more output (like me, in this thread -- sorry, it's a favorite topic). And, provably, we can construct *effective* procedures out of certain combinations of *non-effective* or *uncertainly-effective* procedures. Your orthodox view seems to have missed that, so to speak.

"Infinity" is an adjective to describe some non-effective constructs that can yet be used to make effective constructs. Then you can split hairs between various kinds or layers of infinities.

-t

Not a finitist

Your take on the orthodox view is that it is strictly about terminating programs

Definitely not. I described exactly what I thought infinity meant in a computational setting: either non-termination or arbitrary finite approximation (i.e. I can always get one step closer to what I'm defining). As I pointed out, you actually can't tell these two apart in general without adding additional interpretation.

Both are countable: they happen one finite step at a time. To build something uncountable, you'd have to get to the end of an infinity and then start another.

An example boolean stream

Using the definitions of the formal proof above, the following is the unending stream of "false".

CoFixpoint falseStream : boolStream := BCons false falseStream.

I'm not sure what you mean by a completed stream, but this certainly inhabits the type.

As to your point about computation, I might be inclined to agree with you, but only if we restrict the meaning of "computation" to mean "deterministic computation." If we admit non-determinism in our model of computation, then a single co-inductive program can describe all the uncountably-many infinite boolean streams (the program that forever non-deterministically chooses a boolean value with non-zero probability for each outcome). I'm going to explicitly avoid the question of whether non-determinism is constructive. :-)

Damn, I left my infinite loop running; I guess I'll just wait...

CoFixpoint falseStream : boolStream := BCons false falseStream.

I'm not sure what you mean by a completed stream, but this certainly inhabits the type.

That is a specification for a program that if executed would inhabit the type. To complete the stream, run the program. When it's terminated successfully, having produced the whole sequence, I will grant that you have produced an inhabitant of the type. ;-)

If you like, a different approach is to say that the type you have defined is the type of specifications for computable infinite streams. If you take that interpretation, then by the usual diagonalization, there are only countably many of those. (An alternative proof is simply to note that all the specifications are of finite length in a finite alphabet, and ipso facto can be only countably many.)

Do you see where I'm coming from yet?

Do you see where I'm coming

When it's terminated successfully, having produced the whole sequence, I will grant that you have produced an inhabitant of the type. ;-)

Well, the formal logic system accepts that statement and admits the object thus described as being of type "boolStream." The evaluation procedure, when applied to this object, also halts immediately. It seems that you have some strictly more restrictive definition of "type inhabitation" in mind than Coq does.

Do you see where I'm coming from yet?

I think so; you seem to reject co-recursion and co-data as properly "computational." If that's your stance then we may have to agree to disagree. I don't see the above as a specification for a program. It is a (co-)program which produces, upon demand, a series of boolean values. In this particular program, all the values will be "false."
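For comparison, here is the same object written as a lazy Haskell list (my transcription, not part of the Coq development): nothing is "completed" up front; each element is produced only when it is demanded.

falseStream :: [Bool]
falseStream = False : falseStream

-- Only the demanded prefix is ever produced:
-- take 5 falseStream == [False,False,False,False,False]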

Coinduction junction

you seem to reject co-recursion and co-data as properly "computational."

Not at all. The whole trick with co-induction is that it doesn't construct anything: it allows you to step by step inspect a structure that you don't have a global view on. (In particular, you don't know if it will terminate or not, or what rule has been used to construct it.)

As another poster stated,

As another poster stated, this discussion hinges on what it means to "be a collection" or to "be a set".

Aczel seems to get into this in the first paper of Volume 1 under "Systems Theory" linked below, with his idea of a replacement system. He goes on to relate this to well founded and anti-well founded ZFC. It seems that Aczel and Barwise have a very "special" view of what a set is. This seems to explain how Aczel is defining a set? Wish I knew more about this.

the uncountable constructable

Now you're "so close"....

Consider all constructable programs that, even though they might not terminate, will only ever output a series of bits to the left of a decimal, followed by a nonrepeating set to the right, followed by a repeating set to the right of that. None of the three substreams of bits is bounded in size and we are including here programs that won't terminate but that will keep spitting out bits.

Now consider programs that read one or more of those outputs and (*if* they terminate) are either relations (producing a single bit of output) or mappings (producing another bitstream as output).

The enterprising constructivist will dutifully identify a subset of these programs for generating numbers, computing relations, and computing mappings. Soon, for example, he might carve himself out the field of Rationals.

Some interesting non-terminating programs can now be written and interesting things can be proved about them. Two examples would be the programs "e" and "pi" which, though they never terminate, can be used as input to the relation-computing program ">=" if the other argument is one of our rationals. Indeed, we can *characterize* a set of non-terminating programs -- proving that each candidate is or is not an example of such program -- where those programs amount to the real numbers. If we add all of these programs to the mix, three things of note happen:

a) none of the programs that pass our test ever produces output inconsistent with the whole mess of them forming the field of Reals.

b) in some cases (some non-provable, presumably) particular uses of the programs will never terminate, thus we don't have an *effective* field, just one that never goes wrong when it is effective.

c) because of proofs similar to the one cited about infinite binary streams, we know we can repeat this trick endlessly.

Conway does roughly that. He writes a non-deterministic program to enumerate the rationals. Primordially, only 0 exists. In one step, the program creates 1 and -1. In two steps it creates 1/2 and 2 and -1/2 and -2. Each of the dyadic rationals can be created in a finite number of steps. It is a bit poetic to say but an accurate characterization of his algorithm that after omega steps, all of the Reals are created along with + and - infinity. You could take a few more steps after that, or you could talk about programs that do an infinite number of infinite-step constructions, and infinity-squared, etc. And you can keep doing this.
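A hedged Haskell sketch of that day-by-day creation story, restricted to the numeric values and ignoring all of ONAG's game machinery (the encoding is mine): each day adds a midpoint between every pair of neighbours, plus one new number past each end.

-- One "day" of creation over the numbers born so far (kept sorted).
nextDay :: [Rational] -> [Rational]
nextDay [] = [0]
nextDay xs = (head xs - 1)
           : concat [ [a, (a + b) / 2] | (a, b) <- zip xs (tail xs) ]
          ++ [last xs, last xs + 1]

-- iterate nextDay [] begins with
--   [], [0], [-1,0,1], [-2,-1,-1/2,0,1/2,1,2], ...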

But, again: you provably can't enumerate all of the programs that satisfy Conway's axioms. Yet no matter how many programs of that sort you make, they all obey the algebraic laws he works out. And, yes, you can make effective programs that know how to add infinity squared to 1, etc.

The layered infinities don't mean we finish constructing an infinite number of programs in finite time -- it just means that we can't effectively enumerate all of the programs we can construct in finite time.

This should be comforting to mathematicians, and also challenging:

It means that not even in principle, and not even given huge numbers of supercomputers (dwarfing the human population) -- not even if we used those to systematically start enumerating every provable theorem in the ultimate catalog -- EVEN THEN there is still infinite(!) room for human creativity in mathematics.

-t

p.s.: understand diagonalization -- it's saying that if you think you have something that enumerates all the programs that produce infinite sequences of bits, then give me your enumerator and I'll give you back another that makes a sequence yours can't produce.
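That argument fits in a couple of lines of Haskell (a sketch under my own naming, with streams modelled as functions from positions to bits): hand me your purported enumeration and I hand back a stream that differs from your n-th stream at position n.

type BitStream = Integer -> Bool

-- Given any enumeration of streams, build a stream the enumeration misses.
diagonal :: (Integer -> BitStream) -> BitStream
diagonal enum n = not (enum n n)
-- For every k, diagonal enum differs from enum k at position k.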

You can't write an enumerator for programs of that type because you can always outconstruct any enumerator. Therefore, that type is an infinite collection.

-t

What was the face of an uncountable before its mother was born

thus we don't have an *effective* field, just one that never goes wrong when it is effective.

Yes, and is only effective a countably infinite number of times.

EVEN THEN there is still infinite(!) room for human creativity in mathematics.

If you have read this thread, you will note that I have explicitly excluded (pure) mathematics from consideration here.

it's saying that if you think you have something that enumerates all the programs that produce infinite sequences of bits

And I'm saying (and so is Turing, and Gödel, etc.) that if you think you have something that produces even one completed infinite sequence, let alone all of them, you have left the realm of effectiveness, i.e. computation, entirely, and long before your interlocutor gets to build his diagonal.

Seriously, it sounds to me like you're agreeing with me about computation (where I'm making my claim) and disagreeing in the realm of pure math or philosophy, which are much "bigger" places, both of which I have excluded from discussion as a point of stipulation.

last try (infinities)

Start by thinking of "infinite" not as a description of the output of a program, but as a qualifier that helps determine the algebraic properties of a program. For example, we can prove that "% yes | head -5" terminates after at most 5 output lines in classic unix -- and we could prove that using some lemma that talked about loops of the form "while (1) ...." such as we find in the "main" of "yes". Just as a technical term, if nothing else, we could call such loops "infinite loops". So, for starters, think of "infinite" as a property that *programs* might be found to have, not their output.

But then it gets complicated because we have to talk about orders and limits.

Let's suppose I define some construction rules for some guaranteed effective procedures, and then define an order over their outputs. If I'm clever, the construction rules for the programs will "parallel" the order in such a way that I get a useful definition of limits. For example, if my programs construct rationals in certain ways, I'll get a useful definition of limits that point in the same direction as classic constructions of the reals -- a rule for always constructing the "next program" in a series of programs whose outputs follow some ordering constraints.

Those rules for limits could be given names. Any particular limit rule that I define I can give a name.

Having done that, I can extend the domain of inputs and outputs to my programs to include the names of these limits. In the resulting algebra of programs, I can have some further extended sense of order among outputs.

Now I'm back in the original situation - with this new order over values that include the names of constructable limits from the previous situation - now I can have an extended order and still more new kinds of nameable limits.

So, I went from terminating programs, to those plus some infinite loops, to those plus some infinite loops of infinite loops. And while my set of programs constructed under these rules is no longer universally effective, nevertheless I can do effective computations with those infinite values that I was able to construct.

So again: "infinity" isn't a description of how much magnetic tape it takes to store a value -- it's a description of an algebraic property of constructable programs.

-t

p.s. useful infinities

if you follow -- and I get that you're in some fraction just baiting me -- if you follow me here, a reasonable criticism you might raise is:

What's the point in naming these limits? Just deal with them directly! Don't postulate them as part of some "infinity" 'cause that ain't manifest.

But no: like imaginary numbers, the infinities are really useful for their definitional algebraic properties. For a fantastic demonstration of the sort that any "computationalist" should drool over, look at how infinities are used in part 1 of the aforementioned ONAG. The appropriately named infinities (e.g., games with infinite and transfinite birthdays) are palpably useful (and intuitively familiar) in effective computations.

-t

Pointful contrarianism only

and I get that you're in some fraction just baiting me

Not in the slightest. I long ago lost my taste for arguing for fun; I now only argue because I think it matters.

In many ways, I haven't said anything in this thread that isn't Computer Science 101 (or maybe 202), but I think you have proven the need for my point: many people have forgotten that mathematics and computer science, though related, are not the same thing.

I know this because I've just remembered it myself, having spent a good deal of time studying pure math concepts in the hopes (at least in part) of gleaning computational ideas and truths.

But the truth is that most mathematics are not computational. This doesn't make them meaningless or un-useful in their own domains and in their own ways; it just means that you can't necessarily get a sensible computational idea out of all mathematical ideas.

Evidently, I'm not the only person who needed reminding.

Category Theory

This is perhaps an allure of Category Theory for computer scientists. Much of category theory is much more constructive than "standard" presentations.

The hidden danger is the most dangerous

Much of category theory is much more constructive than "standard" presentations.

I think CT just makes it easier to squint so that you can pretend the non-constructive parts aren't there. ;-) In some ways, that makes it the most dangerous presentation for computationalists...

The natural identification between operations and arrows is pretty handy though.

Do or do not, there is no try

Start by thinking of "infinite" not as a description of the output of a program, but as a qualifier that helps determine the algebraic properties of a program
<snip>
So again: "infinity" isn't a description of how much magnetic tape it takes to store a value -- it's a description of an algebraic property of constructable programs.

We are in complete agreement on this. Where we differ is how "large" an algebraic property can be by definition.

the truth about formality

mathematical truth is that which can be formally demonstrated by construction

The consistency of your formal system is well-motivated and thoroughly tested. It can be formalized, and it is (by Gödel) equivalent to the non-satisfiability of a Diophantine equation. So there you have it: a mathematical truth which cannot be formally demonstrated.

I used to find the constructivist notion of truth appealing, but have since come to the opinion that the denial of the infinite etc. amounts to a stubborn insistence that "mathematical truth" should be defined differently from normal truth. To those who use a non-standard meaning of "truth":

neither the truth nor the falseness of Goedel's constructed sentences actually exists.

One of Gödel's constructed sentences is logically equivalent to another sentence which expresses the consistency of his formal system.

Attempting to keep this tied to l-t-u, when I say that your formal system has been "thoroughly tested", examples of that are the type systems of programming languages such as Haskell. People know the consistent subsets of these systems, and rely on them daily.

Doesn't Godel's theorem also apply to intuitionistic logic?

My understanding is that Godel's theorem is about the axioms of a formal system, not the deduction rules. It says that you never have enough axioms. Does it not apply also to systems based on intuitionistic logic?

In classical logic, mathematicians have actually found simple statements in arithmetic, which can be shown to be unprovable from Peano Arithmetic, but are provable from set theory (a stronger axiom system). See Goodstein's theorem and the Paris-Harrington theorem.
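For the curious, the Goodstein sequence itself is perfectly computable; a short Haskell sketch (my transcription of the standard definition) is below. Write n in hereditary base-b notation, replace every b by b+1, subtract one, and repeat with the next base. Only the claim that every such sequence eventually reaches 0 escapes Peano Arithmetic.

-- Rewrite n from hereditary base b notation to hereditary base b'.
bump :: Integer -> Integer -> Integer -> Integer
bump _ _  0 = 0
bump b b' n = sum [ c * b' ^ bump b b' e | (e, c) <- digits n, c /= 0 ]
  where
    digits m = zip [0 ..] (go m)
      where
        go 0 = []
        go k = k `mod` b : go (k `div` b)

-- The Goodstein sequence starting from m (base 2, then 3, 4, ...).
goodstein :: Integer -> [Integer]
goodstein = go 2
  where
    go _ 0 = [0]
    go b n = n : go (b + 1) (bump b (b + 1) n - 1)

-- goodstein 3 == [3,3,3,2,1,0]; goodstein 4 takes astronomically many
-- steps to reach 0, but PA cannot prove that it always gets there.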

Is there some reason why similar examples couldn't be found for intuitionistic logic?

Maybe...

My understanding is that Godel's theorem is about the axioms of a formal system, not the deduction rules.

I'm not sure that this is as clear-cut a distinction as it sounds. In some senses, a logic is just a set of deduction rules. Likewise, in some senses axioms are just introduction rules that tell you how to derive objects in your formal system.

When the system you are studying is a logic itself, the lines are even more blurred.

Computationally, this distinction is blurred axiomatically. ;-)

can be shown to be unprovable from Peano Arithmetic, but are provable from set theory (a stronger axiom system)

What if you can also find a stronger system where they are provably false? Which one is right?

Embedding in a larger system only works if you give some special weight and meaning to the larger system to begin with. Otherwise, you are left wondering what you have gained with this new "truth".

Another thing to consider is that the larger system is likely not computational: ZF set theory for example.

Computationally, only computable axioms are admissible. ;-)

(not) Maybe

In some senses, a logic is just a set of deduction rules.

This is a very "syntactic" view, a very "tired" one probably not going to bring much more insights.
There are more creative ways to see logic(s) : What is a logic, and what is a proof ? (Lutz Straßburger).

As for the classic approach to computation, the "unassailable" results are in there: Theory of Recursive Functions and Effective Computability; maybe you have it already.
In such works it is just the "metaphysical perspective" which is wrong not the proofs or results.

How would one know?

This is a very "syntactic" view, a very "tired" one probably not going to bring much more insights.
<snip>
Lutz Straßburger

I'm familiar with the work of the "Frogs" group in Proof Theory.

My position isn't so much syntactic as denying any fundamental difference between syntax and semantics, in the sense that they are both just logical structures. I'll go further and say that I think this is one of the core lessons of computation.

Normally, which of two structures with a map between them is considered syntax and which semantics is partly historical and partly a result of which one "seems most intuitively solid". Generally we expect the semantics to be "bigger" informationally than the syntax, and to give an "intuitive" meaning to the syntax, but, hey, what does that really mean?

In such works it is just the "metaphysical perspective" which is wrong not the proofs or results.

You're right, but I'm trying to neutralize the metaphysical problem by recasting it as an epistemological one: computation is a fairly objective "theory of knowledge". Anything that is invisible to it (whether you want to assign "independent reality" to it or not) can be safely ignored for computational purposes.

I think within the scope of LtU (programming languages), this is the appropriate default position to take, and that when things (such as uncountables, Platonism, metaphysics, etc. ) creep in from outside the scope of computation, it just muddies the waters.

Wooops...

My position isn't so much syntactic as denying any fundamental difference between syntax and semantics,

I strongly disagree.
The "semantics" shows up whenever we are unable to cast our (naïve?) intuitions into a solid syntax.
Your assertion that "which of two structures with a map between them is considered syntax and which semantics etc ..." only applies to the remnants of failed attempts to formalize semantics.

Without indulging in Platonism, metaphysics and assorted murky philosophical stances, is it not the case that we actually want to "mean" things which are beyond the bare syntactic statements we use to talk about them?
The simplest case is the idea of a "number", whenever we talk about numbers and arithmetic we would be willing to make definitive statements, irrespective of the notation and axioms since it seems so "obvious" to us what a number is.
Alas, we all know that we cannot "properly" axiomatize arithmetic and be independent of notation.
This inability is of course the source of all the anguish about foundations and fuels the interest in "metaphysical approaches".
I do think this is indeed misdirected but OTOH that we cannot just ignore this problem of mismatch between "natural intuitions" and formalized languages.

Reminds me of a quote from Warren McCulloch :

"What is a number that a man may know it, and what is a man that he may know a number?"

Activity theory

There is a "commonsense" approach to numbers and math that begins by saying that everything is the result of "activity", closely related to Cybernetics and "activity theory", and Pragmatism. A number is always the result of a process of counting, logic is something that must "happen" by following a process. In this way of thinking everything is an activity. This view seems to be completely consistent with the math we are talking about, but doesn't rely on it. Practical problems are normally understood this way. Anything that can happen can be programmed by following the steps.

Pragmatism

A number is always the result of a process of counting, logic is something that must "happen" by following a process.

Of course, but restricting the view of logic and maths to this means that we get stuck in the "first order", whereas the interesting (and productive) questions are meta-questions, i.e. how this or that logic or theory compares with that other one.

When trying to formalize such comparisons in the "classical" setting of set theory we run into the cardinality problems due to the power set axiom applied to infinite sets.
So the question I deem THE important one is, how do we tackle those hierarchies of "theories about theories" without getting lost in higher-order infinities and ontological conundrums akin to questions about "the sex of angels".
How do we keep our practice firmly grounded in Activity and Pragmatism while developing our ability to deal with "higher order" interesting questions?

As for the pragmatism of C.S. Peirce, a while ago I have been discussing quite a bit about this on both the IEEE Standard Upper Ontology mailing list and Conceptual Graphs mailing list, most often siding with John Sowa, but my opinions about Peirce and the relevance and roles of ontologies have diverged widely since then.

This thread is getting more and more remote from the subject matters relevant to LtU; if we have to pursue this discussion maybe we should move it to another place.
Does anyone have a blog or suchlike focused on these matters?
I don't really have a blog myself, but I happen to have set up a "scratch" one for absolutely unrelated purposes, and it stands empty.

Foundations

This thread is getting more and more remote from the subject matters relevant to LtU; if we have to pursue this discussion maybe we should move it to another place.

In a thread about Gödel and Nagel, this doesn't seem so out of place to me. PLT needs foundations just like mathematics does, and of course, the two have close connections.

Does anyone have a blog or suchlike focused on these matters?

What would the description of such a blog's focus be?

I'll second that this seems

I'll second that this seems on-topic enough.

Synthesis

"Activity" thinking, or more precisely the word, is entering the computer language arena in a big way. The word is used is UML and also in "work flow". So far "work flow" and UML are not programming languages, but an activity representation programming language is not hard to conceive of. Also "intelligent agents" are surely activities, but one could start with activity without introducing agents as such. But that is another story. And surely "reactive programming" is about activity. But does anyone really understand "activity"? This is where Pragmatics, and the imported Russian "Activity Theory" and also General Systems Theory are useful, as well as the other topics of this thread.

This is an area in desperate need of synthesis and the math may be just the right thing here. I think of it as an extension of Ashby's work from the '50s and '60s. The topics of this thread fix a number of problems with Ashby's Cybernetics. Not only math but a "scientific" use applied to "computing" and "computers" wherever they occur.

Observer activity

But is "activity" really a basic system theoretical concept? Modern approaches of so called "second order cybernetics" ( v.Förster, Maturana, Luhmann ) use difference theoretical arguments and presuppose an observer making distinctions. So when "counting" is an activity, how do we distinct it from others? How can we observe someone who is "counting" ( Wittgenstein has tortured himself with such questions )? How does an observer know that he counts? He can only distinct it from other distinctions. As it seems activity and formality, the action and the idea always have to be co-present and a systems theory would reconstruct those dependencies. The integration of the observer into the system that constructs itself from its own activities ( autopoietic circle ) can be considered somewhat handwaving as the continuation of subject philosophy being prefigured by Hegel.

I'm not entirely sure what it all has to do with software and programming languages. Where is the observer? There is this funny Cartesian analogy by Wadler who equates FP with the res cogitans, the world of state and effect with the res extensa, and monads with the pineal gland. Damned, this is so 19th century. Might computing science be finally nostalgic - the last refuge for classical ontologies? I'm getting bored...

Well, basically your agenda

Well, basically your agenda seems to go way beyond the needs of simply defining activity. As far as this discussion is concerned an activity is simply a behavior, probably persisting for some period to be interesting. An activity causes something to happen. The definition doesn't need to invoke software, but in our age any useful activity would probably have programmed parts. Keeping in mind that in order to meet the requirement of doing something there will be "boundaries" and often physical devices at the inputs and outputs. An activity must carry out complex actions across these boundaries and be able to read back, or parse conditions beyond these boundaries. The environment being the other side of the boundary.

There is no need to discuss issues of "intelligence". Some people may want to talk about "intelligent agents" for instance, but agent systems are not normally "intelligent" enough to warrant the use of the word. The practical goal of activity should be to effectively deal with an environment. I think of activity as phase zero in the quest for intelligent systems. Quite some time ago I wrote an article on this you can find here.

By the way I don't like defining things as if there were nothing else to it. Properly defining activity in large software systems seems like a challenge to me, but I think it could be done on a case by case basis using the ideas of systems thinking. Systems require boundaries and information exchange across boundaries. In a way you can't define software unless you can define the activity(s), and the system(s) and the boundaries.

Edit: A boundary can probably be defined as an information exchange. If the exchange is determinate there is no information and no boundary.

Synthesis?

Anton van Straaten

What would the description of such a blog's focus be?

Something along the line "Pragmatic Foundations of Computation"?

Philippa Cowderoy

I'll second that this seems on-topic enough.

We are eagerly awaiting your contributions tho... :-)

Hank Thediek

But does anyone really understand "activity"? This is where Pragmatics, and the imported Russian "Activity Theory" and also General Systems Theory are useful, as well as the other topics of this thread.

Mmmmm...
Not quite the direction I would lean to; psychologism in maths or computing isn't likely to be of any help when it comes to formalisation.
It may be useful for practical purposes, interface design, ergonomics and the like, or else for enhanced creativity in the "ruminations" phase as reported in The Psychology of Invention in the Mathematical Field by Jacques Hadamard.
But I contend that General Systems Theory is fluff and absolutely worthless bunk.

The topics of this thread fix a number of problems with Ashby's Cybernetics.

Interesting, could you elaborate?
Ashby's works weren't so much "problematic" as incomplete, due to this being seminal work in the field. Cybernetics fell unjustifiably (IMHO) out of fashion.

Information Theory

Of course, but restricting the view of logic and maths to this means that we get stuck in the "first order", whereas the interesting (and productive) questions are meta-questions, i.e. how this or that logic or theory compares with that other one.

This is an interesting problem, and I won't pretend to really understand it. But what about information theory? Isn't that a metatheory of classical systems? If this makes no sense please disregard.

meta-questions

This is an interesting problem, and I won't pretend to really understand it.

You'll probably be at ease with this if it is restated as "how this or that program or algorithm compares with that other one".

Whenever one wants to ask such kinds of questions, and beyond the nastiness of the details, one inevitably meets the dreaded higher infinities lurking in the domains and ranges of the analysed programs.

The epitome of this is the "famous" halting problem.
It can be stated thus: there are (according to conventional ZFC wisdom) too many possible programs (alleged functions) from N to N for a program of finite size running a finite number of steps to be able to sort out the "correct" ones from the looping ones.

But the halting problem is not really an interesting question, aiming at too broad a scope.
Being able to compare algorithms and implementations would be more useful; alas, the same "theoretical limits" pop up there too.
The interesting problem is how far we can venture into these questions while keeping within the bounds of feasibility.

But what about information theory? Isn't that a metatheory of classical systems?

Not really a meta-theory and anyway information theory is about finite quantities of information.

I wouldn't want to offend you but I had a look at your activity paper and it appears as "yet another flavor of GOFAI".
The question of control versus rules is indeed an important practical one but doesn't touch much on the theoretical aspects.
If you are still involved in expert systems, do you know about ripple down rules? It's the only noticeable improvement I have seen over the GOFAI of the 80s; see also Debbie Richards and Paul Compton, the initiators of the idea.

General case vs. specific case

Being able to compare algorithms and implementations would be more useful; alas, the same "theoretical limits" pop up there too.

Of course this only applies in the general case: there is no "one size fits all" solution to this problem.

In individual concrete and practical cases, though, it is often fairly straightforward to prove that two relatively small programs have the same behaviour. (Informally, this is the basis of the whole idea of refactoring.)
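A tiny Haskell illustration of the concrete-case point (my example): the two definitions below are small enough that the map fusion law, map f . map g = map (f . g), settles their equivalence, with no appeal to the general (undecidable) problem.

before :: [Int] -> [Int]
before = map (+ 1) . map (* 2)

after :: [Int] -> [Int]
after = map ((+ 1) . (* 2))

-- before [1,2,3] == after [1,2,3] == [3,5,7]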

Perhaps this is exactly what you mean, but I'm not sure, because you seem to be alternating between a strongly constructive, pragmatic position and a very broad philosophical one. Or maybe it just seems that way to me. ;-)

Useful or Intelligent

I wouldn't want to offend you but I had a look at your activity paper and it appears as "yet another flavor of GOFAI".

I am not offended; in fact I consider it a compliment. But to be more specific it is an "activity flavor"; by now this is probably post-GOFAI but not really "with it" any more. I don't see it as a solution to the "AI problem", whatever that may be, but as good computing. Intelligence doesn't exist independent of the world we live in, and the more computing is linked to that world the more useful it will be. The problem is still getting data in a computable form, not using the data.

Relationships

Not really a meta-theory and anyway information theory is about finite quantities of information.

I guess what prompted my question is the way the information measure 2^N shows up in set theory (cardinality?). I am not suggesting that classical information theory would be useful as is, but perhaps extended somehow. In fact there does seem to be a relationship.
Edit: For example, Ashby's requisite variety "theorem" on page 206 here is of course based on bits per second, but could be expressed as total bits (i.e. cardinality); this would seem close to what Aczel is doing. But don't ask me for the details.

Ripple down rules look interesting I will probably look into it.

minds and machines ramblings

I finally ended up creating a blog to debate such topics, in spite of my laziness... ;-)

Free Books

While reading the Co-Logic slides I found these books, incredibly free, and on line: Situation Theory by Peter Aczel and Vicious Circles by Moss and Barwise.

Edit: Actually this is Aczel's original 1988 paper: Non Well Founded Sets introducing the provocative notion of the Anti-Foundation Axiom, AFA.

AFA

Non Well Founded Sets by Peter Aczel and Vicious Cycles by Moss and Barwise

I've read both, and I have to recommend Barwise and Moss as an excellent and thought-provoking read. (I've actually read it through twice I like it so much.)

BTW: it's Vicious Circles

Aczel is probably a bit more advanced; you may want to read at least part of the other one first.

Bourbaki set theory

I wonder if anyone out there knows the relationship between "Bourbaki" set theory and ZFC minus WF plus AFA. This is a bit historical and about connecting the dots. W. Ross Ashby suggests the use of Bourbaki set theory as a basis of Cybernetics in "The Set Theory of Mechanism and Homeostasis". Ashby notes: "That the method makes intuitively obvious how the quantity of information would be measured is not the least of its advantages."
BTW: Cybernetics is a "systems theory" that preceded the modern versions based on Aczel and Barwise.

Identity

Well, I think I just answered my own question: "Set Theory is Self-Contradictory". In Ashby's defense, he both condones the use of identity and explains what is wrong with it!
BTW: The link is a .doc file.

Inconclusive as usual

If I may interject in the dispute between Marc Hamann, Thomas Lord, Rob Dockins and Hank Thediek.
All this looks like it has been rehashed over and over, and maybe there is something flawed in the approach itself rather than in the (putative...) "conclusions" of each of the parties.
Namely the focusing on ontological qualities of mathematical entities, procedures, practices, etc...
Whether such or such a set, collection or infinity "exists" or not isn't likely to be the "right" question to ask.
It seems mathematicians of all "metaphysical" persuasions are nonetheless doing valuable work which they do recognise as valuable in spite of strongly incompatible foundational ideas, and that heated discussions only arise when the more "philosophic" subject matters of what mathematics are or mean come to the fore.
If you still want to keep with this fruitless haggling I suggest you at least look at more recent works, like Nik Weaver's Mathematical conceptualism, which BTW is perplexing for people like Solomon Feferman and John Baez.
I think that the approach of Andrej Bauer et al. is more likely to bring productive insights into mathematical practices than ages-old "scholastic" disputes about ontology.

Aquinas can have his angels

Namely the focusing on ontological qualities of mathematical entities, procedures, practices, etc..

I think you are missing the point, Jean-Luc. In computation, there is a clear, objective standard of ontology: can I write a finite program that will produce "it" (whatever "it" may be) as output in either a) a finite amount of time or b) in an infinite amount of time, but I can stop it on any step and I will be closer to it on step n+1 than I was on step n.
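For concreteness, a small Haskell sketch of case (b) (my example): the Leibniz series for pi never finishes, but every prefix is a finite computation, and each additional term brings the approximation closer.

-- Partial sums of the Leibniz series: 4 * (1 - 1/3 + 1/5 - 1/7 + ...)
piApprox :: Int -> Double
piApprox n = 4 * sum [ (-1) ^ k / fromIntegral (2 * k + 1) | k <- [0 .. n - 1] ]

-- piApprox 10 == 3.0418..., piApprox 1000 == 3.1406..., and so on.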

I won't claim there is no fuzziness in this at all, but compared with pure mathematical ontology, or philosophical ontology, this is pretty darn concrete and unassailable.

angels...

I don't think I am missing anything Marc since I agree with all of your reply but the "missing the point" statement.
To prevent possible further disagreements let me restate what I understood.
- Every program is a finite object.
- Every computation is a finite object too, even in the case of an endless approximation (like the decimals of pi).
If you deem this to be the ontology of computations, of course "this is pretty darn concrete and unassailable" but you are overlooking the distinctions between a program, an algorithm and a function.

Every function has a countable infinity of possible algorithms.
Every algorithm has a countable infinity of possible implementations (programs).
Also, every value, be it an input or an output, has a countable infinity of possible encodings.

Therefore the ontology of functions and values, the "real semantics" of programs, isn't all that "concrete and unassailable"; it deals with infinite equivalence classes over infinite domains.

These are the "angels" which we at loss to circumscribe and tame.

What I am criticizing is the attempt to elucidate the nasty questions of what computations or mathematics are, or what they mean, by the old scholastic ways of pondering over what "is" and "is not", higher-order cardinals, transfinites, and blah, blah, blah...
Do you have to "believe" in the continuum hypothesis or not?

This is USELESS, USELESS, just go the opposite path, start with what we are actually able to do (intuitionist like).

It seems to me that research on Finite Presentations of Infinite Structures as exemplified by the works of Blumensath, Khoussainov, Colcombet has some chance to clear up the matter AND to have some impact on the actual practice of maths and software engineering.
But who knows...

The middle path

First of all, thanks for the references, Jean-Luc. There is some interesting stuff there and I look forward to exploring it further.

This is USELESS, USELESS, just go the opposite path, start with what we are actually able to do (intuitionist like).

I see that I misunderstood your previous post. You're actually stricter than I am in the ontology you are proposing! ;-)

My aim in this thread is more modest than yours: I simply want to make an argument to other mathematically-inclined people to be careful in transferring knowledge they've gained mathematically into their computational reasoning.

Someone who isn't ready to accept that modest position yet probably won't find your position very sympathetic at all. ;-)

I'll probably continue to straddle the fence in practice: impossible things are fun to think about. ;-)

I'm just going to be more careful to separate the two things.

fun

impossible things are fun to think about. ;-)

Then you will relish Martín Escardó's Seemingly impossible functional programs

Lots o' fun, but quite possible ;-)

Then you will relish Martín Escardó's Seemingly impossible functional programs

Thanks for the link, though I had already seen this via sigfpe's blog.

Having also had a chance to look at your other links, I can report that a lot of it gives me the same reaction as I had looking at the "impossible" function: if you are a computational constructivist, these things are obvious. ;-)

(Don't take this as a criticism of these proofs: I'm fully aware how much brilliance can go into rigorously proving the "obvious".)

What is being shown is to my eye how strong a constraint on infinities co-induction is; one can interpret the impossible function as a proof that co-inductive infinities are actually "smaller" than countable ones, in the sense that each finite approximation represents a finite partition of "uncountable sets", so at each step you are covering way more space than the corresponding induction. (Kind of like collapsing a large space of quantum possibilities down to one when you observe it...)
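For readers who haven't followed the link, the core of Escardó's construction fits in a few lines of Haskell (this is my paraphrase, with streams modelled as functions from positions to bits): it searches the Cantor space of boolean streams for a witness to a total predicate, and it terminates because any total predicate can only inspect finitely many positions.

type Cantor = Integer -> Bool

-- Prepend a bit to a stream.
(#) :: Bool -> Cantor -> Cantor
b # s = \n -> if n == 0 then b else s (n - 1)

-- Find a stream satisfying p, whenever the total predicate p is satisfiable.
find :: (Cantor -> Bool) -> Cantor
find p =
  if p (False # find (\s -> p (False # s)))
    then False # find (\s -> p (False # s))
    else True  # find (\s -> p (True  # s))

exists :: (Cantor -> Bool) -> Bool
exists p = p (find p)

-- exists (\s -> s 3 && not (s 7)) == True, computed in finite time.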

Furthermore, computable infinities are actually more constrained than computable finites (i.e. all finites), because they have to be predictable in some sense. (And we start off knowing that "uncountably" many are not computable.)

I'm very excited to see that so much work has been done to prove from first principles ideas that I have been taking as "axiomatic."

To devil's advocate though on the side of our friendly neighbourhood pure mathematician, he has an easy proof that these proofs don't work in pure math without constructivist assumptions:

By axiom of infinity, N exists.
By exponentiation in Set, 2^N and 2^(2^N) exist.
By Turing, there are uncomputable elements of 2^N and 2^(2^N).
Let X ∈ 2^N (a boolean stream) such that X is uncomputable.
Let φ ∈ 2^(2^N) (a predicate on boolean streams) such that φ is uncomputable.
Anything uncomputable is not computable in finite time.

Ergo, ∃ a predicate and an infinite binary stream such that the predicate is undecided in finite time.

In other words, unless they are willing to restrict themselves to computational methods this won't fly.

However, these great proofs might give them a reason to want to restrict themselves in this way...

Link

Here is an interesting paper that seems to deal with all aspects of this thread: "Coinductive Models of Finite Computing Agents"

Interesting paper! I thought

Interesting paper! I thought this looked somewhat familiar, and a later paper by these same authors was my first post on LTU! Good times... :-)

Seems they're continuing with the same theme: that Church-Turing is a weaker model of computability than their "Persistent Turing Machines" based on coinduction, I believe.