Most powerful terminating semantics?

I was curious whether there is any sort of consensus on the most powerful semantics known so far that still guarantee one can only write programs that terminate. Phrased another way, languages that are "as close as possible to being Turing complete without actually being Turing complete." Most powerful in this context means something like, "the language for which there exists a function mapping all possible programs written in it to equivalent programs in the domain of all-possible-programs-that-could-be-written-in-a-Turing-complete-language with the largest image compared to all other such functions for other languages".

Stating the obvious

I get the gist of your question, but to answer a related question you didn't ask just so we don't start going down the wrong road:

There can't be a "largest" such language L, because if there were then there would exist terminating programs which could be written in a Turing-complete language but not in L. Without loss of generality, choose such a program, P. Call L' the (non-Turing-complete) language L extended with a primitive which implements P. L' is "larger" than L. Rinse and repeat.

Diagonalisation

There can't be a "largest" such language L, because if there were then there would exist terminating programs which could be written in a Turing-complete language but not in L.

This sure looks like a standard diagonalisation-style argument, but I can't quite follow it—why would there necessarily exist terminating programs which could be written outside of the putative language L? That is, is there any obvious reason why one couldn't have a language in which precisely the terminating programs were write-able?

Halting problem

That is, is there any obvious reason why one couldn't have a language in which precisely the terminating programs were write-able?

Your language would then be a "machine" that solved the Halting Problem.

In what sense?

It seems that, to use this putative language L as a machine to identify whether or not a program P halted, I would first have to be able to determine algorithmically whether or not P could be written in L. While it's clear that I could simply enumerate the class of write-able programs, it's not so clear to me that I can easily determine membership in this class. Am I missing something?

It's a valid point

I have the same objection to the argument as stated.

A largest terminating language L would not solve the Halting Problem. Solving the Halting Problem would require a program that can decide precisely which programs can be translated to L.

A simple question

A largest terminating language L would not solve the Halting Problem.

Then how did you construct or find L in the first place?

Finding L

Then how did you construct or find L in the first place?

I assume you're asking how you'd know L has largest terminating semantics. I'd consider that to be a separate condition from having an L with largest terminating semantics.

Anyhow, the question isn't whether L can process any terminating program, nor even whether programs can be translated to L by an algorithm. The question is whether every terminating algorithm can be implemented in L.

The perils of computational thinking

The question is whether every terminating algorithm can be implemented in L.

... and only terminating ones. Both have to be stipulated, otherwise L's existence is trivial.

I assume you're asking how you'd know L has largest terminating semantics. I'd consider that to be a separate condition than having an L with largest terminating semantics.

I know that these issues are tricky to think about. (Which is why I think it is worth having this conversation.)

While it is true that in pure mathematics you are (often) allowed to make distinctions like that, in computation, which is necessarily constructive, you can't. The whole notion of programs as proofs rests on this: to say that something is true is to show how to build it.

If you could produce a programming language L with the termination properties under discussion (all and only terminating programs expressed), you could also produce a halting oracle. This is just a basic fact of PLT, though I understand that it takes a bit of thinking about it to see that it is so.

Some of us have probably thought about it way too much by this point. ;-)

Not an oracle.

If you could produce a programming language L with the termination properties under discussion (all and only terminating programs expressed), [...]

Express-able, not expressed. And I'm not quite sure what 'program' means. Is the goal to express the most algorithms (i.e. in terms of bisimilar performance characteristics)? Or just allow some expression of all terminating functions?

In any case, language L wouldn't need to interpret all terminating programs as written in their original language. Rather, the terminating program P would need to be re-expressible as a P' in L that maintains certain observable properties (certainly the input->output mapping, possibly performance, maybe various intermediate states).

While it is true that in pure mathematics you are (often) allow to make distinctions like that, in computation, which is necessarily constructive, you can't. The whole notion of programs as proofs rests on this: to say that something is true is to show how to build it.

Many proofs, including that involving the Halting problem, are non-constructive. They aren't performed by computation, but rather by modeling of computation and proofs over those models.

[...] you could also produce a halting oracle. This is just a basic fact of PLT, though I understand that it takes a bit of thinking about it to see that it is so.

Care to point me towards a proof?

As I understand it, even if I had L, and even if I had a proof that L has the desired properties, I would still not know how or even whether I could take some program P written in not-L and produce a corresponding P' in L.

What I would know is that (a) if I do manage to create P', then P must also terminate for all inputs. (b) if I can prove that P terminates, then I have also proven P' exists, even though I don't know how to express it.

That isn't an oracle by any definition I have for the word...

A simpler take

Due to the number of terminological issues that are coming up, I'm starting to feel that I would need to write the equivalent of a textbook to make sure we were on the same page.

(If anyone has ever wondered why theoretical texts spend chapters building up definitions this is why... ;-) )

If you are really interested in this stuff, I'd recommend tackling a standard computability textbook or two, a good type-theory book, and some reading on the foundations of mathematics and constructivism. Just some light reading. ;-)

But since that might not seem like a helpful suggestion, let me try to explain what I've been saying very directly and concretely, because, as I said before, I think this is a very important topic and harder to think about than people often suppose.

So here's what I suggest. Let's think about what we would have to do to define L starting from known Turing-complete and strongly normalizing (i.e. all valid programs terminate) languages. (This is basically an expansion and explication of Douglas McClean's argument at the beginning of the thread.)

If I start from a Turing-complete language (select your favorite one), I have to find some way to reduce the power of the language by taking away some primitives (possibly replacing them with weaker ones) or by adding a filtering mechanism (such as some typing regime) to eliminate non-terminating programs.

If I start from a strongly normalizing language (such as the simply-typed lambda calculus), I have to add primitives or relax type restrictions to add power to the language.

Now the danger with either approach is that I go too far: I could take away too much power from the Turing-complete language so that I was also eliminating perfectly fine terminating programs, or I could add too much power to the strongly normalizing one, making it Turing-complete.

So at each step along the path to building this language L, I would have to prove that the changes I had made to my language have not added non-terminating programs to the language, nor removed any terminating ones.

How could I check this? I would have to have some way to define exactly which programs that could be written in this language were terminating and which were not, so that I could show that I hadn't gone too far.

But any such method to prove or test the termination properties of the programs that I've added or removed would be equivalent to a solution to the Halting Problem, and therefore can't exist.

So there is no way to build language L by adding to or removing from any known language of any known computational power.

If you still don't find this convincing, I'm not sure what to tell you. If that is the case, let's just agree to disagree.

You skipped a step

In a strange sort of way, you just offered a reductio ad absurdum of a different kind for L by showing that you couldn't know if you could translate an arbitrary program into it or not.

But back up a step. How would you prove that L could express all and only terminating programs unless you could show that you could express all terminating programs in it? Since, as you point out, you can't identify all terminating programs in general (as a consequence of Turing's result) you can't prove that L has this property.

Which means that you can't build an L with this property, because if you could, the construction of such an L would be such a proof, and this would be a contradiction.

I hope that clears this up. ;-)

Not quite, Marc

L Spice wrote: [...] to use this putative language L as a machine to identify whether or not a program P halted, I would first have to be able to determine algorithmically whether or not P could be written in L. [emphasis mine]

Just because we could not provide an algorithm to perform the translation does not mean we could not know (non-constructively) that:

  • if P is terminating then it must be possible to express P in L
  • if P is expressed in L, then P is terminating

What would remain is the challenge of figuring out how to express P in L. After all, (1) no terminating algorithm will exist to translate an arbitrary terminating program P to L, and (2) no terminating algorithm will exist to decide whether an arbitrary (possibly non-terminating) program P can be expressed in L.

Oh, quite

does not mean we could not know (non-constructively)

OK. Then I have a proof that you might like.

Let L be the tooth fairy. Now, of course, I can't show that L actually takes teeth and returns money for them, because that would be impossible, even though that is the very definition of the tooth fairy.

But that doesn't mean that L doesn't actually do this, or that L doesn't exist. So L exists. QED

Incontrovertible logic, no? ;-)

...

...

Uncalled for

That was entirely uncalled for. Whether or not you are right, and whether or not you have extensive knowledge in this area, you haven't made your case in this thread. Your interlocutor has been entirely civil and at least moderately informed. If you know something he doesn't, you are free to spell it out, or provide a reference as he requested, or refrain from responding. Neither telling him to go read a "textbook or two", nor bringing in the tooth fairy, qualifies.

Counter-example

Your language would then be a "machine" that solved the Halting Problem.

As much as I am loath to take sides with David in an argument [ ;) ], here is a simple construction of a language consisting of ALL and ONLY terminating (Turing machine, input) pairs:

Enumerate all such pairs in an arbitrary countable sequence. Arbitrarily choose a schedule for advancing the state of each Turing machine, one machine and one step at a time, such that eventually every machine on the list will receive arbitrarily many updates. As any (Turing machine, input) terminates, we assign it the next natural number. A program in our language then consists of a single natural number identifying a (Turing machine, input) under this assignment.

Every terminating (Turing machine, input) is eventually enumerated. Every program terminates.
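
For concreteness, here is a minimal sketch of that dovetailing schedule in Python. The generators standing in for (Turing machine, input) pairs, and all the names here, are illustrative assumptions, not part of the construction itself:

def dovetail(machines):
    # machines: a list of zero-argument functions, each returning a
    # generator; a "machine" halts when its generator is exhausted.
    # Yields machine indices in the order they are observed to halt,
    # which is exactly the numbering that defines the language.
    active = {}
    added = 0
    round_no = 0
    while True:
        round_no += 1
        # During round N, machines 1..N are on the schedule, so every
        # machine eventually receives arbitrarily many updates.
        while added < min(round_no, len(machines)):
            active[added] = machines[added]()
            added += 1
        for i in sorted(active):
            try:
                next(active[i])        # advance machine i by one step
            except StopIteration:      # machine i has halted:
                del active[i]
                yield i                # it gets the next natural number

"Program n" in this language then denotes the nth index yielded; every terminating pair is eventually yielded, and only terminating pairs are.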

Not so fast

here is a simple construction of a language consisting of ALL and ONLY terminating (Turing machine, input) pairs:

Do you not see that you have just defined a machine to solve the halting problem? That should be a clue that you have gone wrong somewhere.

Here is where:

Enumerate all such pairs in an arbitrary countable sequence.

Though these might be countable, they are not finite. You wouldn't finish iterating through the first round of updates.

The halting semi-problem

Do you not see that you have just defined a machine to solve the halting problem?

No, it is an enumeration of the program-input pairs that terminate. Since that set is recursively enumerable, there is no problem. It does give a semi-decision algorithm for the Halting Problem, but just running the program already gives that.

For the record...

Let us set aside this digression for a moment.

Is this really the contribution that Charles and Matt want to make to this discussion?

If they really think that dmbarbour and L Spice (and I'm sure others) will have a clearer understanding of the issues here by going down this path, let me know, and I'll be happy to cede the discussion to them.

For the record

The counter-example above is of absolutely no value as an actual language, and I'm quite sure it isn't what L Spice had in mind when he asked the question. But if you revisit David's post to you here, I think you'll find that the logical possibility of something like this counter-example was exactly the nitpick he was making (at least that's how I interpreted his post).

Finite recognition

But if you revisit David's post to you here, I think you'll find that the logical possibility of something like this counter-example was exactly the nitpick he was making (at least that's how I interpreted his post).

Ok, I can see how you're reading his comment that way, though I didn't. If I had thought he was just nitpicking about a topic he really understands then I wouldn't have bothered to respond to him. ;-)

Yes, from the strict definitions of computability taken by itself, the recursively enumerable definition you gave is a language in the formal sense, i.e. a potentially infinite set of finite strings with a finite generation mechanism.

When we talk about a programming language, however, we normally include the notion that the language must be finitely recognizable.

To spell this out for all and sundry, finitely recognizable means that you can always tell if a given finite string, i.e. program, is a member of the language or not in a finite amount of time. (This is normally totally independent of its termination properties when executed as a program.)

I'm sure you'll agree that the recursive enumeration you gave does not have this property, and that if you could pull off this trick such that the resulting language was finitely recognizable, it would be a solution to the Halting Problem.

I have been assuming that L is such a finitely recognizable language throughout the discussion.

Hopefully this added explanation will satisfy the nitpicks and not unduly confuse those who are really trying to get a grip on this stuff for the first time. (Whether dmbarbour is one of these people or not. ;-) )

A simpler example

I don't think finite recognizability is the linchpin issue the way I phrased things (valid programs consist of an arbitrary natural number, which is easily recognizable). A simpler variant of this scheme would be to take programs to be (Turing machine, input, #steps) triples. Validating a program consists of running the machine for #steps on input and verifying that it is now in the accepting state. The point is that by requiring #steps - a certificate of termination - as part of your program, you've eliminated the need to check an arbitrary Turing machine.
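
A sketch of that validation step in Python, assuming a machine is modeled as a step function plus an acceptance predicate (both illustrative stand-ins for a real TM encoding):

def is_valid_program(step, accepting, initial_state, steps):
    # Run the machine for exactly `steps` steps; the (machine, input,
    # #steps) triple is a program of the language iff the machine is
    # then in an accepting state. The loop is bounded up front, so
    # validation always terminates - no halting analysis is needed.
    state = initial_state
    for _ in range(steps):
        state = step(state)
    return accepting(state)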

Note that if, instead of modeling programs as denoting computations that yield a single answer, you model them as producing total computable functions N->N, then you can't have any computable language that is capable of producing all and only total computable functions.

Simplicity out the window ;-)

A simpler variant of this scheme would be to take programs to be (Turing machine, input, #steps) triples.

Are you familiar with Chaitin's work? He does something similar in his definition of Omega, the quintessential computably-defined but uncomputable number. Interesting reading, if you are so inclined.

Perhaps I'm misunderstanding you (I partially misunderstood you on your first pass at this, and I'm mostly re-interpreting you based on my reading of Charles' response, so this is entirely possible).

However, it seems that you are saying that, because you have defined the output of your r.e. language generator to be the natural numbers, and since it is easy to tell when something is a natural number, then I can always just ask "Is this a natural number?" and then I know whether or not that "string" is in the language.

In a sophistic kind of way, I can see how this can be considered true, but there is an element of circularity to it, since your language won't be fully defined until the end of time. In a sense, your natural numbers aren't really a finitely recognizable computational language, but an infinite set of promissory notes that will be filled if you wait long enough.

The language that is doing the real work is the one in which you are encoding your Turing machine and the inputs (and the steps if you include those).

So let me refine my definition of "finite recognizability" to exclude this: a finitely recognizable language has a finite compositional basis. Hopefully, you will understand this concept intuitively, because I have a feeling that formalizing it beyond reproach would take many volumes to nail down... Suffice it to say that any real PL would have this property, so I would declare the r.e. examples out of scope in the context of this thread. ;-)

Heading out the window

Are you familiar with Chaitin's work

I didn't recognize the name (for quite some time I had the bad habit of not noticing the names associated to things I read), but I actually have a couple of his books and am familiar with some of his work.

In a sophistic kind of way, I can see how this can be considered true, but there is an element of circularity to it, since your language won't be fully defined until the end of time.

Well, given a natural number N, you don't have to wait until the end of time to find the Nth Turing machine to terminate - that will happen after finitely many steps. I wasn't trying to engage in sophistry here... pedantry maybe :). I agree with your comment to Charles that in the spirit of the original question, this sub-thread has gone way off topic and is probably not productive, so I'm certainly not going to take up authoring those volumes nailing down your new definition. ;)

Exactly what I had in mind!

The counter-example above is of absolutely no value as an actual language, and I'm quite sure it isn't what L Spice had in mind when he asked the question.

Actually, I'm a mathematician, and (as Marc pointed out) I and my ilk are perfectly happy with (perhaps even prefer?) truths that are revealed by arcane and useless methods; so, in fact, you have answered precisely the question that I was asking. (In fact, if I may offer high praise, I think that this example is not only arcane and useless, but also beautiful!)

Oops

I actually got you confused with dataangel, the original poster. Sorry. And I'm a mathematician (by training) as well, BTW.

Dead end

To the forum topic: I think this whole thread is a dead end, because to write a program that you know is terminating is to have a reason to know that it is terminating. The issue isn't expressing recursive programs, it's expressing programs that can be shown to terminate. I think termination-proof-carrying-code, for a proof-theoretically strong language, is the best way to do that.

To this thread: every third post seems to be confused about the difference between recursive sets and recursively enumerable sets. Come on, guys! You're bright, you've probably all understood this stuff in the past, there's an adequate Wikipedia article available to refresh your memories...

Live beginning

I think this whole thread is a dead end, because to write a program that you know is terminating is to have a reason to know that it is terminating.

Remember, Charles, that the original question was not about formal languages or termination proofs or even programs per se, but about programming languages and their power.

Some people, who quite reasonably have not spent most of their adult lives studying this stuff as others of us have, seemed to have had a legitimate confusion about a foundational issue. (Though maybe there were trolling cats among the pigeons... ;-))

I think it is legitimate and worthwhile to try to offer simplified explanations and point people in the right direction without having to recapitulate the history of computability and PLT. (Though in my experience, extremely hard to pull off around here...)

Fast enough

You don't iterate through all pairs in successive rounds of updates, though. As an example schedule, you could iterate through (machine, input) pairs 1 through N during round N. Eventually every machine gets arbitrarily many updates.

The reason this isn't a solution to the halting problem is that there is no effective way to start with an arbitrary (Turing machine, input) pair that does not terminate and realize that it isn't in the language.

Does this need a lemma?

Doesn't this assume that adding such a primitive wouldn't cause L' to be Turing-complete?

It still consists only of terminating programs

Since (I think!—maybe it's this subtlety to which you were referring?) only terminating programs can be written in the new, enhanced language L′, it's still not Turing complete.

It does not...

P was selected as a 'terminating program'. Adding a primitive to L that can execute exactly one more terminating program will not make L Turing-complete.

Yes but...

Can you know that you will always be able to add a primitive that only enables you to execute exactly one additional terminating program? My suspicion is that after a point adding new primitives will inadvertently jump you into full Turing completeness, even if very indirectly.

You can if...

You can be sure if the primitive you add is a top-level alternative, like:

Program ::= Normal-Program | Added-Primitive-1 | ...

If you were to add it as a new statement that could appear anywhere, I share your suspicion.
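
A minimal sketch of that shape in Python (run_normal and added_primitive_1 are hypothetical stand-ins for the original language's interpreter and the baked-in terminating program):

def run(program):
    # The added primitive is a top-level alternative only: it cannot
    # occur inside a Normal-Program, so nothing in the language can
    # compose with it or feed it inputs.
    tag, payload = program
    if tag == "normal":
        return run_normal(payload)        # the original language L
    if tag == "added-primitive-1":
        return added_primitive_1()        # the added terminating program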

Is that even safe?

If we were basing this on say the lambda calculus, you can't write a free-variable-less term (a program) without it being a lambda, and thus taking inputs. Inputs mean your program is subject to changes in behavior. You could wrap any given program in a lambda with an unused parameter, but then you've just made your program consume a dummy input before it uses real inputs. You can do this ad infinitum, but eventually any such wrappers can be discarded by just supplying more and more parameters.

Unless your program was a lambda followed by arguments (an application). But then your arguments in turn must be functions that take parameters, which we could pass things into (any of the other 'top level' primitives) to change the behavior of.

It's not obvious to me you're safe.

Composition? Symmetry? bah!

The approach described by Douglas is a bit more like the esoteric HQ9+ language. You simply add a new program as a toplevel primitive that cannot be composed into 'Normal Program'.

Composition? Symmetry? bah! who needs them?

Don't assume that 'adding a new primitive to execute a new program' implies said primitive must fit in or compose with the rest of the language.

Not so fast...

Can the set of programs representable by successive L' ever reach the same cardinality as some yet-undiscovered-lets-us-express-'most'-recursive-programs language? IOW, sure it's infinitely big, but we have mathematical means for ranking the 'bigness' of infinities.

Strong theories

The stronger a mathematical theory is, the more recursive functions it can prove to be total. So something based on ZF set theory will fit best.

There's an article by Mike Gordon and Sten Agerholm, Experiments with ZF set theory in HOL and Isabelle, that discusses some issues around the choice between set theory and type theory. The most successful tools in formal methods are based on type theory.

There was a discussion about sets vs. types a while back, in the What is the dual of { } ? thread.

Postscript — It's maybe not clear what I had in mind: programs are something along the lines of pairs consisting of, say, a PCF program and a machine-checkable ZFC proof that all unbounded recursions in the program terminate.

Re: Strong Theories

Is there a nice example of a general recursive definition that ZF set theory can prove total that HOL cannot? Bonus points, of course, for examples that are simple and/or more likely to be familiar to a programming audience. ;-)

Except for things that explicitly do not terminate (e.g. processes), one could argue that most functions that programmers care about are primitive recursive, at least in principle. Moreover, for the few that aren't (like Ackermann's function), termination tends not to be difficult to establish. So a simple, widely familiar example seems unlikely to exist.

But this might be a cop-out because it is often simpler and more efficient to describe a function using general recursion. If you need set theory to prove the equivalence, you can't really say that HOL handles your definition, even if it handles the function it denotes.

Assuming that the Collatz function is total, do you have any wild guesses which camp it would fall into? Will HOL be sufficient for a proof, or do you suspect that ZF(C) will be necessary?

Boolean relation theory

I'm not sure exactly what the logical strength of HOL is: it's stronger than second-order arithmetic, but not much, I think.

Harvey Friedman has done pioneering work on finding mathematical phenomena that can't be grounded in the axioms of ZFC. My favourite is his work on Boolean relation theory; he has constructed a fast-growing arithmetic function using it that needs ZFC plus infinitely many Mahlo universes to be proven total. So there ZFC isn't strong enough, you need large cardinal axioms.

Collatz problem: if it converges, that can be finitely demonstrated; no special logical strength is needed to show it. My uninformed guess is that it is hard in a combinatorial way, not an analytical way: if it diverges, then that is a theorem of PA, but maybe one that can't be formalised in a nice way with the usual axioms of PA.

Language with self-evaluation:

I have read (somewhere) that a total language L cannot write a program that evaluates L. Is this true? It's been a long time...

However, one can easily have a total language that automatically stratifies self-evaluation. This would (for example) allow self-evaluation minus the 'eval' statement after some finite number of "stages".

One doesn't need to know the number of stages, only that there will be finitely many. Still, that seems to be one of those things that is easy to enforce and difficult to prove without enforcing.

Story

Thank you, Charles.

limits of non-TC

The reason L cannot evaluate programs written in L is that such a Hofstadter 'strange loop' would be Turing complete, and then the Halting Problem says you have undetectable non-halting inputs. The ability to self-evaluate is the basis of Turing's paper. This means that there is no upper bound: for any total functional language L guaranteed to terminate, there exists some language L' which is also total and can evaluate L, though it can't evaluate L'. There is a lower bound: a non-functioning system can evaluate only the function f({}) -> {}, taking no inputs and producing no outputs.

However, computers are not Turing equivalent either; they are bounded automata, Turing machines with a finite tape. It likely isn't difficult to work out how to create a total functional language sized such that no implementation of it will fit on a given machine along with your compiler: just ratchet up the self-eval count until the program size exceeds half of the usable memory capacity. Then every valid program you can write on the machine will terminate.

upper bound?

Intuitively, every program that terminates does so in a finite number of steps. Let's have a simple procedural language without recursion, with if/else and with a bounded loop construct (loop (N)). Then every program in this language will terminate. I assume finite input, because infinite input makes no sense here.

The interpreter's input is the program (finite) and the program's input (finite). If the interpreter can estimate, in a finite number of steps, an upper bound on the number of steps the program will take on its input, then it can also interpret the program on its input. If the program consists of n (known) statements, the top-level loop will be O(n). The upper bound of every expression's outcome can be estimated in a finite number of steps depending on the (known) size of the expression's input. So for each loop(n) the upper bound for n can be estimated. For nested loops the ns are multiplied; for sequential ones the ns are added. So I think the interpreter can be written in this language.

I don't know whether this language can express every program that terminates on finite input. But I think it can. Then I don't know where this is flawed.
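
A sketch of that compositional estimate in Python, over a toy tuple representation of programs (all illustrative; loop bounds are taken as literal constants for simplicity):

def bound(program):
    # Upper bound on the number of steps a bounded-loop program can take.
    kind = program[0]
    if kind == "stmt":                    # a primitive statement
        return 1
    if kind == "seq":                     # ("seq", p1, p2): bounds add
        return bound(program[1]) + bound(program[2])
    if kind == "if":                      # ("if", then_p, else_p): take the max
        return 1 + max(bound(program[1]), bound(program[2]))
    if kind == "loop":                    # ("loop", n, body): bounds multiply
        return program[1] * bound(program[2])

# e.g. bound(("loop", 3, ("seq", ("stmt",), ("stmt",)))) == 6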

Flawed, 2nd try

The evaluator needs some kind of controlled recursion, and it is not possible to express the Ackermann function in this language.

I still have a hard time believing it is not possible to construct a language of a subset of total functions closed under evaluation.

Let's have a functional language with statically verifiable decreasing recursion. In this language, if a function calls another function then it contains its definition, or it is a recursive call, distinguished by a keyword Rn for any n. So if f calls g, it contains g's definition, and if g calls f, then it does so using R1. This way both the code and the data are finite trees. In analysing a function f(p1, .., pn), it must be possible to statically verify that for any recursive call Rn(q1, .., qn), the condition q1 < p1 || (q1 == p1 && (q2 < p2 || ..)) holds, where q < p if q is part of p (p was decomposed to q in the code) or, if both are data D(p1, .., pn) and D(q1, .., qn), the above/similar inequality holds. It is possible to express and verify the Ackermann function. I am not really sure it is possible to express and verify the eval function yet.
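
As a side note, the lexicographic condition itself is easy to state in code. A sketch in Python, where smaller(q, p) is an assumed stand-in for the "q is a part of p" relation:

def decreasing(qs, ps, smaller):
    # True iff q1 < p1 || (q1 == p1 && (q2 < p2 || ...)) holds for the
    # recursive call's arguments qs against the parameters ps.
    for q, p in zip(qs, ps):
        if smaller(q, p):
            return True
        if q != p:
            return False    # neither smaller nor equal: reject the call
    return False            # all arguments equal: no decrease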

But if we assume it is possible then we can try to construct the evil function, which would be something like:

def evil(p) =
{
    def eval(f, p) = ... // definition of eval
    S(eval(p, p))
}

Now we have the code of evil (its apple tree) and we can try to compile the evil(evil) program. Clearly the interpretation at some point makes the recursive call eval(evil, evil), which is not decreasing. I don't know yet whether it is possible to verify statically that evil(evil) is not well-formed, or whether it is just not possible to statically verify that evil(evil) is well-formed.

It is not very clearly defined and there are some uncertainties, perhaps someone can elaborate.

Basics: arithmetics and TMs

Marc wrote: I think it is legitimate and worthwhile to try to offer simplified explanations and point people in the right direction without having to recapitulate the history of computability and PLT.

I'm afraid the following, if it is simplified, isn't simple, but I hope it might still be useful. At least I've made an effort to explain myself.

In order to know that Turing machine (TM) e coding a function N→N terminates on input n, you need to have a bound on how many steps e(n) will take before stopping. That is, you need to know a bounding number. In the Kleene hierarchy, that is a Σ1 fact about the (TM, input) pair. If a TM doesn't terminate on the given input, then that is a Π1 fact about the (TM, input) pair.

In order to know that a TM terminates on all inputs, you need to be able to provide a bound for each input, and there might be no constant bound. That means you need to know a bounding function. In the Kleene hierarchy, that is a Π2 fact about the TM. If a given TM doesn't terminate on all inputs, then that is a Σ2 fact about the TM.

For arithmetic theories, which here I take to be theories of arithmetic that are consistent and incomplete in the Gödelian sense, the consistency sentence of the theory will be a Π1 fact.

Arithmetic theories can prove theorems in the Kleene hierarchy. In particular:

  • There is a set of Π1 sentences which tells you which other theories of arithmetic the theory can prove consistent. The size of this set is a good measure of how strong the theory of arithmetic is, and in practice it gives a total order on reasonable arithmetic theories. Call it the consistency strength;
  • There is a set of Π2 theorems of any arithmetic which tells you which TMs the theory can show terminate on all inputs. The size of this set, which we call the termination set, is another measure of the strength of the theory, and it normally gives the same relation as the first measure, for good metamathematical reasons.

More facts:

  • If there is a TM that really terminates on all inputs and a given arithmetic can't prove it, you can get a stronger arithmetic by adding this fact to the axioms of the arithmetic;
  • If you add such an axiom for a TM that doesn't in fact terminate on all inputs, then the new arithmetic will be inconsistent;
  • Out of any consistent arithmetic you can construct, by diagonalisation, a terminating TM that the arithmetic can't prove total.

This tells you that no consistent theory of arithmetic contains all terminating TMs in its termination set, but the infinite tower of arithmetics of higher and higher consistency/termination strength covers all of the terminating TMs.

It's a short step from here to see that there can be no language with a decidable inclusion criterion that contains all and only the terminating functions. And also, I hope, to see why I emphasise the importance of consistency strength in determining the strength of a "semantics" of termination.

This is all not constructive, of course, but it is possible to give a quite similar constructive story to parallel it.

C on the Chomsky hierarchy

Perhaps it is interesting that any implementation of C can reasonably be classed in the Chomsky hierarchy as a regular expression language - a simple finite-state automaton language. I'm not aware of any "interesting" programs that can't be written in C in spite of that limitation.

It seems to me that, pragmatically speaking, when we talk about languages being "Turing Complete" we don't really mean it[*]. We're talking about some other property that has a pragmatic resemblance to Turing Completeness.

-t

[*] Formally, some Lisp and functional languages are probably Turing complete, but I'm not aware of that formal distinction ever being used for any practical gain.

"reasonably"

Given that there are simple 5 line C programs that correspond to regular expressions that won't fit on your hard disk, I think 'technically' might be more appropriate than 'reasonably'.

re "reasonably" v. "technically"

We catch each other's drift and have only terminological differences. I'm happy enough with either terminology for this.

-t

C is not a regular language

Are you saying that C's syntax can be reduced to a regular expression, or are you saying that programs written in C are no more powerful than regular expressions? Neither one of those is true. Regular expressions are powerful enough to turn a C program into a stream of tokens, but not powerful enough to parse that token stream.

Virtually all general purpose programming languages are Turing complete in the same way; there is no difference between LISP, C, Java, ML or Haskell in this respect. These languages are all dramatically more powerful than regular languages. The only programming languages that aren't Turing complete are things like SQL, or recent functional languages that only allow total functions.

Total functional programming

The only programming languages that aren't Turing complete are things like SQL, or recent functional languages that only allow total functions.

Although a quick search turned up some discussion of the idea of total functional programming, the only specific language I saw mentioned was Charity. Are there others?

Coq (the most widely used),

Coq (the most widely used), Agda (the most like other FP languages).

C is a regular language

I mean that programs in C are no more powerful than regular expressions. The reason this is true is that each implementation of C defines a (very large) fixed amount of mutable state available to programs. Depending on how you count, that state is either the size of directly addressable memory or, if you include files and basic I/O, the addressable set of file space. Either way, it's finite. C programs are finite state machines, by definition.

That's a "toy" observation but the real question is whether or not language design could more usefully reflect that finiteness. The empirical evidence that all "interesting" (in a pragmatic sense) programs can be expressed in a finite state language - perhaps there are high level ways to exploit that fact in language design.

-t

The reason that this is true

The reason that this is true is that each implementation of C defines a (very large) fixed amount of mutable state available to programs.

This doesn't seem right. Isn't sizeof(void *) implementation defined according to the spec? In which case it is not the C language that places any restrictions on pointers or addressable memory, it is the compiler and the machine on which the program runs.

the true reason is true

Tricky but, there's an answer: what type does "sizeof (void *)" have?

As I said, each particular implementation of C is a regular language. The formal definition of C could be regarded as a countably infinite set of regular languages - each implementation is a regular language, though.

-t

Tricky but, there's an

Tricky but, there's an answer: what type does "sizeof (void *)" have?

I believe it's size_t, which is often "int", and also implementation defined. All primitive types are implementation defined and opaque, in every language. I'm not sure how C is different here.

Maybe I'm missing something, but are you saying that C is not Turing complete simply because the effectively opaque address type necessarily has a fixed size? Is this not a limitation of every language in existence?

We are none of us running real Turing machines since we have physical limitations, but the state space is large enough even for 32-bit programs that they might as well be, and exhaustive analysis of arbitrary programs is intractable.

C's trick (re "Tricky but...")

C's trick is to be explicit and reflective about the finiteness of its state. You literally cannot write a valid C program that relies on infinite state (unless you make non-standard assumptions about the environment). In contrast, something like Scheme admits that implementations MAY limit the amount of consing that you do but also admits implementations with no limit. Scheme is formally Turing complete but C is formally regular.

-t

Even on the call stack?

I ask this question out of ignorance of the details of the C standard, it's not rhetorical:

How does C (the language, not implementations) limit the size of the call stack? What prevents you from getting infinite state there?

call stack limit

You can take the address of variables on the stack. There are only a finite number of addresses to go around, in C.

-t

True, but

That's true, but you can't do so dynamically, can you? So if a function doesn't take the address of any of its locals, and doesn't call alloca (and doesn't call anything that does.., for definiteness let's say it only calls itself), what in the language would prevent unbounded stack growth?

I certainly grant that it is possible to observe the finite address space of the stack in C, but I'm curious how it is possible to rule out unbounded use of memory on the stack if programs are not required to make such an observation.

re "True, but" (very interesting)

Off the top of my head you might very well be correct. C might be a PDA language (Chomsky-2) rather than a finite-automaton language, in that formal sense you describe. Still well short of Turing complete....

-t

By this reasoning...

C doesn't specify what happens when you dereference null, either, so perhaps you'll find Turing completeness there. ;)

I'm still not sure what the point of all of this is, if not to establish the practical irrelevance of the Chomsky hierarchy for those unwilling to play along and accept 2^(2^32) as reasonable approximation of infinity here.

why it (perhaps) matters

I'm still not sure what the point of all of this is, if not to establish the practical irrelevance of the Chomsky hierarchy for those unwilling to play along and accept 2^(2^32) as reasonable approximation of infinity here.

Three points:

First, while I concede that the addressable space of standard files is large enough to "count as infinite", the addressable space within an individual file or within memory is often palpably finite. Were it otherwise, interpreters written in C would not have garbage collectors; sort(1) would be a far simpler program; genomics programming would be vastly easier than it is; etc. C is often used as an explicitly finite language.

Second, the topic post asked (in vain) for the "most powerful" language in which all programs terminate. My observation about C is two-fold: (1) C is (essentially) a regular language language; (2) all interesting programs can be written in C. There is clearly some sense in which we don't need anything more powerful than regular languages (though sometimes we want Turing completeness, see point three).

Therefore, if guaranteed termination is the goal, instead of "most powerful" the question should really be "what makes a good PL design in which all programs are terminating finite automata?"

The replacement question has the virtue that positive answers can be proposed. :-)

Third, I see two distinct "use cases" for language design. On the one hand, if we're making a language for, say, interactive math exploration - formal Turing completeness seems like a win. It's a good fit for the domain. On the other hand, if I'm writing a word processor or grep or an OS kernel or an implementation of the math language, resource limits and (in many situations) guaranteed termination are suddenly a lot more important. Let's call that second class "systems programming".

C is a systems programming language whose design is essentially a Turing-complete language, with lots of explicit limits tacked on until it becomes a regular language language (or perhaps, formally and accidentally, an (N)PDFA language language). That leaves the C programmer with a lot of burden recognizing when a piece of code might accidentally exhaust some finite resource. I wonder if there isn't the possibility of a systems language as effective as C, but that handles its finiteness more helpfully.

For example, perhaps recursive functions are the wrong abstraction to be using. We might instead have first-class finite automata which can interact with one another. Closures wouldn't make sense in that context but perhaps there is an effective abstraction that involves closed over automata-with-free-variables. Loops as we know them might be needless as they could be replaced by circular transition graphs. That kind of thing.

-t

(1) C is (essentially) a

(1) C is (essentially) a regular language language;

This is wrong! You can create a C program that reads an infinite-size word (getchar in a loop). A regular language contains only finite-size words. So C can handle more than regular languages.

there's a lot of confusion about "regular"

Claudio, I hope you don't take me to be insulting you. I just notice that in a lot of comments here, not just yours, people are very confused about what "regular" means.

It is entirely possible to supply input of unbounded length to a finite state automaton. If you will, picture an automaton with one state, a final state, which upon every new input character transitions to that same singular state. That machine is a finite automaton. It recognizes a regular language. It basically is getchar in a loop.
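
In Python rather than C, a sketch of that machine (purely illustrative):

def one_state_automaton(stream):
    # One state, which is also the final state; every input character
    # transitions back to it. The input may be unboundedly long, but the
    # state space is fixed in advance - this recognizes a regular
    # language (namely, everything).
    state = "s0"
    for _ in stream:        # "getchar in a loop"
        state = "s0"
    return True             # always ends in the final state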

Maybe someone can recommend a good book on the topic. I have what I think is a pretty great book on the topic but it is in the form of long-ago xeroxed pages of a particularly lucid professor I had back in college - it's not something I can easily share or point you towards.

The key thing is not that input or output is finite but that the state of the program itself is both finite and fixed in advance of any input being provided. That limits what kinds of computations a program can perform in a very specific way known as "regularity".

If you take a regular machine but give it a way to read and write some source of infinite storage, you leapfrog all the way up to Turing complete. In between there are finer distinctions to be drawn.

-t

Terminology

OK, I think I understand what you mean, but my problem is that your terminology is unusual. In the wikipedia:

a regular language is a formal language (i.e., a possibly infinite set of finite sequences of symbols from a finite alphabet) that satisfies the following equivalent properties:
* it can be accepted by a deterministic finite state machine
..
* it can be described by a formal regular expression. Note that the "regular expression" features provided with many programming languages are augmented with features that make them capable of recognizing languages which are not regular, and are therefore not strictly equivalent to formal regular expressions.
..

For me, "C is a regular language" sounds like "I can rewrite every C program as a formal regular expression", and since formal regular expression do not have infinite loops, this is false.

[edit. just to confirm, you mean that C programs can be represented by finite automata with input and output streams and possibly no final state? ]

answer to "Terminology"

just to confirm, you mean that C programs can be represented by finite automata with input and output streams and possible no final state?

Yes. Now, just to complicate things.... as someone called me on in another message in these threads, my account doesn't cover the possibility of a C implementation that admits an infinite call stack so long as the program running on it doesn't produce an observably unbounded number of addresses of stack variables. Formally, then, one could argue that C is a language in which programs are PDFAs or NPDFAs rather than (N)FSAs. I think that that's a good observation, and it appears to be true, but it also appears to be an accident. Suppose we imagine, say, the C99-spec with an explicit limit on stack depth. That would be hard to formulate, especially in the language typical of C standards, but suppose we could. I don't think anyone would regard the language as being changed so much as clarified, so I think the apparent formal (N)PDFA-ness is not important - it's a programming language for writing "regular" (chomsky-0) programs.

I suppose I am using terminology in a slightly non-standard way but also in a way that is widely recognized and true. The customary definitions of regular languages in terms of "finite strings" and so forth are really just bad but accepted notational conventions.

-t

Call the men in white

I almost certainly need my head examined for wading back into this thread (hence the title), but if I take my previous statement that it is useful to clarify misunderstanding seriously, I guess I have to, so here goes...

it's a programming language for writing "regular" (chomsky-0) programs

Chomsky-0 is normally used to describe unrestricted grammars, i.e. Turing-complete, so I'm not sure if you made a typo for Chomsky-3 or if you are using "regular" to mean ordinary.

Since you otherwise seem to be arguing for the weakness of C in the hierarchy, I'll assume the former.

I think there is a misunderstanding here based around the term "finite state" in FSA, which might be better described as "no memory" automata.

A regular grammar can generate infinite strings, e.g. the longest string generated by "A -> aA". The thing that makes it regular is that it can only branch in one direction. (Usually on the right, but left linear grammars are also possible).

To say that C is equivalent in power to a regular grammar is to say that it couldn't output strings with matching nested braces, for example, which clearly it can.

C is Turing powerful without having to rely on infinite resources to do so.

(((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((...

To say that C is equivalent in power to a regular grammar is to say that it couldn't output strings with matching nested braces, for example, which clearly it can.

But if memory bytes are addressable with 32-bit pointers, and every value is addressable, then you can "only" at most write a C program that matches 2^(8 * 2^32) parentheses. This is what one of the other branches of this thread has been about - finite memory implies finitely many states, but so absurdly much state that I still don't see much value in viewing it as such.

At infinity, everything is possible

then you can "only" at most write a C program that matches 2^(8 * 2^32) parentheses

That is mostly a moot point. (Or perhaps a sophistry in our earlier terminology. ;-) )

A productive way to think about this is to realize that the Chomsky hierarchy is as much a hierarchy of complexity (in a related but slightly different sense than P, NP, etc.) as it is of computability.

The different levels are restrictions on the kinds of patterns that can be generated (or recognized) at each level, or, if you prefer, the degree to which the different patterns can be compressed into certain kinds of description.

Mostly agreed

I agree with the spirit of your comment, but I would say that the point is technically correct and that a finite memory machine is regular in the Chomsky hierarchy. Moreover, moot or not, I believe this is exactly the point at the center of Tom's position in this thread. Similar reasoning shows that every C program is either non-terminating or O(N). The point is that when the constants hidden in these abstractions become this obscenely large, the abstractions cease being useful approximations for what's going on.

O(N) or O(1)?

If the finite state argument holds (I'm not sure it does, for the reasons discussed up thread about the possibly-infinite nature of the call stack in the absence of certain primitives), then wouldn't every C program be either non-terminating or O(1)? It seems to me that it has to finish in a constant maximum number of steps or repeat.

O(N)

I was assuming the model that I believe Tom was advancing, where a C program can read from an input source.

It probably doesn't make sense to ask whether a C program is formally an FSM, because C programs don't fit into the framework in which FSMs are usually defined. Here's a variant of FSMs for which the correspondence is stronger (though still not formal):

Divide FSM states into two kinds: those that transition to other states based on input and those that have only a single epsilon transition to another state. The former model reading from the input device while the latter model computation steps. This type of FSM can get stuck in an endless computation loop, corresponding to non-termination.
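
A sketch of this FSM variant in Python (the "halt" kind of state is my addition so that runs can return a result; everything here is illustrative):

def run_fsm(states, start, stream):
    # states maps a name to ("read", {char: next}), ("step", next), or
    # ("halt", result). "read" states consume one input character;
    # "step" states are epsilon moves modeling computation steps, and a
    # cycle of them models non-termination.
    chars = iter(stream)
    state = start
    while True:
        kind, arg = states[state]
        if kind == "read":
            state = arg[next(chars)]
        elif kind == "step":
            state = arg
        else:
            return arg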

Edit: Oops, I read your post too hastily - I thought you were referring to the concern about FSMs not being able to represent non-termination. I don't really get the concern about the C language standard possibly requiring an infinite call stack. If the standard is written that way, then there has never been a conforming C implementation, and I'd consider it a bug in the standard.

All f:N->N are total and O(1) with infinite memory

To back up Douglas, and to show how the kind of logic some have been using in this thread goes astray, consider a system with infinite memory.

Let f be an arbitrary function from N->N. Since we have infinite memory, we can just construct the graph of f (i.e. the set of domain->range pairs) in memory. As a result, for any input n, f is defined, since n itself is finite, and can be reached in finite steps, and hence f is total.

If there is no computational cost to the lookup, the range is available in one step for any n, since you just have to take the right member of the pair, so f is O(1).

If you include the cost of lookup it is O(N), since you just iterate through the domain until you get to the n you want.

So all f:N->N are total and O(1) with infinite memory. QED

Hopefully, everyone will see that this proof is absurd. What is missing is this: where did the range values come from? For any given f, we need to compute them somehow, and to really be a given f, say the digits of π rather than the digits of e, they have to follow some meaningful pattern. (In a more technical way, I believe this is the point that Charles was making earlier.)

So yes, if you restrict yourself to finite memory and are allowed to pre-compute all the results into one big dumb lookup table, all programs are Chomsky-3.

But that would be kinda cheating, no?

Proof is valid

Hopefully, everyone will see that this proof is absurd.

No, the proof is valid. The absurdity would be attempting to apply this proposition you've proved, which has the hypothesis "allowed to pre-compute all the results into one big dumb lookup table", to any real machine.

Pre-computing results really is an effective way to get O(1) responses in the real world, but leaves open the question of how you precompute the results to begin with. Requiring an infinite precomputed table is going to be problematic in building a real machine. Note: "real world" Turing machines do not actually require an infinite tape - at any given step they only require a finite tape. A Turing machine would only ever use an unbounded amount of tape if it was diverging.

The real question I have is why you think this is analogous to the type of reasoning I've employed elsewhere in this thread. Here you've presented a proposition that, while true, doesn't apply to the situation at hand.

O(N) or O(1) & the f:N->N objection

I think that these claims are formalizable (if we assume that the "possibility of infinite stack depth" is unintended and not really true):

Between every pair of input or output steps, a C program is only capable of performing a computation with a theoretical complexity of O(1). In that sense, every C program is formally O(N) in the number of inputs and outputs it produces.

However, that's obviously not a very useful measure of complexity, and that's where Marc Hamann's point comes in. In practice, the typical state machine that defines a C program's behavior between input and output steps is vast - in practice its construction costs matter. In the general case, the number of states is sufficiently large that completely compiling the program to an O(1) form is impossible, and so the automaton is lazily constructed and evaluated during execution. That's analogous to, but not the same as, Hamann's notion of a RAM machine with infinite memory that contains a pre-computed look-up table: it's the cost of initializing the memory that is of interest. Correspondingly, although parts of C programs that don't do I/O could be regarded as formally O(1), the run-time we talk about is in essence the construction time for the relevant parts of the state machine.

-t

The other analogy

Ok, thanks, I see the analogy that Marc was making now.

However, that's obviously not a very useful measure of complexity

I brought up the Big-Oh result precisely because it's another example of a technically true result that's not very useful. Do you not concede that every C program is regular in the same way that every C program is O(N), and that neither of these observations is particularly useful for analyzing C programs?

finite languages and complexity

Do you not concede that every C program is regular in the same way that every C program is O(N), and that neither of these observations is particularly useful for analyzing C programs?

Yes.

However, I'm not sure that that isn't a useful observation for the design of some future, regular, systems programming language. (I'm not sure it is, either: I'm agnostic about it.)

-t

Manifesto

I would say that the point is technically correct and that a finite memory machine is regular in the Chomsky hierarchy.

There is an old saying: "If my grandfather was a woman, he'd be my grandmother."

I think that the logical content of this sentence is true in exactly the same way as your statement is true.

This is not just a matter of "practical" vs. "theoretical". Even as a theory, computation is a strongly constructive logic, and most of the misunderstandings in this thread come down to using some other more permissive logic to reason about computation.

My experience is that pure mathematicians aren't very happy when you try to enforce the limitations of computational reasoning on them, which is their right. However, I think it is equally reasonable to require, in a forum such as LtU dedicated to the language aspects of computation, that we stick to a computational form of reasoning.

And in that framework, many of the "technically correct" arguments that have arisen in this thread are constructed without giving due consideration to the substantive content of computational theory.

I will grant that it can be fun to play around with these ideas (hey, who doesn't like a nice uncountable infinity now and again?), but as someone else noticed, LtU still has a reputation to uphold. ;-)

Shrug

I don't quite know what to make of your post. It seems like it would be a more appropriate charge if my role in this conversation had been to initiate this argument about C programs being regular. But I didn't. So what response should be given to an argument advanced that is "technically correct", but doesn't "give due consideration to the substantive content of computational theory"? My opinion (perhaps I am biased based on my math background) is that you acknowledge the point as technically correct and provide the extra context. I'm personally not an expert on complexity theory, but I'd love for someone to jump in with results about the computational complexity of solving the halting problem for space-bounded Turing machines. I'd agree that such results are more appropriate for analyzing C programs.

If we have a reputation to uphold, surely we do that by getting the details right as well as providing context.

Open hand

My opinion (perhaps I am biased based on my math background) is that you acknowledge the point as technically correct and provide the extra context.

No. If you accept the strictures of computation within which most of these concepts were defined, these arguments are false, plain and simple.

If you decide to translate them into a logical system that allows you to leap over computational steps, or ignore computational intent so that they are formally true there, well you are welcome to do so, but I would still consider it incorrect within the context of computation (and by extension LtU).

To me it would be akin to translating from biology to quantum mechanics: the heart is the same as the lungs because they are both made up of quarks. This is "true" in a system which allows these mappings, but the biologist won't be too impressed, since all the meaningful structure of his system has been stripped away to allow the mapping.

There is nothing personal in this: this stuff is tricky to keep straight and sometimes subtle. But I think it is important enough to be worth going over.

What strictures?

If you accept the strictures of computation within which most of these concepts were defined, these arguments are false, plain and simple.

Ok, I disagree with you here, then. For every finite memory machine accepting a language L, there exists a finite state machine accepting the same language. The converse is also true. I'd need to define "finite memory machine" to make this precise, but this result is then very much in the spirit of the works of Turing and Chomsky.

At a meta-level, I don't know what logical system you're talking about. The language underlying formal computer science is mathematics. We may occasionally study logics where only certain types of reasoning are allowed, but those assumptions get put into the hypotheses of the theorems we prove - they don't restrict how we prove things.

CS is not (all of) mathematics

this result is then very much in the spirit of the works of Turing and Chomsky

I don't think I can agree on either point.

Turing proved that not all f:N->N are total (computable); indeed, most are uncomputable.

Chomsky was ultimately interested in studying formal systems to understand the structure of human language, which he started by assuming is finitely generated, not looked up.

The language underlying formal computer science is mathematics. We may occasionally study logics where only certain types of reasoning are allowed, but those assumptions get put into the hypotheses of the theorems we prove - they don't restrict how we prove things.

I think this is a very common point of view (that there is only one monolithic and true "mathematics" and "mathematical proofs"), since it is how we are taught to think about math from elementary school up, and CS is often introduced as an extension of this ideology.

After studying proof theory and the foundations of mathematics, as well as computation, for many years, though, I disagree with both these statements.

I think CS is best described with a restricted form of the language of mathematics, with computation itself (in different guises perhaps) as the model and proof theory.

You're welcome to disagree with me, but you will end up with a whole bunch of "valid theorems" (like the one I gave that I think is computationally absurd) that don't have any sensible model in computer science.

CS is as CS does

Turing proved that not all f:N->N are total (computable); indeed, most are uncomputable.

1. How would he have proved that if he'd only ever considered computable things?

2. There's nothing infinitary about the proof of the result I just mentioned and it would be very easy to carry out the conversion from finite memory machine to finite state machine in code. The difficulty is only the exponential blow-up in size.
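
For instance, a sketch of that conversion (nothing here is from the thread; the transition function is made up, and the point is only to exhibit the 2^N blow-up):

  /* A machine with N bits of memory becomes an FSM whose states are
     the 2^N memory configurations. step() is a toy stand-in for one
     machine cycle. */
  #include <stdint.h>
  #include <stdio.h>

  #define MEM_BITS 3                        /* 2^3 = 8 FSM states */

  static uint32_t step(uint32_t state, int input)
  {
      /* any deterministic update of the memory bits would do */
      return (2 * state + input) & ((1u << MEM_BITS) - 1);
  }

  int main(void)
  {
      /* tabulate the full FSM transition table: states x inputs */
      for (uint32_t s = 0; s < (1u << MEM_BITS); s++)
          for (int in = 0; in <= 1; in++)
              printf("state %u --%d--> %u\n",
                     (unsigned)s, in, (unsigned)step(s, in));
      return 0;
  }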

I think we're rapidly approaching (or maybe long past) just-drop-it time, though. I would disagree that any of the reasoning in this thread is out of scope for LtU, though it's certainly been excessively belabored.

In practice, this idea works in theory

When it is reasonable to approximate a number as infinite certainly depends on the context, and the amount of memory available today certainly does not seem infinite for many practical problems. But the number of states in a modern computer system is 2 to the power of the number of bits in the system state (vastly more states than there are electrons in the known universe). So, from the point of view of considering a C program as representing a finite state machine, the number of states is effectively infinite.

I see this reasoning as a bad hybrid of theory and practice: In practice, machines have finitely many states, so we can always treat them as finite state machines - in theory.

how do you explain practice, then?

I don't follow you because the C programs I see most often call 'free()' or contain garbage collectors, or struggle to build the largest useful hash table that can still fit in addressable memory and other such things. Absolutely none of that would be necessary if the state space were so large as to be effectively infinite.

In practice, much of the best systems code I know is explicitly designed and built as finite state machines both to make the program easier to analyze and to best manage finite resources. I'm not sure any interesting systems program exists which can't be very nicely implemented in that kind of style.

I think your confusion comes from too narrowly understanding what comprises the "state machine" of a C program. I do not mean only the instructions generated by the compiler - I mean the state of the heap and standard files, too. The way C is designed, the heap might sometimes feel like "infinite memory", but it's often used in the contrary spirit of a lazily constructed finite state machine, one regular enough in structure to be described by an even simpler state machine - the instructions generated by the compiler.

-t

In practice, C programs aren't FSMs

I'm explicitly not arguing that a C program has access to virtually infinite memory -- obviously this is not the case for many practical programs. My point is just that the number of states the machine can be in (viewing all of those bits as defining one gigantic number) is so large that it doesn't make sense to view the system as an FSM.

That system code you've worked with is, almost assuredly, not defined as a simple finite state machine consisting of only simple states and transitions. Each state will be parameterized by some amount of heap data, which will be manipulated by the state machine controller core and used to select transitions. All of this extra parameterization multiplies together to yield a very large number of actual implicitly defined system states, which is what makes viewing the system as an FSM impractical. I suppose I'm accusing you of missing the very point you just accused me of missing :).

In practice, many (especially good) C programs *are* FSMs

Let's suppose, for example, that I write a version of unix 'sort'. For "short enough" files it reads the whole file into the heap, sorts it, writes it back out. For longer inputs, it generates temporary files along the way, merging them to produce the final output.

In a very solid implementation, the program is very careful to only allocate up to some limit of heap space (rather than crashing on a failed allocation). It's even careful about limits on things like the number of concurrently opened files and so forth.
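
A minimal sketch of that "careful about limits" allocation pattern (names and the limit are hypothetical, and frees aren't tracked in this toy):

  #include <stdio.h>
  #include <stdlib.h>

  #define HEAP_LIMIT (1u << 20)   /* hard ceiling: ~1 MiB of working space */

  static size_t used = 0;

  static void *bounded_alloc(size_t n)
  {
      if (used + n > HEAP_LIMIT)
          return NULL;            /* caller must spill to a temp file */
      used += n;
      return malloc(n);
  }

  int main(void)
  {
      char *buf = bounded_alloc(512);
      if (buf == NULL) {
          fputs("over budget: switch to external merge\n", stderr);
          return 1;
      }
      free(buf);
      return 0;
  }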

There are, in theory, some input streams "too large to sort": in the abstract sense, the algorithms used by this version of sort formally constitute an automaton - but that's less interesting than the mental model the author has of the code.

The mental model for code like that is very much a finite automata, albeit one with some uniquely characteristic regularity to its structure. The "state" of the program can be regarded as the compiled code plus the state of the heap plus the state of any temporary files. The transitions among all possible states are defined, of course, by just the compiled code part.

You can regard the C source code as a kind of macro language for specifying the full "digraph" FSA and you can regard the execution model as lazily constructing portions of the digraph as needed.

When I've written programs very similar to this description of sort, in genomics especially, that is how I thought about them: as a kind of "meta-FSA" in the C code interpreting the running FSA over the finite capacity of the heap and allotted files. When I read other shell utilities, network stack code, the most interesting network services, and so forth, being in that same mindset very often pays off. I don't think we usually bother to spell things out in quite this way, but I'm certain a lot of the best C programs are designed from this perspective.

Another great example of a regular-language language is the ol' HP-41 series.

-t

That helps, but

Thanks, that explanation helps me better understand the other comments you've made in this thread. However, it's not clear to me how much of the simplicity of analyzing FSMs carries over to analyzing the "lazily generated FSM" structure you've described. I'm pessimistic. If the "lazy FSM" simply factors into a small control FSM and some tag-along data, then I'm sure you can get quite far using standard analyses. But as the interaction between code and state gets more intricate, I think it will get harder to usefully apply techniques from FSM theory (for example, in deciding non-termination). In the case of an arbitrary C program, I'd wager the problem is intractable, and thus I'm still unconvinced that viewing a general C program as an FSM is a useful thing to do.

For some purposes, they are

Model checking of programs relies on finite state. Compare with SPIN and the specification language Promela: for each piece of data, the number of bits needed to represent it is explicitly specified (typically some number that admits all interesting executions). While Promela is not C, the same principle applies.

IIRC, the set of explored states is not always represented explicitly; instead, a Bloom filter is used to record visited states during the exploration.

State Space

"State Space" is the number of states a "state machine" would require. If you have N bits of state, then you have a (potential) state-space of size 2^N, minus a lot for illegal or invalid states. This means it could require a finite state machine of size up to 2^N to describe the whole state-space.

Simple systems - especially configurations - can often be built as finite-state machines... but only at a "high level". I don't think you can find many interesting systems programs that can be implemented in a purely finite state machine style, especially not when issues such as program composition and modularity get involved.

abstractions (re state space)

For very complicated and large finite state machines, like a unix sort program, evidently you need an abstraction mechanism and lazy instantiation of the FSM. C serves that role, and my suggestion in this multi-party conversation is that the role might be served better - but in any event, FSMs are sufficient.

-t

I mean that programs in C are

I mean that programs in C are no more powerful than regular expressions.

C programs are certainly more powerful than regular expressions. Regular expressions - even of the non-deterministic sort - are guaranteed to terminate on finite input. C programs are not.

The fact that we have finite space resources for our C programs does not restrict the programs from consuming infinite time.

C is technically more powerful than a Turing Machine. C can involve itself in asynchronous non-deterministic input and output, which is something a Turing Machine cannot do.

regular languages and termination

You are assuming too many needless details in your formalization. Think about it more simply:

A finite state machine, deterministic or not, is a directed acyclic graph of states that consumes an input tape and can, upon arrival in designated states (or upon the taking of certain transitions), emit a fixed output. You can build one that loops forever. You can run one on an infinite input tape.... it doesn't matter, it's all Chomsky-0.

-t

... those needless, petty details

You are assuming too many needless details in your formalization.

Those petty details, the ones other people feel are standing in the way of your pithy ideas, will just go away if you ignore them, right? Right?

I'm of the impression that such distinctions as between "non-terminating computation on finite input" vs. "non-terminating computation on infinite input" are, in fact, relevant in a discussion of the Chomsky hierarchy.

And you now say 'Chomsky-0' (which is the Turing Complete level) whereas before you were saying 'regular languages' (Chomsky-3). Ah... but why bother with details?

non-petty details

You insult me and then go on to be either meaningless or wrong depending on how one interprets whatever you mean by "relevant". I can't help you, sorry.

-t

I apologize...

I lashed out in irritation.

Your earlier claim that C is no more powerful than regular expressions is, even in the presence of finite state, incorrect by any formal analysis. Regular languages are no more powerful than NFA and will always terminate with a fixed set of reachable states after any finite input. You cannot make such an assertion for C.

Your main claim is based on your intuition that C programs can only read or modify a finite amount of state. But C programs may also read the state they modify and thus logically extend a finite input tape into an infinite one. This may be an infinite cycle, of course, but even that's beyond the power of a regular language. There is a huge difference between knowing that a program will process all input and terminate up to input vs. allowing that the program may never terminate after having only processed a finite input.

My intuitions fail me far too often for me to trust your intuitions on this subject. That you consider formal distinctions to be 'needless detail' makes it difficult for me to even take your claim seriously.

r.e. and termination

It's often customary to define finite automata such that they always halt on finite input but it isn't necessary. For example, you could define a kind of NFA that produces output upon taking certain epsilon transitions as a way to generate all strings (including some of unbounded length) in a given regular language.

Another view that might satisfy you is to regard the CPU clock as a source of infinite input for the kind of finite automata you like.

C programs can, as you say, modify their state, but they can't build an "infinite tape" unless you make non-standard assumptions about the environment. We could imagine /dev/turingtape, from which a tape cell can be read or written, with ioctls for moving left and right - then C would be Turing complete, but it's no longer simply C that we're talking about, or even C with standard files.

Finally, I agree that intuitions are tricky in these areas, as is noticing all the details about C. As someone else pointed out, so long as a recursive set of functions does not take an unbounded number of addresses of local variables, there is no formal limit on the depth of recursion, and so C is a PDA language (or an NPDA language if you include threads). So I was wrong about something, just not what you accused me of being wrong about.

-t

Doubts

I don't agree with your line of reasoning. You don't need /dev/turingtape. What about /dev/tty? I only need 4 operations: move-left, move-right, read-bit and write-bit. This suffices both to acquire input and to operate on an unbounded storage space, and does not require any extensions to standard C: getchar() and putchar() are enough.

Of course the entire machine with RAM, disk, etc. is a finite system. But all Turing machines are finite systems, they just have an infinite tape. It's not clear to me why you want to insist that the tape be "inside" the machine, in some sense. The tape is never "inside" a Turing machine, it's always out there "in the world."

You might object that the machine can't keep track in finite space of its position on the "infinite tape" of /dev/tty. But this applies to all Turing machines (which have a finite number of internal states by definition), and does not limit their universality.

C + "infinite tape"

You don't need /dev/turingtape. What about /dev/tty? I only need 4 operations: move-left, move-right, read-bit and write-bit.

Standard C does not offer any /dev/tty with infinite read/write storage.

It's not clear to me why you want to insist that the tape be "inside" the machine,

You can contrast C with, say, Scheme. Formally, in Scheme, you can cons forever with only the caveat that actual implementations are permitted (not required) to give up from exhaustion at some point.

That's one reason why I wonder (in another reply here) if, for example, recursive functions - obviously a good fit for Scheme - are right for a systems language like C.

-t

I disagree.

Standard C does not offer any /dev/tty with infinite read/write storage.

AFAIK, this is simply not true. The infinite storage is not "in" /dev/tty. The infinite storage is in the real world, and the C spec makes no claims about the memory of the person sitting at the console.

To be clear, I only need to be able to write five distinct characters and read two. I need to be able to do so a potentially infinite number of times, but there is nothing in the C standard that prevents a program from calling getchar() and putchar() an infinite number of times.
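
A sketch of that protocol, using nothing beyond getchar()/putchar() (the five written characters here are 'L', 'R', '?', '0' and '1'; the two read characters are '0' and '1'):

  #include <stdio.h>

  /* The "tape" is whatever sits at the other end of stdin/stdout. */
  static int read_bit(void)
  {
      putchar('?');               /* ask for the current cell */
      fflush(stdout);
      return getchar() == '1';
  }

  static void write_bit(int b) { putchar(b ? '1' : '0'); }
  static void move_left(void)  { putchar('L'); }
  static void move_right(void) { putchar('R'); }

  int main(void)
  {
      /* toy head movement: write a 1, step right, write a 1, step
         back, and read the cell we are standing on */
      write_bit(1);
      move_right();
      write_bit(1);
      move_left();
      return read_bit();
  }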

You can contrast C with, say, Scheme. Formally, in Scheme, you can cons forever with only the caveat that actual implementations are permitted (not required) to give up from exhaustion at some point.

The comparison with Scheme indicates to me that you still don't see what I'm saying. I am not saying that the C standard allows an infinite amount of internal storage. I'm saying that the C standard allows I/O to a potentially infinite system through /dev/tty. And this system is the tape. And this is the way that Turing machines have always been understood. The machine itself is not required to have unbounded storage. It is only required to do I/O with an unbounded external system, which the C standard allows.

PS. I think I agree with your broader claims about the preferability of less-than-Turing complete languages. I just don't buy your argument about C.

But...

... it's not a Turing machine (at least in the sense of having Turing completeness) unless it has access to an infinite tape, and C certainly does not guarantee anything like that. I don't think Tom would disagree that C + an infinite tape is Turing complete.

What is the definition of C?

What is the definition of C? getchar and putchar are both in the standard. In looking at the spec, I don't see any glaring limitations to getchar and putchar that would disqualify them from serving as the required "infinite ticker tape" (except of course for actual physical implementations, but we're just dealing with the spec). Unless you don't include standard library functions as part of the specification of C.

putchar / getchar

Would do just fine if C guaranteed they could store and recall arbitrary amounts of data. I don't have the spec in front of me, but surely it doesn't guarantee any such thing.

[Similarly, as I noted in my post earlier, I disagree with the idea that C is a PDA - the stack space is not guaranteed infinite.]

To make this concrete: C99

To make this concrete: C99 Standard. Section 7.19.1 deals with the definitions for stdin and stdout, 7.19.2 for streams, 7.19.3 for files. Streams have positioning information of type fpos_t, described as "an object type other than an array type capable of recording all the information needed to specify uniquely every position within a file."

Section 7.19.9 gives the definitions for the file/stream seeking functions, all of which accept fpos_t values. C places no restrictions on the size of files or the length of streams, and positioning information seems completely opaque from what I can see.

Thomas stated, "something like Scheme admits that implementations MAY limit the amount of consing that you do but also admits implementations with no limit." I don't really see how C does not provide this same property given the above definitions.
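
For what it's worth, here's a sketch of treating a stream as a "tape" through the opaque fpos_t machinery (whether any implementation actually offers an unbounded stream is exactly the open question in this subthread):

  #include <stdio.h>

  int main(void)
  {
      FILE *f = tmpfile();        /* read/write scratch stream */
      fpos_t cell;

      if (f == NULL)
          return 1;
      fgetpos(f, &cell);          /* remember this "tape cell" */
      fputc('1', f);              /* write the cell */
      fsetpos(f, &cell);          /* move back to it */
      printf("cell = %c\n", fgetc(f));
      fclose(f);
      return 0;
  }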

the fpos_t escape

naasking asks if the unspecified nature of "fpos_t" in C99 does not create a formal escape by which standard C programs may, after all, have access to an infinite tape.

Not quite.

You get stdin, stdout, and stderr. None of those is "read/write".

You get regular files that are read/write but those are limited in size to one of C's integer types.

Can C be *extended* so as to be Turing complete using the fpos_t mechanism plus a non-standard "infinitely long read/write file"? Sure. But then we're no longer talking about C, and we're far, far gone from talking about how C is actually used.

-t

You get regular files that

You get regular files that are read/write but those are limited in size to one of C's integer types.

Not according to C99. Perhaps in an earlier version of C, but the definitions I gave apply equally to files and streams.

naasking's fpos_t challenge

Just show me some portable C99 code that opens (aside from environmental limitations) an infinite, relative-seekable "tape".

I think you are saying that *if* we augment C with a way to open such a file or stream then it becomes Turing complete, and fpos_t is a handy mechanism, but that is not the same as saying C is Turing complete.

-t

Just applying your own statement

I think I'm just applying your own statement:

something like Scheme admits that implementations MAY limit the amount of consing that you do but also admits implementations with no limit

to C, namely:

something like C admits that implementations MAY limit the size of a stream/file but also admits implementations with no limit.

So how is this mapping incorrect?

[Edit: in a rush, so I forgot about the C code you requested. I don't see why opening any old r/w file would not suffice to meet your criteria, since again, C admits implementations that may limit file size, but also admits implementations with no limit]

I agree with you...

...but I don't think that's what Tom was saying. Tom seemed to be saying that standard C precludes the existence of an infinite tape, not merely that it doesn't guarantee it. For instance, he writes above:

You literally can not write a valid C program that relies on infinite state (unless you make non-standard assumptions about the environment).

I do not think that getchar()/putchar() constitute non-standard assumptions, and you can certainly write a valid C program that treats them as an infinite tape.

The C standard is perfectly compatible with both assumptions, although it is not compatible with infinite internal storage, for a variety of reasons. So yes, I think I agree with you, but if that's what Tom meant, I misunderstood.

Tom seems committed to the view that the tape must be internal to the machine, and not implemented via an I/O assumption. But this is in direct contrast to the normal conception of the Turing machine, which uses the tape for both I/O and intermediate storage, has no other provision for I/O, and has a decidedly finite number of internal states.

clarification about C vs. infinite state

Tom seemed to be saying that standard C precludes the existence of an infinite tape.

That is a misunderstanding. One can not write a portable program that relies on an infinite tape given just standard C, the standard C library, standard threads, and so forth. I mentioned a hypothetical addition of "/dev/turingtape" elsewhere in this discussion so looking at that might help further clarify my take on that.

It would be pretty absurd to assert that a C program can't possibly run in an environment that affords an infinite tape (or similar) so I'm not sure why you'd think I said so. The sentence you quoted is taken from a fairly clear context and put in a false light.

-t

Finished

And my point is that /dev/turingtape is not required. getchar()/putchar() provide all the necessary semantics: the ability to read and write to an infinite store. The standard remains wisely silent on the behavior and abilities of the human user.

Furthermore, the context of the sentence I quoted was obviously not clear, or I would not have quoted it. Personal attacks are not called for.

As it happens, I feel like you're not even reading what I'm writing, while you obviously feel the same way about me. So at this point, I'd say we're talking past one another, and I'm finished.

sorry for offending Matt

Bah, sorry. Sure, let's leave it, but I certainly didn't mean to gloss over your statements and blow past them, and I didn't think I was doing so.

Regards,

-t

Sorry as well

And I'm sorry I got frustrated... I do believe you were not blowing past my statements, and I didn't mean to and didn't think I was quoting you out of context, either. Anyway, I don't think the issue is worth spending more time on one way or the other. In the end, I think our disagreement is relatively shallow, while our agreement is relatively deep. Thanks.

you could define a kind of

you could define a kind of NFA that produces output upon taking certain epsilon transitions as a way to generate all strings (including some of unbounded length) in a given regular language.

Eschewing output, an NFA with epsilon-transitions can easily be turned into an NFA without epsilon transitions via a very simple algorithm:

  1. compute the transitive closure for reachability over the epsilon transitions following each input transition
  2. then remove the epsilon transitions

By no means does this allow an NFA to be non-terminating.
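
For the curious, a minimal sketch of step 1: computing the epsilon-closure of a state set by iterating to a fixed point (the four-state machine is made up):

  #include <stdint.h>
  #include <stdio.h>

  #define NSTATES 4

  /* eps[s] = bitmask of states reachable from s by one epsilon move */
  static const uint32_t eps[NSTATES] = { 1u << 1, 1u << 2, 0, 0 };

  static uint32_t eps_closure(uint32_t set)
  {
      uint32_t prev;
      do {                        /* iterate to a fixed point */
          prev = set;
          for (int s = 0; s < NSTATES; s++)
              if (set & (1u << s))
                  set |= eps[s];  /* add everything one epsilon away */
      } while (set != prev);
      return set;
  }

  int main(void)
  {
      /* closure of {0} is {0,1,2} = 0x7 given the table above */
      printf("closure(0x1) = 0x%x\n", (unsigned)eps_closure(1u));
      return 0;
  }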

Now, you say to put a (non-epsilon) output on an epsilon-transition for the NFA. I doubt you could still call the resulting machine an NFA, or still claim the resulting language to be 'regular'.

Another view that might satisfy you is to regard the CPU clock as a source of infinite input for the kind of finite automata you like.

What... so we can say that C is 'terminating up to the current CPU cycle'? Nice property.

If time is the input

If time, viewed as a one-symbol input alphabet T = {q}, is the input of the "computing world" finite state machine consisting of all bits we control, then yes, C programs can be seen as finite state machines.

But it's a "toy" observation.

re "time is the input"

No, my claim is more than that.

Addressable memory and, if you want to throw in files, addressable file space in C are both formally finite. A C program has no robust way to read and write an arbitrarily large amount of state. The amount of state it can read and write is very, very large for reasonable values of sizeof(X) for various X's, but the amount of state is most definitely finite.

My claim has nothing at all to do with "who controls what bits" in the sense that I take you to mean.

-t

toy again

I meant "we control" in a sense of "are available to us". It doesn't matter who and how controls them, important is that there is only finite amount of them. The values of all those bits represent a number, and this number is one state. Having consumed enough time input, it will move to another state. Obviously every computation we do is embedded in this finite state machine. Since the state transitions appear somehow arbitrary and having seen the Matrix, I used that formulation. (Sorry if this clarification is useless.)

The finite state machine (btw. not acyclic) needs an input in order to move to another state. But a push-down automaton doesn't. As was said before, even with finite memory available, the push-down automaton (and the Turing machine) can run forever, without input. Given finite memory, it will eventually repeat its steps, but it can run forever.

You used the real-world limitation of finite space to argue that C programs are finite state machines. But in the real world we can control devices with C programs, e.g. to turn an LED on and off, in a loop, forever. You can't do that with a finite state machine, unless you feed it with infinite input.
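
The kind of program meant here, as a sketch (the register address is made up; on a real board it would come from the datasheet):

  #include <stdint.h>

  #define LED_REG (*(volatile uint32_t *)0x40020014u)  /* hypothetical MMIO */

  int main(void)
  {
      for (;;)
          LED_REG ^= 1u;          /* toggle forever -- no input consumed */
  }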

"infinite input"

The simplest answer is probably "Regard the CPU clock as a source of infinite input." However, as pointed out elsewhere, C is actually a PDA or, with threads, NPDA language.

-t

one "interesting" C program

I'm not aware of any "interesting" programs that can't be written in C in spite of that limitation.

One simple C program recognizes an input consisting of balanced parentheses, using recursion. That's a classic example of a non-regular language. The C standard supports such a program, and does not require that it blow up in any particular way.
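
A minimal sketch of such a recognizer, where the recursion depth tracks the nesting depth (hypothetical, but standard C, reading one line from stdin):

  #include <stdio.h>

  /* Consumes one balanced group; returns the lookahead character. */
  static int match(void)
  {
      int c = getchar();
      while (c == '(') {          /* each '(' recurses one level deeper */
          c = match();            /* consume the nested group... */
          if (c != ')')
              return EOF;         /* unbalanced */
          c = getchar();          /* ...and whatever follows it */
      }
      return c;
  }

  int main(void)
  {
      puts(match() == '\n' ? "balanced" : "unbalanced");
      return 0;
  }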

Many "interesting" C programs likewise are designed as performing tasks that are beyond regular languages.

You ask whether high-level features could support the finiteness of state. Maybe you consider fixed-size pointers such a feature. I regard support for total functions to be one of the more promising paths in that direction.

On my job, the ability to compile a program to fit into a fixed set of resources would be valuable. Usually, though, the flexibility to adapt to larger or smaller sets of resources is even more valuable.

re the "interesting" C program

Given the definition of C and your program that recognizes balanced parens, I can prove that every instance of the program will yield bottom for a sufficiently deeply nested set of parens. So, your C program is in essence a regular expression for recognizing a very large but distinctly finite set of balanced paren sentences.

As for "high level features [supporting] the finiteness of state" I have mostly half-formed, not-ready-for-airing notions about what can be gained in that area. I work on it, from time to time.

-t

re the "interesting" C program

I'm not sure that's true, if you use recursion to match the parens, as I don't believe the C standard mentions a maximum stack depth.

terminating semantics... question not dead yet.

Will anyone volunteer to offer up an example of an "interesting in the real world" programming problem which can not be solved by writing a solution in a language that has guaranteed termination semantics?

Mr. Hilbert is specifically instructed to not offer his usual answer here as Mr. Goedel has amply demonstrated that Mr. Hilbert's customary question is no longer terribly "interesting in the real world", although we fully acknowledge and acclaim both gentlemen for helping us refine the question, even if we are forced thereby to make it a tad subjective. Messrs. Turing and Church are also so advised by the customary morphisms.

What interesting, real-world program could not even in principle be written in a language that confined program writers to terminating semantics? We acknowledge that "interesting" is subjective and encourage interesting debate on what "interesting" means.

-t

My question here is partly inspired by a thread from a foundations of math mailing list in which it was observed without objection that none of the well known, hard, interesting open problems in math are in any way indicated to be permanently undecidable.

terminating semantics... question not dead yet.

Surely any program that monitors some external system can't be written in a language that is guaranteed to terminate? Say a program that counts the number of alpha particle emissions of a piece of radioactive material in a time interval, and plots the result? Or a seismograph controller? How would you write that in a program guaranteed to terminate?

Terminate 'up to input'

Surely any program that monitors some external system can't be written in a language that is guaranteed to terminate?

Guaranteed termination semantics are typically "guaranteed to terminate up to input". That is, it's a guarantee that the program will eventually consume all input and terminate.

For a program that monitors external systems, we probably want an even stronger guarantee: termination up to input with realtime constraints.

Which language?

My opinion: Any useful program terminates up to input or up to output (based on demand). And any useful non-terminating program is simply a terminating program tied to a non-terminating input or a non-terminating demand for output.

The issues of non-terminating programs (where non-termination is not tied directly to an input or output stream) are manifold. Among them is process control. We can get away today with non-terminating languages only because we have the power to pull the plug on them. That power is no longer readily available when we start dealing with self-healing programs on a distributed overlay.

The problem is creating a language in which many useful terminating programs can be conveniently written without accidentally introducing non-termination, and without costing overly much in terms of performance, robust composition, security, safety, modularity, etc.

That is, you mention: writing a solution in a language that has guaranteed termination semantics, but which one? For any given language with guaranteed termination semantics, there will be many useful, terminating programs that may not be written in that language. Further, there will equally be many programs that are non-terminating by virtue of external input that cannot be written in that language.

If a syntactically non-terminating language allows composition with 3rd party provers for safety, termination analysis, etc. then one might be able to tackle the problem from both sides: write the programs and extend the provers until they are capable of proving termination for your program (and hopefully many similar programs). I think that's the best one could do, practically.

dmbarbour: interesting claim about guaranteed termination

For any given language with guaranteed termination semantics, there will be many useful, terminating programs that may not be written in that language.

That's a very interesting claim. I've never heard anyone explicitly make that claim or defend it before. Can you elaborate / defend that claim? I don't see it as intuitively true. My "not dead yet" comment that started this subthread was meant to hypothesize the opposite (that there can exist a guaranteed termination language in which all "interesting" programs can be written, where "termination" is understood relative to input and output as a previous comment in this sub-thread notes.)

-t

That's a very interesting

That's a very interesting claim. I've never heard anyone explicitly make that claim or defend it before. Can you elaborate / defend that claim? I don't see it as intuitively true.

Useful programs that cannot be written in a terminating language L include the evaluator for L and evaluators for any equally expressive language. This isn't intuitively true (unless your intuition has been rather heavily influenced by Gödel).

There is a gap in implied knowledge between 'largest terminating language' and 'terminating language with guaranteed terminating semantics'. The latter will always be smaller than the former because we cannot formulate a proof (a guarantee) for termination that precisely matches all halting programs.

terminating semantics

Will anyone volunteer to offer up an example of an "interesting in the real world" programming problem which can not be solved by writing a solution in a language that has guaranteed termination semantics?

The way this is written the answer is trivially 'no', since for any terminating program there exists a language with guaranteed termination semantics which can express that program. I think what you're after is to fix a language that's very powerful and general purpose (like taking programs provably terminating in ZFC) and then ask whether there are interesting programs that cannot be expressed.

Also, I don't think you can dismiss Godel that easily. If L is a language that can express all interesting programs, then that fact alone would seem to make an evaluator for L interesting as well. Though, as I've described in this thread, I think there is a hatch through which one can escape from some of the "conclusions" that have been drawn from this argument.

against: "total functional languages can't include eval"

The claim carried over from that other thread is that a total functional language cannot contain its own "eval" function. The proof there is a constructive proof that relies on a Goedel numbering of programs. So, "eval" is defined with type "Nat -> (Nat -> Nat)". The construction defines "evil :: Nat -> Nat" as "evil code = 1 + (eval code code)", and it then follows that "evil 666" (where 666 is the encoding of evil) reduces to "1 + evil 666", a clear contradiction.

That's a neat little proof, sure, but I don't think that it applies here at all.

The problem is that that proof assumes without justification that our terminating language must include the infinite type "Nat".

In other words, that proof begs the question. It proves as a lemma, in effect, that a total functional language that includes a Nat type cannot include its own eval. It assumes that a total functional language must include a Nat type. It concludes from that assumption that no total functional language can include its own "eval".

I'm not interested in total functional languages in general; I'm interested in the possibility that there exists a (practical) total functional language in which all "real world interesting" programs can be written - and, sure, that includes writing "eval" for the language itself. I see no reason that Nat must be a type in such a language.

-t

Restricting Composition

'Nat' may be construed as describing a program (i.e. code is a long string of bits representing a Nat). Unless you restrict composition of the language such that only programs of up to a [bounded] finite size may be described, the above generalization should hold.

The notion of eliminating 'Nat' from the language is equivalent to saying you wish to eliminate the language's code representation from the language. That would make writing an evaluator rather difficult.

interesting programs are bounded in size.

Unless you restrict composition of the language such that only programs of up to a finite size may be described, the above generalization should hold.

I'm quite comfortable with such a restriction. Is there a "real world interesting" case of a program that shows I'm foolish for having that comfort?

The physical conditions of life mean that we're definitely limited to finite-sized programs so I don't see why a programming language definition has to pretend otherwise.

-t

Which size?

Which size do you choose? Surely you'll never need more than 640 kB of source-code, right?

Is there a "real world interesting" case of a program that shows I'm foolish for having that comfort?

Eclipse. Firefox. Operating systems. Any program whose functionality comes primarily from extensions makes your assertion that you can reasonably finite-bound the size of source-code used to describe programs somewhat questionable.

The physical conditions of life mean that we're definitely limited to finite-sized programs so I don't see why a programming language definition has to pretend otherwise.

We can assume programs are finite (similarly, each Nat that 'evil' produces has a finite representation), but "restrict composition to a [bounded] finite size" means there's a finite maximum size for source-code chosen before looking at the source-code. I don't see why a programming language definition should be bothered to choose a size. Do you?

The problem with making a choice here is that, no matter what you choose, you're wrong. That's the nature of choosing an arbitrary limitation. If you ever encounter the limitation, it will introduce discontinuity spikes in further development of a program. If you never encounter the limitation, then it may as well have never existed [indeed, fans of constructive proofs, like Marc, might assert that by their logic the limit doesn't exist].

Languages should probably support process accounting better than they currently do, allowing us to make temporal and spatial guarantees about programs as a whole, ideally without sacrificing safety (safety is a goner if any operation can be halted asynchronously due to timeouts or memory limits - violating protocols for sensors and actuators is a very bad thing). When it comes to resources, we want utilization, efficiency, and the ability to both predict and dictate consumption for any given operation or composed set of operations.

But I can't see any reason to assert any particular resource limits within the language definition.

In practice, it works in theory, again

I should have guessed your objection, as disallowing Nat (in favor of some bounded integer type) is in the same spirit as your other posts in this thread. In fact, C probably fits the bill for your always terminating language that includes all "interesting programs", if you add a rule that C programs which execute for 2^{bits of machine state} steps automatically terminate with an error code.
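
As a sketch of that rule (everything here is a toy: machine_step() is made up, and the fuel constant stands in for 2^{bits of machine state}):

  #include <stdint.h>
  #include <stdio.h>

  enum { RUNNING, DONE };

  static int machine_step(uint64_t t)
  {
      return t >= 41 ? DONE : RUNNING;   /* toy machine: halts at step 42 */
  }

  int main(void)
  {
      uint64_t fuel = 1u << 20;          /* stand-in for 2^(bits of state) */
      for (uint64_t t = 0; t < fuel; t++)
          if (machine_step(t) == DONE) {
              puts("terminated normally");
              return 0;
          }
      fputs("step budget exhausted\n", stderr);
      return 2;                          /* forced termination, error code */
  }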

But in practice, theoretical termination by itself isn't a useful property to have if what you're really after is closer to "terminates within my lifetime."

terminates "shortly" versus "before heat death of the universe"

But in practice, theoretical termination by itself isn't a useful property to have if what you're really after is closer to "terminates within my lifetime."

Well, we're onto a separate question now. My question was whether we can agree that no interesting program requires a language that has non-terminating semantics. You seem to accept, at least informally, that we can agree about that. Now you raise a new question: can we have a language where "termination in a practical amount of time" can be guaranteed for some subset of programs and, then, are there any "real world interesting" programs that are not members of that subset. Good question.

-t

Here's an easy one...

Goedel's completeness theorem for first-order logic means that there is a semi-decision procedure that, given a formula of FOL, will eventually terminate with a proof if the formula is indeed valid. Basically, you just enumerate the proofs, and completeness guarantees that if the formula is valid, it's provable, and so eventually we'll hit its proof. However, this procedure won't necessarily halt when given a formula that isn't valid, and in fact deciding validity isn't possible in general for FOL.

As a result, we can't write this program in a terminating language.
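
As a sketch, the procedure looks like the loop below (proof_checks() is a made-up stub; a real one would verify an encoded FOL proof against the formula):

  #include <stdbool.h>
  #include <stdint.h>
  #include <stdio.h>

  static bool proof_checks(uint64_t proof, const char *formula)
  {
      (void)formula;
      return proof == 12345;     /* stub: pretend proof #12345 checks out */
  }

  int main(void)
  {
      /* halts iff some proof exists; diverges otherwise -- which is
         exactly why it can't live in a terminating language */
      for (uint64_t p = 0; ; p++)
          if (proof_checks(p, "forall x. P(x) -> P(x)")) {
              printf("proved by proof #%llu\n", (unsigned long long)p);
              return 0;
          }
  }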

It's fairly trivial to write

It's fairly trivial to write a program in C that will calculate pi or root 2 to an infinite level of precision (continuously producing output until it is terminated). How might one produce a finite automaton to do that? How about testing for primality?

A C program is able to match parens up to a depth of 2^(amount of available memory) - in that respect, C is a Chomsky type-1 or linear-bounded automaton (very different from a finite state machine, with which you seem to have confused it). It can simulate a universal Turing machine with a finite-length tape, or Lisp, or any other programming language, or any CPU, so it is computationally equivalent to each of those.

Infinite

It's fairly trivial to write a program in C that will calculate pi or root 2 to an infinite level of precision

Not infinite - the number of digits it could output is bounded by 2^(bits in memory) again, for the same reason as with matching parentheses (since by the irrationality of pi the digit sequence is non-periodic).

Not quite finite

There is nothing in the C standard that requires the contents of a file-descriptor to be addressable by lseek(2). As a result, it is possible for a C program to generate an infinite sequence; it just requires a file-descriptor that never sets errno to ENOSPC from a write(2) or returns 0 from read(2).

/dev/urandom, /dev/null, or a pipe/FIFO will do nicely.

C is not an FA; it is at least a PDA and probably a bounded TM, but my grasp of automata theory isn't up to the task of deciding which.

Wrong issue

Matt is noting that, because irrational numbers like pi and root two have non-periodic digit expansions, there must be a unique state in memory for each digit produced. A repeated state would indicate a cycle, which mustn't exist.

There are at most 2^(bits-in-memory) unique states in memory. One will, sometime well after the heat-death of the universe, eventually exhaust this state-space and need to either fail in error or continue in error (i.e. repeating digits).

So, theoretically, one cannot produce infinite-precision for pi or root two, at least not that easily.

If one leverages the HDD through the filesystem, memory-mapping, etc., then one might be able to achieve infinite 'bits-in-memory' by assuming that the growth in available HDD space is more rapid than the consumption of digits of pi. That is, neither Matt's nor Thomas's analysis takes into account the temporal deltas in the potential memory-space. They assume a fixed memory-space once the program starts.

Not that any of that matters in practice. With an efficient encoding for the algorithm, it wouldn't take much memory to ensure one won't run out of states before heat-death of the universe when printing digits of pi or root two.

The actual consumer, be it /dev/null or whatever, was not relevant to the analysis.