What is a type?

There's nominal typing.

There's structural typing.

But what if an object's type is simply the aggregate of its behavior?

I'm curious if there are other type systems that work in this same manner. Comments and criticism welcome.

session types

Session types are very useful for describing the surface behavior of an object or process.
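A minimal Python sketch of the flavor of this idea (all class names here are hypothetical): encode each protocol state as its own class, so that only the operations legal in the current state exist at all, and each operation returns the next state.

```python
# Hypothetical two-step protocol: send, then receive, then closed.
# Each state exposes only its legal operations.

class Closed:
    """Terminal state: no operations remain."""
    pass

class AwaitingReply:
    def __init__(self, log):
        self.log = log
    def receive(self):
        # Returns the reply and the next protocol state.
        return "reply to " + self.log[-1], Closed()

class Ready:
    def __init__(self):
        self.log = []
    def send(self, msg):
        self.log.append(msg)
        return AwaitingReply(self.log)

chan = Ready()
pending = chan.send("hello")
reply, done = pending.receive()
# `pending` has no .send and `done` has no .receive, so out-of-order
# use fails immediately with AttributeError.
```

This only checks the protocol dynamically; a real session type system would reject out-of-order use at compile time.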

The meaning of 'type' depends on the context of a 'type system'. I think it will be easier to define 'what is a type system' than to define 'what is a type' without context.

Behavior is usually the interaction of multiple types.

If you're approaching it from an OO perspective, it makes sense to think of types as "some stored raw information, plus some set of behaviors that access, change, or are affected by that information." That's the usual breakdown in classes on OO programming anyhow.

What seldom really comes out as more than "oh that's a special case..." or "well, when that happens you do this ad hoc thing that isn't included in the materials we actually teach..." is that behavior is ALMOST NEVER DEFINABLE in terms of only one type.

Even the most plain-vanilla examples of "pure OO style" hardly ever define operations that work on a single type. They define operations that specify interactions between types. When you're defining a new object, the methods you write, unless the ONLY other arguments are members of the same type, are all interactions with other types, including "native" types like various kinds of strings and numbers, and "constructed" types including other objects and records and subranges and vectors and tensors of other types, etc etc etc.....

Attributing the operation that defines the size of some array class to an 'array' header makes every bit as much sense from a pure-logic perspective as putting it in a 'numbers' header. Because that's an operation on both.

Likewise, if we want our arrays to have 'special' behavior if used to store any object class that has a 'collect_garbage' method, that's an interaction between the array class and a set of other classes that may be completely unknown to the programmer at the time he's making the array class. Does it make sense to define this behavior as belonging specifically to the array class?
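The array/collect_garbage interaction can be made concrete with a small Python sketch (the class and method names are mine, mirroring the comment's hypothetical example): the array defines the interaction, yet it equally constrains every element class it will ever hold.

```python
class ManagedArray:
    """An array that gives 'special' treatment to any element that
    happens to define collect_garbage -- including classes that did
    not exist when ManagedArray was written."""
    def __init__(self, items):
        self.items = list(items)
    def sweep(self):
        collected = 0
        for item in self.items:
            # The interaction lives here, in the array class, but it is
            # just as much part of the definition of the element classes.
            if hasattr(item, "collect_garbage"):
                item.collect_garbage()
                collected += 1
        return collected

class Pool:
    def collect_garbage(self):
        self.freed = True

arr = ManagedArray([Pool(), 42, Pool()])
print(arr.sweep())  # -> 2
```

The behavior belongs to neither class alone: move it, and the definition of both changes.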

So... A type is defined by its behavior ... but its behavior is interactions with other types, which are also defined by that behavior.

So if you want to know what a type is, you're going to find its definition, probably scattered across the code for itself *AND* the code for dozens of other types. And even then the definition will be incomplete in most cases unless you know some additional things about those other types.

We talk about separability and abstraction. But that's not what we actually do. We try to break it along the fault lines that leave us with the definitions easiest to think about, and then because the stuff on the other side of that fault line is fairly simple in most cases, we can pretend it isn't part of our object.

But sometimes we have to make behaviors that deal with two, or even three or more, different kinds of complex objects. And that behavior is just as much part of the definition of all those objects. We are encouraged to organize our code in ways that ignore this and just pick one class that this behavior "belongs" to. It's a poor fit.

singleton types

What seldom really comes out as more than "oh that's a special case..." or "well, when that happens you do this ad hoc thing that isn't included in the materials we actually teach..." is that behavior is ALMOST NEVER DEFINABLE in terms of only one type.

There is a style of programming unique to proof assistants and dependently typed languages where the programmers mostly write types, and the IDE auto-fills implementations based on search and heuristic strategies. Essentially, an edit-time compiler.

There is a community of programmers who develop in this style. I follow quite a few on twitter. It's like peering through a window into a wonderland. I've never had the opportunity to try this out in my work.

Of course, these types must be sufficiently precise to fully describe the desired behavior, or at least refine the behavior such that there are very few valid implementations. Arguably, one could say that the 'types' have become the new programming language. And, of course, these languages inevitably support types of types, and functional abstraction of types.

In any case, it's difficult to say what cannot be done without having proven it, and the act of attempting to prove it might open one's mind to some unfamiliar ideas and systems. What you can say is that, today, most of us aren't programming in a style where types uniquely constrain behavior, and doing so isn't feasible in most PLs.

Definition of the term "behavior"

So... A type is defined by its behavior ... but its behavior is interactions with other types, which are also defined by that behavior.

So if you want to know what a type is, you're going to find its definition, probably scattered across the code for itself *AND* the code for dozens of other types. And even then the definition will be incomplete in most cases unless you know some additional things about those other types.

We define the term "behavior" specifically to mean: the set of method signatures that compose the type.

And many (but not all) types are nominal, so in the case of a nominal type, the identity of the type can be used in place of the following-the-graph approach that you described.

In practice, thus far (the last few years of working in this language), this has worked out quite well.
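For comparison, Python's typing.Protocol captures much the same "behavior = set of method signatures" definition of a type; a minimal sketch:

```python
from typing import Protocol, runtime_checkable

@runtime_checkable
class Quacker(Protocol):
    """The type is nothing but its method signatures."""
    def quack(self) -> str: ...

class Duck:
    def quack(self) -> str:
        return "quack"

class Robot:
    def beep(self) -> str:
        return "beep"

# Duck never names Quacker, yet it is one: membership is the
# aggregate of behavior, checked structurally.
print(isinstance(Duck(), Quacker))   # True
print(isinstance(Robot(), Quacker))  # False
```

Note that runtime_checkable isinstance only checks method presence, not signatures; a static checker like mypy checks the signatures too.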

Why are types tricky?

Types are an old concept, originating in linguistics and spreading to logic, philosophy and programming language theory. It's not surprising that types are a controversial concept for philosophers, because of the nature of philosophy, but they are usually not so in linguistics and the foundations of mathematics. So why do programmers have issues with them?

I'll start by agreeing with David: types are artefacts of type systems. A type system is a more or less formal system of collections that is usually heterogeneous, and types describe the collections in that system.

The thing about programming is that it is an activity that is generally done before execution, but we want to have a confident understanding of what happens during execution. Thus programmers are concerned with their constructions at at least two stages: first as expressions, as they occur in the programming language, and then as values, describing the lifetime of the runtime constructions during execution.

The problems, then, arise from the fact that we are thinking about two systems of types and the relationship between them. Linguists could have the same problem: they are interested in both types in grammar and in the conceptual system, but because they care so much more about the former than the latter, chaos doesn't break out.

The oppositions dynamic/static and structural/nominal are really patterns governing the relationship between these two (or more) type systems.

All good points ...

We did not differentiate between the type system, per se, and the types themselves. It is a single model of thought, that simply introduces more than one term. The goal of the model, though, was to simplify the constituent building blocks of the type system, to simplify the proofs of correctness and to provide the means for arguing something to be correct, or invalid.

Simple answer

But what if an object's type is simply the aggregate of its behavior?

Then there is no way to statically and locally prove the type's behavior is confluent and terminating, because you would have to aggregate all behavior before determining the answer. There is no type checking, as such.

That said, if you are interested in this approach, you can look into co-algebraic systems, which define a result in terms of its behavior.

One middle ground is to use various workarounds like gradual types and metaprogramming to do neat tricks like peephole optimizations. In this way, the code has a template for behavior that can be specialized at code-write time by decisions made by the client caller rather than the API provider.

However, in practice, many engineers have found this to be unnecessary with modern Hindley-Milner type inference systems, because you can always use the "Crate" trick to say there exists some type W with property X. And then you just parameterize X to get late-bound behavior. And the client caller literally doesn't have to say what W is, because the type system infers it automatically. Such tricks are what underpin Eirik Tsarpalis's F# pickling library, which allows for safe, functional, performant SerDe (serialization/deserialization). And then you can formalize this Crate trick into something like type classes, such that you are not checking types, but rather the soundness of a kind of types.

And then suppose you want to do the opposite - you want the flexibility to let every client have their own behavior. There is a trick for that, too, called finally tagless interpreters. John de Goes has some fantastic presentations on this concept and the pros and cons. But the key insight is that the pros are all superior to phrasing things in terms of co-algebraic formulas, and it has none of the cons of needing to compute the transitive closure of all behaviors to get the sum total of what the type is.
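A rough sketch of the finally-tagless shape in Python (the tiny expression language here is my own toy example, not taken from the presentations mentioned): the program is written against an abstract algebra of operations, and each client supplies its own interpreter.

```python
# The "program" is abstract: it only assumes its argument provides
# lit/add operations, and says nothing about their representation.
def program(alg):
    return alg.add(alg.lit(1), alg.add(alg.lit(2), alg.lit(3)))

class Eval:
    """Interpret the program as arithmetic."""
    def lit(self, n): return n
    def add(self, a, b): return a + b

class Show:
    """Interpret the same program as a pretty-printer."""
    def lit(self, n): return str(n)
    def add(self, a, b): return f"({a} + {b})"

print(program(Eval()))  # -> 6
print(program(Show()))  # -> (1 + (2 + 3))
```

Each client gets its own behavior by passing its own interpreter; no transitive closure over all behaviors is ever computed.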

So, I get the interest in defining things in terms of their behavior. I thought it was a cool idea in my 20s, as well, and spent a lot of time writing about it. But I realized it was mostly my obliviousness to learning a few simple tricks that led me down this sacrilegious path.

Good references!

Excellent ... I shall have some follow up reading to do!

We designed the type system based on the behavioral model, but we only have to enforce it in that manner for duck typing; all other types are resolved into (boiled down) nominal types, so while the principles are still upheld, the mechanisms are generally far simpler.

We designed the type system

We designed the type system based on the behavioral model, but we only have to enforce it in that manner for duck typing

Well, the counter-argument to what I stated, "it's not needed", is that it is needed :P

In theorem proving, it is useful to specify co-datatypes to avoid manual constructions of infinite properties like infinite depth of a tree with finite breadth. For such a definition, you would ideally want co-datatypes and datatypes that can be mixed co-recursively.

That said, I thought I would first steer you towards Hindley-Milner tricks in general, and then we could talk about _real use cases_ that motivate co-datatypes. I think, from a nominal type system, it is very hard to think about the real world motivations for naming things effectively in a mathematically certain way. But when you start talking about proving certain structural properties of your types, the benefits get a lot more interesting.

Is there a difference between Hindley-Milner and Duck typing?

In terms of expressiveness, isn't Duck typing just a way of implementing Hindley-Milner typing?

Or is one an implementation strategy and the other an analysis strategy?

People talk about Hindley-Milner like it's a strategy for implementing type systems, but it's only a way of checking that something conforms to a type system. What we mean to talk about is the set of type systems that take full advantage of what Hindley-Milner style type checking provides.

There are dozens of different ways to express types that take full advantage of everything Hindley-Milner checking provides, and I think that Duck Typing is one of them.

Of course I'm the guy who'd cheerfully go a step further and provide a language with latent types, with aggressive H-M type inference, and then provide compile-time warnings for any variables bearing runtime type annotations that the checker couldn't eliminate at compile time. So in my mind it's purely a checking and type inference tool, and expression of the language's type rules is a separate question.

Neither here nor there

In terms of expressiveness, isn't Duck typing just a way of implementing Hindley-Milner typing?

Or is one an implementation strategy and the other an analysis strategy?

As I understand it, they're orthogonal concepts.

Duck typing does not use any inference chain to determine whether a term can be supplied as a given type. It uses the currently known set of properties, methods, events, etc. to determine whether a term can be supplied as a given type. The operative phrase is "currently known", and there are different ways to implement the meaning of that phrase; the most common is run-time type checking. There are also extensions that allow an object to add more methods at run-time, called prototyping. In general, run-time type checking is, by definition, undecidable at compile time.

Hindley-Milner typing is an algorithm for automatically inferring a type, and whether a term can be supplied for a given type. Typability and type checking are decidable.

There are dozens of different ways to express types that take full advantage of everything Hindley-Milner checking provides, and I think that Duck Typing is one of them.

Clearly, therefore, Duck Typing does not "take full advantage of everything Hindley-Milner provides", or mathematicians would use Duck Typing for proof specification.

There is something important about the idea of co-definition.

Any time you have a method that specifies an interaction between types, a method that implements that interaction is a fundamental part of all of the types. If you define frobbing_array.frob(integer) as a method that's a crucial part of the definition of a frobbing_array:

frobbing_array.frob(integer)->fixtype

Then something cannot be a frobbing_array in your program unless there exists an implementation for 'frob' that takes that thing as its implicit argument.

But also something cannot be an 'integer' in this program unless some implementation of frob can accept it as its explicit argument with any frobbing_array.

And any implementation of 'frob' that attempts to return something other than a 'fixtype' doesn't count in determining whether these needs are met.

So that method, to some degree, is part of all three class definitions.

"Bignum has not implemented the 'Integer' interface because:  
The Frobbing_array interface declares frobbing_array.frob(integer) AND 
no implementation of frobbing_array.frob accepts bignums."

Makes just as much sense as

"Fondling_array has not implemented the 'Frobbing_Array' interface because: 
The Frobbing_array interface declares frobbing_array.frob(integer) AND
the Bignum interface declares that it  implements the 'Integer' interface AND 
no implementation of fondling_array.frob accepts bignums."

God I wish real compiler error messages made this much sense. Pointing out this kind of missed subsequent definition would prevent a genuine problem that would be likely to lie dormant for a long time and then bite someone in the butt.

Algebraic or Axiomatic abstract data types

There are some old papers by John Guttag & others that define a data type in terms of how its operations behave by way of a set of rules or "axioms". Thus for example, one can define the behavior of a stack in terms of is-empty, top, pop & push operations. So for example top() or pop() fail if is-empty() is true. And the fact that a stack is a /generic/ type falls out naturally. Seems to me, one ought to be able to extend this to concurrency and communication (and even capabilities). One practical application of this would be to use these rules to define a complete set of unit tests, provided the specification is complete.

Seems to me, one ought to

Seems to me, one ought to be able to extend this to concurrency and communication (and even capabilities). One practical application of this would be to use these rules to define a complete set of unit tests, provided the specification is complete.

It was tried at Microsoft Research, and the researcher even got it adopted by the Common Language Runtime that powers the C# language, and then he left for Facebook.

My whisperers told me that the idea actually was extremely difficult to extend to concurrency and communication, at least in a way backward compatible with the CLR execution model. For that reason, and the fact that only that Microsoft researcher really understood how the tool worked, Microsoft supposedly ditched it.

References?

Here is one of the early papers that talks about axiomatic definitions of abstract data types.

My whisperers told me that the idea actually was extremely difficult to extend to concurrency and communication, at least in a way backward compatible with the CLR execution model. For that reason, and the fact that only that Microsoft researcher really understood how the tool worked, Microsoft supposedly ditched it.

I would appreciate if you can share any references or researcher names to this work. A websearch didn't reveal anything.

Code Contracts

https://www.microsoft.com/en-us/research/project/code-contracts/#!publications

Manuel Fahndrich

If you search GitHub you'll find confirmation of it being dropped from the CLR as part of .NET Core (now .NET 5+), with some hints as to why.

Not quite the same as axiomatic ADT

Thanks for the reference.

This seems to be similar to Bertrand Meyer's design-by-contract (implemented in Eiffel), with subroutine pre-/post-conditions and class invariants. In an AADT the axioms are separate from its methods. For example:

// s is of type stack<T>, x is some element of type T
is-empty(newStack()) == true
is-empty(push(s, x)) == false
pop(push(s, x)) == s
top(push(s, x)) == x

There can be more than one implementation for some stack<T>. The compiler can then verify that all such implementations satisfy these axioms or generate code to do so at runtime or can be used to generate unit-tests if so desired. Or in case of concurrent access, add code automatically to allow for the possibility that another thread may make a pre-condition true. Could be very useful for using buffers correctly!
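As a sketch of how the axioms double as tests, here they are run against one candidate implementation in Python (a tuple-backed stack; the representation is my choice, and any implementation satisfying the axioms would do):

```python
# A tuple-backed stack; immutable, so pop(push(s, x)) == s holds trivially.
def new_stack(): return ()
def is_empty(s): return s == ()
def push(s, x): return s + (x,)

def top(s):
    if is_empty(s):
        raise ValueError("top of empty stack")  # top() fails if empty
    return s[-1]

def pop(s):
    if is_empty(s):
        raise ValueError("pop of empty stack")  # pop() fails if empty
    return s[:-1]

# The axioms, checked on sample values. A property-based tester
# (e.g. Hypothesis) could quantify over s and x instead.
s, x = push(new_stack(), "a"), "b"
assert is_empty(new_stack()) is True
assert is_empty(push(s, x)) is False
assert pop(push(s, x)) == s
assert top(push(s, x)) == x
```

Running the axioms on sampled values is exactly the "generate unit tests from the specification" idea from the comment above.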

This would fit in quite well with languages that provide interfaces.

In some sense a much simpler version is already used when specifying generic types such as an orderedTree<T> in that objects of type T must be comparable.

It may be that this technique is not easy to use for anything more complicated (or hasn't been tried on any realistic examples).

types are constructive predicates

A programming language is a constructive, reflexive, categorical theory -- which is a much simpler concept than it might sound at first.

In what follows, informally think of "type" and "category" as synonymous. I'll split hairs on what makes them different later.

A category is a collection of objects defined by a rule of object construction.

Here is an example of a definition of a category (called Game):

We can create an object just by asserting axiomatically that it exists such as:

The object written as {|} (pronounced "zero") is an object in the category Game.

We can trivially construct an object from earlier constructed objects:

If G is a previously constructed object in category Game, then the objects written {G|} and {|G} are in the category Game.

If L and R are previously constructed objects (possibly the same object) in Game then so is {L|R}.

All such rules of construction must be computable in a finite number of steps. I.e., every object in category Game can be derived from just the rules of construction, applying the rules a finite number of times.

(The particular category Game is borrowed from J. H. Conway's theory of games.)

Categories, once defined in this manner, are themselves mathematical objects, and new categories can be constructed from them by defining objects called "functors". A functor is defined in terms of an earlier existing category, which we can call the "pre-image" of the functor, a category we can call the "post-image", and a finitely computable rule of computation that maps each element of the pre-image to an element of the post-image. For example, here is a rule for a functor we can call LeftOption:

LeftOption({|}) is the object RightWins

LeftOption({L|}) is the object L

LeftOption({|R}) is the object RightWins

LeftOption({L|R}) is the object L

We could prove trivially, for example, that every Game is an object in the post-image of LeftOption and that RightWins is also an object in the post-image of LeftOption.

In other words, the definition of the functor LeftOption satisfies the rule for constructing a category, and so there exists a category we could call PostImage(LeftOption).

(In the actual game theory of J. H. Conway, we would also define a similar RightOption, which maps {|} or {L|} to an object LeftWins, and every object {|R} or {L|R} to the object R. The idea is that the sets L and R in the object {L|R} model the next moves available to players "Left" and "Right". Conway's formalism is that when a player has no more moves available to them, they lose.)

You can see how this kind of math might model computation in general. We begin with a finite number of primitive objects, such as {|}, RightWins, and LeftWins. We can construct other objects from earlier objects using finite computations. Every object is a member of one or more categories (for example, {|} is an object in categories Game, LeftOptions, and RightOptions). Categories are defined extensionally, meaning that if we can prove that two separately constructed categories contain exactly the same elements, then they are in fact a single category that happens to have been given two names.

You can start to see how categories come out as programming language types here. You could, for example, define a category "int32" of bit strings of length 32, various functors for basic arithmetic, and so on. The technical details can be a bit tedious (for example, it takes two functors, composed, to define an operator like "+").

It's worth noting that the way we construct functors in this system satisfies the rules for constructing a category. In other words, it is possible to construct a "category of all functors".

Functors model "all the programs that we can prove halt in finite time". They are a good model for machine instructions or for single-step operators in a programming language, but obviously -- since we can construct every functor -- not every program (which after all might not halt) is a functor.

So in this formalism, then, what is a "program"?

First, we define "computational step semantic" as follows:

Assume that MachineState is a previously constructed category. Informally, objects in MachineState represent the state of a running program, all future input to the program, and all past output from the program.

Assume that Halt is a previously constructed category, with the property that no object in Halt is also in the category MachineState.

A functor F is an object in the category StepSemantic if its pre-image is MachineState and every object in the post-image of F is provably either an object in MachineState, or an object in the category Halt (it can't be in both, by definition).

Every object in category StepSemantic is a "program". We can formally use the name Program as a synonym for the category StepSemantic.

So what is a "programming language"?

A programming language is a functor whose pre-image is MachineState and whose post-image is a category whose objects are objects in StepSemantic.

Conceptually, a programming language is defined by its interpreter, which is represented as a functor.

I said at the outset to think of categories as types, but that I'd split some hairs on that identification later. Here we go:

If we identify types and categories, our rules of construction guarantee that we can enumerate every object of a given type. There may be infinitely many objects of a given type -- but every object in the type will appear, eventually, in that enumeration.

The problem is that we can't always tell if an object is not of a given type, if we define "type" that way.

For example, it is straightforward to construct a category of "all programs that halt after finite steps". Yet, it is impossible to define a category of "all programs that do not halt, ever".

This implies that since we define the category / type Program as a category mapping objects in MachineState to objects in StepSemantic, that whether or not a given functor is in the category Program can not, in general, be decided. In other words, fully general type checkers would be impossible to write.

We can refine our definition of "type" to match the commonly accepted use of the word in programming by defining it this way:

The category Judgement has two members called True and False.

A "type" is a functor that maps all earlier constructed objects to Judgement.

In other words, a "type" is an arbitrary predicate that can always be computed in finite time.
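Under that refined definition, a "type" is literally a total, finitely computable boolean predicate; a minimal Python sketch (the example predicates are my own illustrations):

```python
# A "type" as a predicate that always answers True/False in finite time.
def int32(x):
    return isinstance(x, int) and -2**31 <= x < 2**31

def nonempty_str(x):
    return isinstance(x, str) and len(x) > 0

# Membership is just evaluation; the category a type implicitly defines
# is {x | predicate(x) is True}.
print(int32(2**31 - 1))   # -> True
print(int32(2**31))       # -> False
print(nonempty_str(""))   # -> False
```

Because each predicate terminates on every input, type checking under this definition is decidable, unlike the category Program above.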

Conjecture: Every use of the word "type" in every programming language defined heretofore can be modeled as a "type" in the category-based formalism given above. (This is probably trivial to show by enumeration but there is probably a more elegant way to prove it.)

And, yes, every type implicitly defines a category (objects for which the given type predicate is true), and any "behavior" that can be proved for all objects of a given type applies to each object of that type.

(There are many interesting ways to further develop a foundation such as the above, by the way, including denotational semantics, reflective towers, and the axiomatic (and physically realistic) addition of categories defined as unique enumerations of random bits -- what Carl Hewitt calls "unbounded non-determinism".)

Sorry but unbounded nondeterminism is more powerful.

I don't mean to pick old scabs but Carl Hewitt was objectively correct that unbounded nondeterminism is more powerful than Turing machines (when both are physically realized / approximated).

The categorical sketch I give above, about what a programming language is and what a type is, suggests the following extension.

To the above we add a rule of construction that states a category can be defined as a unique, infinite sequence of random bits. As many such random categories can be constructed as desired. It is impossible to prove that the identity functor for a given random category is equivalent to the identity functor of any other.

A physically realized, conventionally defined Turing machine cannot print out successive bits of an uncomputable real number, even by chance.

A physically realized, conventionally defined language with random bit streams taken as given, can.

And, as we know, the laws of quantum mechanics guarantee what we already know, that it is physically possible to construct a machine that produces as output an unbounded sequence of random bits.

For real-world computing, therefore, a Turing machine, or any language with only bounded non-determinism, is objectively less powerful than a language with such random bit streams.

I don't think bits count unless they represent information.

So, given unbounded non-determinism and a machine that outputs an infinite number of random bits ... I don't care because the machine isn't outputting any information.

'bits' which we have no reason to believe are right or wrong or even related to anything else we know or want to know, contain no information. So it can possibly output the sequence of bits representing any functor? Okay... but in the same sense that an infinite number of monkeys might type the works of Shakespeare. Someone who didn't know Shakespeare's material could not learn anything about it by looking at the monkeys' output. Similarly, we don't learn anything about functors by reading a random output that might happen to match anything in the world including a functor.

So there are bits. But no information. I don't think that counts as being 'more powerful.'

information content of random bits

"So, given unbounded non-determinism and a machine that outputs an infinite number of random bits ... I don't care because the machine isn't outputting any information."

That premise is false. Counter-example: the measurement of the squared spin of a paired electron introduces information about the correlated spins of the two electrons, and that information about their spin does not exist in the observable history of the universe prior to the measurement. The distribution of these squared spins (always a 0 or 1) is reliably random in repeated experiments.

In computer science there are varieties of operators we could describe collectively as CHOOSE operators. Truly random bits have unique and useful information for implementing unpredictable-yet-fair CHOOSE operators.

John H. Conway and Simon B. Kochen's two papers on the "Free Will Theorem" contain the (interesting, fun) proofs about squared spin of paired electrons and similar QM measurements by analogy.

signal and noise

A 'CHOOSE' operator does not add useful information to a process. It adds noise, not signal.

Unless we're specifically interested in the spin of electrons, observing that state and inputting it to a process is not meaningful, informative, or domain relevant. At most, it is useful.

signal and noise

"It adds noise, not signal"

A manifest CHOOSE producer of bits is a signal whose probability of uniqueness is assured by fundamental physics. And as I mentioned, in the case of paired particles, the output of CHOOSE can tell you faster than the speed of light what a certain measurement of a physical quantity far away will yield.

So, repeatedly insisting that it is "noise" is objectively false in at least those two senses.

Irrelevant information is noise

In information theory, signal is meaningful information, not merely information. To be meaningful, it naturally must also be domain relevant. Which part of information is signal, and which is noise, is inherently rather subjective. For example, when we are trying to listen to a specific conversation at a loud cocktail party, that conversation is signal and the rest is noise, even if it could become signal if our intentions change.

Noise can be useful. People even create manageable models of noise, such as Perlin noise, for all sorts of use cases. Random bits can be used to construct a crypto key, or build a sim world. It just doesn't mean anything. Any pure entropy source, where bits have nothing to do with past or future and thus cannot be used to make useful analysis or prediction, is naturally nothing other than noise.

Further, the program using a probabilistic 'choose' operator doesn't even know where those bits are coming from: whether they come from a quantum device, are extracted from low-order bits of button-press and network-event timings, or are based on the sound of a silent fart is abstracted away.

Of course, you could develop alternative non-deterministic abstractions where quantum entanglement is explicitly supported, e.g. such that we can arrange or know that a choice here is coupled to a choice over there. Nobody is saying that quantum pairing is useless, just that it certainly isn't useful while hidden under a 'choose' abstraction.

re "irrelevant information"

Now, to try to rescue your position, you've abruptly and without elaboration introduced the category of "relevance" of information.

Beyond silly. Highly likely to be unique random sources are extremely relevant to many human purposes and it is hard to believe that knowledge about paired particles is useless.

I think you're out of gas here, no further fart joke intended.

signal and noise

For a good entropy source, every observation is independent of history or future or anything external. Even if you observed it forever, and applied all sorts of analysis algorithms and tools, you'll never extract any structured signal, never learn anything about the world relevant or otherwise. It's the purest random noise.

If you want to discuss potential uses of quantum entangled entropy sources (e.g. such that a choice here is coupled to a choice there), starting with a 'choose' abstraction is a terrible idea because the abstraction does not maintain any relationship about the coupling. You'll need different abstractions that are explicitly about entanglement.

the vast wasteland is yours

You have convinced me that LtU was successfully conquered by pointlessly combative trolls. Why you choose to harm the community that way is a mystery but also a fact.

Sigh.

I at least agree that this discussion with you was pointless at best, harmful at worst.

I know you're reading malicious intent and sophistry into every word I say. But from my perspective, I have genuinely attempted to explain where you err on the positions of 'unbounded non-determinism is more powerful than Turing machines' and 'adding entropy to a process is adding meaningful information (aka signal)'. Doing so is not trolling, but I doubt I could convince you of that. Perhaps you'll return in a few weeks and think you were the one who was pointlessly combative with all your ad-hominem attacks.

But people are wrong on the internet every day, no need to point it out and upset them. It's especially useless if they aren't even seriously reading the counterpoints. It's truly better to have a community of people making mistakes than no community at all. I thank you for reminding me.

I don't mean to pick old

I don't mean to pick old scabs, but Carl Hewitt was objectively correct that unbounded nondeterminism is more powerful than Turing machines (when both are physically realized / approximated).

Sure, we can prove many falsehoods by assuming 1 = 0.

And, as we know, the laws of quantum mechanics guarantee what we already know, that it is physically possible to construct a machine that produces as output an unbounded sequence of random bits.

If by "unbounded" you don't mean "countable", then I'm curious what laws of QM entail constructions like an "unbounded" sequence of bits, and how you prove that they are truly unbounded in physical reality, as opposed to any unboundedness merely being a artifact of the formalism.

"artifact of formalism" (re: I don't mean to pick old...)

A Turing Machine can be made as powerful as Actors by adding to the Turing Machine the initialization of some (unbounded) part of the tape to random bits. With *that* addition, the claim that Actors are more powerful would fall, and Actors would only have the advantage of being more physically realistic -- a cleaner model (one might argue) for practical use in the physical world.

But without that possibility of an infinite supply of random bits already on the tape, a TM is formally not as powerful as an Actor.

Infinite vs unbounded

We don't need infinite tapes of input. A finite but unbounded sequence is adequate for any terminating computation, and finite but unbounded is exactly the restriction on the number of non-blank cells on a Turing-machine tape.

We could easily model a tape of choices containing 1, 0, and blank, where reading a blank from the tape causes the machine to halt in an error state. This could model 'latent' input to a Turing machine where you want it (similar to ivars), i.e. continuing with extended input would be provably equivalent to restarting the computation with extended input. It's convenient to contemplate finite but unbounded input in the context of partial and latent inputs because it avoids the silly issue of asking for infinitely much of something merely to ensure we have 'enough' of it.
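As a toy sketch of this blank-halting choice tape (all names hypothetical, not from any particular formalism): reading past the supplied choices halts in an error state, and rerunning with an extended tape is equivalent to continuing where we stopped.

```python
# Hypothetical sketch: a computation consuming choice bits from a finite
# tape of 1, 0, and blank (None). Reading a blank halts in an error
# state rather than diverging, modeling 'latent' input: restarting with
# an extended tape is equivalent to continuing the halted run.

def run(choice_tape, n_steps):
    """Sum the first n_steps choice bits, or report how far we got."""
    total = 0
    for i in range(n_steps):
        if i >= len(choice_tape) or choice_tape[i] is None:
            return ('need-more-input', i, total)  # halted in error state
        total += choice_tape[i]
    return ('done', n_steps, total)

# Halt on an exhausted tape, then 'continue' by rerunning with more input.
assert run([1, 0, 1], 5) == ('need-more-input', 3, 2)
assert run([1, 0, 1, 1, 0], 5) == ('done', 5, 3)
```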

I still don't see how QM

I still don't see how QM guarantees that we can create an unbounded number of bits. In fact, I'd say it simply can't provide any such guarantee; once you account for thermodynamics, any such alleged proof is plainly unphysical.

For instance, you'd hit the heat death of the universe before reaching your unbounded number of bits, which means only a bounded number of bits can ever be needed for any physically realizable computation. Thus, any physically realizable Actor computation is necessarily realizable as a Turing machine. Thus, any alleged difference in computational power is an artifact of the formalism that doesn't have physically meaningful significance.

Maybe the proofs in one formalism are more concise, but this seems more like expressiveness as David has pointed out.

re "i still don't see how QM"

"For instance, you'd hit the heat death of the universe before reaching your unbounded number of bits" ...

Sure, and manifest Turing Machines exist in the same numbers as perpetual motion machines. Nevertheless, we use these limit-case or least-upper-bound models because they are handy ways to reason about machines we can make.

At issue in LtU vs. Hewitt (and I guess me) on this is a question about the two idealized abstractions of computing.

The addition of an axiomatic "CHOOSE" arrow / function / operator / whatever makes a world of difference. A faithful implementation of the classic Turing machine model lacks a CHOOSE. A faithful implementation of the Actor axioms includes a CHOOSE.

This has many practical implications already commonly practiced, such as using the physical reality of a CHOOSE operator to create bit-strings that are very likely unique over a given range of territory and time.

The addition of an axiomatic

The addition of an axiomatic "CHOOSE" arrow / function / operator / whatever makes a world of difference. A faithful implementation of the classic Turing machine model lacks a CHOOSE. A faithful implementation of the Actor axioms includes a CHOOSE.

We have proofs that computing with the reals is more powerful than Turing machines, e.g. hypercomputation, which can solve the halting problem. Of course, this gains us nothing in physical reality because even if the reals actually existed, we're limited by measurement precision to a countable subset of the reals.

From my recollection, Hewitt's alleged proofs that the Actor model is more powerful than Turing machines hasn't produced any comparable demonstration of increased computational power. Do you disagree?

As I mentioned before, I can see how non-deterministic choice might improve expressiveness, but I haven't seen any widely accepted proof that it increases power in the same sense that hypercomputation increases power.

If you do think it increases computational power, would you agree that this additional power is so subtle that it's difficult to see the difference? That seems to be the main conclusion I draw from the contentiousness of this and all of Hewitt's previous threads.

You seem to hint that this difference actually has practical implications, but I haven't seen any such implications actually elaborated upon. Rather than speak in generalities, I often find it useful to start with specific examples, so if you have something in mind that might be a good place to start.

relative power of Actors model

From my recollection, Hewitt's alleged proofs that the Actor model is more powerful than Turing machines hasn't produced any comparable demonstration of increased computational power. Do you disagree?

I agree that Hewitt has not claimed to have solved the halting problem. :-)

It is also true that computers with CHOOSE can rather trivially produce outputs for which there is no corresponding classical TM implementation. This is what is obscured in proofs that NDTM == TM.

If you do think it increases computational power, would you agree that this additional power is so subtle that it's difficult to see the difference?

Not at all, though I would concede that the conventional pedagogy of the Chomsky Hierarchy obscures this very thoroughly, so I can believe it is made hard (by habit) for some people to acknowledge.

You seem to hint that this difference actually has practical implications

In effect, something akin to arbiters pervades real existing computing systems, and it is one of the main computational phenomena that theorizing about distributed systems contemplates.

A faithful way to understand Hewitt's claims, in my understanding, involves observing his (really quite interesting) attention to certain histories of ideas. He has worked to chase out a lot of inherited ghosts (like Goedel's problematic proof) and describe many of the same things in a potentially more rigorous way that also suggests clearer physical intuitions about realizable computing.

It is also true that

It is also true that computers with CHOOSE can rather trivially produce outputs for which there is no corresponding classical TM implementation. This is what is obscured in proofs that NDTM == TM.

I want to focus just on this one claim since the rest seems to follow from this. Apologies if you've already addressed this elsewhere here, but the other threads are hard to follow given finite time.

I agree that having non-deterministic choice makes some programs trivial, like a program P that produces a random bit string. For any given bit string B of length N produced by CHOOSE, it seems trivially true that there exists a Turing machine T that can also produce B using input shorter than N. I don't think you're disputing this claim, which is one way of looking at equivalence.

The only comparison I can think of that escapes equivalence is if we further require that T be bisimilar to P for any random set of output lengths N. I don't think that any single T can satisfy this form of equivalence, even if there exists a T that can simulate any specific, finite set of output lengths. Is this the equivalence relation you're suggesting under which NDTM != TM?

If so, is this actually a meaningful equivalence failure? Just because they aren't bisimilar doesn't mean they aren't equivalent in every other way that matters.

re It is also true that (naasking)

In formal language theory there is a proof that says the set of languages that can be recognized by an NDTM is the same as the set of languages that can be recognized by a TM. That's what NDTM == TM means. Of course the proof is valid.

However, Turing gave examples of computing infinite sequences, such as a machine to compute the sequence "011011...".

Do there exist such infinite sequences that a TM could not compute? Yes, hence "On Computable Numbers".

Can a Non-Deterministic Turing Machine compute infinite sequences that a TM can not? Trivially.

For practical reasons and better intuition, though, Unbounded Nondeterminism is better than NDTM as a description of concurrency in the form of communicating processes.

Non-determinism is not more powerful

I find it convenient to model non-determinism as an algebraic effect, e.g. a `fork` effect returning a boolean. Algebraic effects are a good way to represent latent interactions between a computation and its host environment, yet also make it obvious that we could intercept these effects to read from a tape rather than use an intrinsic physical implementation.

Non-deterministic choice is essentially just another input to a program. Any non-deterministic Turing machine can be viewed as a deterministic two-tape Turing machine, where one tape is simply an unbounded sequence of non-deterministic choices. Multi-tape Turing machines are known to be equivalent to single-tape machines, in the sense that we could systematically convert to a single-tape machine with the same behavior.
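A minimal sketch of that reduction (illustrative names, not from any standard library): the 'non-deterministic' program is written against an abstract choose operation, and running it deterministically just means feeding choices from an explicit second input.

```python
# A nondeterministic program written against an abstract 'choose', then
# run deterministically by supplying the choice sequence as a second
# input tape. Names are hypothetical, purely for illustration.

def program(choose):
    # Build a 3-bit string, each bit decided by a call to 'choose'.
    return ''.join('1' if choose() else '0' for _ in range(3))

def run_with_tape(prog, tape):
    it = iter(tape)
    # Deterministic: every 'choice' is just the next symbol on the tape.
    return prog(lambda: next(it))

assert run_with_tape(program, [True, False, True]) == '101'
assert run_with_tape(program, [False, False, False]) == '000'
```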

You talk about 'real' non-determinism where the source presumably involves the physical layer. But the study of computational power, i.e. which functions a machine can compute, has never been concerned with the provenance of program inputs.

re "is not more powerful"

"I find it convenient to model non-determinism as an algebraic effect, e.g. a `fork` effect returning a boolean."

In Actor semantics that is called an "arbiter" and the absence of anything like it in Turing Machine models is why Actors are more powerful than Turing Machines.

The arbiter abstraction is interesting to me because it is physically realistic.

Not more powerful

Algebraic effects don't affect 'power' (per Church and Turing). At most they affect 'expressiveness' (per Felleisen's definition).

Quantum Arbiters

An arbiter might be viewed as an abstract, unbounded, non-deterministic, latent input to your program. We might usefully tease these properties/concerns apart. For example, the observation that a computation with unbounded input can result in unbounded output isn't a very interesting one, yet is independent from whether the input is abstracted, latent, or happens to represent non-determinism.

IIUC, you find the arbiter interesting because a non-determinism source could feasibly be integrated directly, physically into a computing machine in ways that external inputs cannot. Hardware can have high-entropy sources built-in. Via QM, this entropy can arguably be 'true' random by our current comprehension of physics. Okay so far.

But the next step to "oh, also it's more powerful" - in the sense that we can extend the set of computable functions beyond the Turing machine - is the troublesome one. We cannot actually compute most real numbers using a discrete input source because computability of functions is, by definition, limited to terminating computations. And any terminating computation will only make a finite (but unbounded) number of observations on input, whether it's from a built-in source or not.

A better question is whether a true quantum computer would be more powerful than a Turing machine: if we never collapse a quantum state to form a discrete observation, this state is perhaps something that Turing machines (which are limited by finite, discrete alphabets) could not precisely represent. I don't know enough about QM to answer this question, but I doubt that any quantum computer whose state we configure with a finite, discrete program will truly escape its discrete foundations.

re quantum arbiters

I'm not sure how to reply to several of your replies to me because of a single thread that runs through all of them: the assumption that "halting" and "computable" are "by definition" the same.

In the conventions I am used to, a decimal representation of the constant PI is computable even though a process that does this computation does not halt.

Similarly, a Penrose tiling is computable, but a program that does so does not halt.

In contrast, what is conventionally called an uncomputable real lacks any algorithm that can compute it -- unless we include a CHOOSE operation / arrow / function / whatever.

computability and divergence

A decimal representation of an approximation of pi is computable. A non-decimal representation of pi, as a program that generates a few digits paired with the next program, is computable (cf. codata). But a perfect decimal representation of pi is not computable. An infinite result cannot be produced by any algorithm.

If you want to make an exceptional claim like something is more powerful than Turing machines, choosing your own conventions introduces a high risk of equivocation.

Consider: if you accept an ongoing computation as having successfully computed the infinite result it is forever generating, then I can show you a deterministic algorithm that computes all of the real numbers.

Use of a non-deterministic 'choose' operator cannot improve computability (setting aside the implicit selector input): for any algorithm that makes non-deterministic choices, there is an obvious deterministic algorithm that computes and observes all possible results.
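To make that observation concrete, here is a toy sketch (assuming, for simplicity, a program that consumes a known, bounded number of choice bits; the unbounded case needs dovetailing over increasing tape lengths): deterministically enumerating every choice sequence yields the full set of possible results.

```python
from itertools import product

def program(choices):
    # A terminating 'nondeterministic' program: consumes exactly two
    # choice bits from its choice sequence and returns their sum.
    it = iter(choices)
    return next(it) + next(it)

def all_results(prog, n_bits):
    # Deterministically enumerate every choice tape of length n_bits
    # and collect every result the nondeterministic program could give.
    return {prog(bits) for bits in product((0, 1), repeat=n_bits)}

assert all_results(program, 2) == {0, 1, 2}
```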

re "computability and divergence"

One can construct a computation that for any Nth digit of pi, will eventually produce that digit, resources permitting. That is the conventional use. You are just doing sophistry for no gain.

Nth digit of pi

If you're saying that "a decimal representation of the constant PI is computable" means we "can construct a computation that for any Nth digit of pi, will eventually produce that digit", then you're being very sloppy with your use of language.

To me, these have two very different meanings, and two different truth values. The program is obviously not a decimal representation of pi, nor would it ever produce one when run.

This is relevant, in a sense. You cannot share a representation of a non-deterministic program to share a real number. Thus, we cannot claim that a non-deterministic program represents a real number the same way we might say a deterministic program represents pi. A non-deterministic program instead represents a whole set of numbers.

If language and conventions are interfering with proving your claim that non-determinism enhances computability, then I suggest you use a more formal language and take your argument to a math forum. I still recommend you be careful with definitions. Extraordinary claims require extraordinary proof.

not here for your trolling way

"then I suggest you use a more formal language "

These are fighting words because I have expressed myself just fine. I'm out. Have your last words.

It is not a sloppy usage of

It is not a sloppy usage of language; it is the standard meaning of that terminology, as described here: https://en.m.wikipedia.org/wiki/Computable_number. I'm using wiki as a convenient source on a phone, but any basic textbook on computability will show the same usage. If we can output an approximation to within an arbitrary epsilon from a terminating computation, then the number is computable. Omega (Chaitin's constant) is an example of an uncomputable number.

After some superficial consideration I can't work out if an infinite (aleph-0) tape would allow calculation of Omega although from a vague memory of a talk on hyper-computation I suspect it would not.

Similar error.

There is a huge difference between, e.g. 'pi is computable' and 'the decimal representation of pi is computable'.

If pi is an output of a computation, we're free to use a finite, terminating program as our precise representation of pi. For example, a program that returns the Nth decimal of pi for any natural number input N. A program can return this finite representation. A program could also receive this representation of pi as a parameter on a finite input tape.
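A small illustration of such a finite, terminating representation (using sqrt(2) rather than pi only to keep the arithmetic elementary): the function below is a finite program, yet it serves as an exact representation of an infinite decimal expansion.

```python
from math import isqrt

def sqrt2_digit(n):
    """Nth decimal digit of sqrt(2). A finite, terminating program that
    acts as an exact representation of the whole infinite number: any
    digit is recoverable on demand via exact integer arithmetic."""
    return isqrt(2 * 10 ** (2 * n)) % 10

# sqrt(2) = 1.41421356...
assert [sqrt2_digit(n) for n in range(1, 9)] == [4, 1, 4, 2, 1, 3, 5, 6]
```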

If you instead claim the decimal representation of pi is the output of a computation, you've constrained your choice of representations. A program that returns the decimal representation of pi would somehow need to return an infinite decimal representation and then still terminate.

Before you specify a representation, you naturally have more freedom to choose. We can find finite representations for many infinite objects, though via the pigeonhole principle and the countability of discrete program representations we can prove that *most* infinite things cannot have finite representations. Thus, computable reals are a tiny subset of the reals.

In any case, the key observation is that computable reals are 'computable' because they have a finite, exact, and complete representation.

re huge distinction

"the decimal representation of pi is computable'."

Yeah, it's a conflation you falsely accuse others of making, presumably at this point on purpose.

The hair that you are trying

The hair that you are trying to split is something that you have added into the discussion. The terminology that you are using seems to be slightly different from the standard definitions. It seems that the point you are making arises only because of this difference in terminology.

Pi is a computable real. There is a finite length description of a process that terminates that will output any finite substring of pi. This is the standard definition of computable and the standard definition of a computable real.

It seems that you have added the constraint that a computable real needs an algorithm to output all of its digits. This is not a standard definition and it leads into the point that you are picking at. Where has this non-standard definition arisen - could you provide some sort of reference?

Some things that would worry me about this alternative definition would include the fact that zeros are also digits, and as it stands no real (not even one that is an integer) could be output fully, so it does not seem like a useful definition.

This hair seems to have arisen from the earlier (more interesting) point about whether or not adding unbounded non-determinism adds power. Of course it does! One simple example would be found in the busy beavers, where a larger class of numbers would be output than from a machine without a similar mechanism.

Where I divert in argument from Thomas (and from Carl, if I remember correctly) is that I doubt that unbounded nondeterminism is physically realizable. It is a magical definition, and it seems unlikely to be realizable. That is a tired old argument, roughly equivalent to the addition of the right kind of fairness to a probabilistic TM (also probably non-realizable). Hence the diversion into QM.

All of this has been hashed over many times before, and the arguments are all still there for anybody who cares to go back and look.

The more interesting point seems to have been missed along the way: Thomas hand-waves a lot of interesting details (very explicitly and deliberately I must add) and those missing details are far more interesting than rehashing the tired old argument about nondeterminism.

I could not find any treatment of Conway's combinatorial games as the basis for a (computational) calculus. There are some obvious but uninteresting approaches: e.g. there is a bijection from (restricted) games to numbers, and a bijection from numbers to programs (the obvious Gödel numbering). So that would permit execution, but a lot of the points that Thomas makes would be tricky to reason about.

Or there would be a more natural expression, closer in spirit but not necessarily in form, to lambda calculus. But that raises a lot of interesting questions such as what is the analogue of variable capture that would be natural in this setting and how would it look?

From the comments in Thomas' original post I suspect that he has looked at this in more depth and it would be interesting to see what he has come up with. The connection to the Surreals is hinted at very broadly and it would be very interesting to see how they affect the calculus that would be defined.

Not splitting hairs, just poorly explaining fundamental errors.

Pi has exact, finite representations, e.g. as programs that output digits. Thus, pi is computable.

A mistake I'm seeing in this thread is focus on the flashy 'outputs digits' part to the extent of ignoring the fundamental 'finite representation' part.

Finite representations can be inputs or outputs of other computations. Having finite representation is what makes a subset of reals computable and communicable.

Regarding non-termination with *incremental* outputs. We can always view this as codata: at any point in time, we have a pair (finite output so far, finite program continuation). If the program is deterministic, every such pair can be taken as yet another exact, finite representation of the same number.
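A toy sketch of this codata view, using 1/3 = 0.333... so the generator stays trivial: at any point in time, the digits emitted so far plus the generator itself form a finite, exact representation of the same number.

```python
def thirds():
    # Deterministic incremental generator for 1/3 = 0.333...
    # At every step, the pair (digits so far, this generator) is a
    # finite, exact representation of the same number.
    while True:
        yield 3

def observe(gen, k):
    # Take k more digits from the ongoing computation.
    return [next(gen) for _ in range(k)]

g = thirds()
prefix = observe(g, 4)          # finite output so far
assert prefix == [3, 3, 3, 3]
assert observe(g, 2) == [3, 3]  # the continuation still represents 1/3
```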

This codata view is what supports the 'convention' Mr. Lord is misapplying to non-deterministic programs.

A non-terminating non-deterministic program can begin to output an infinite sequence of digits. Those digits may be viewed as representing a number. But the non-deterministic program continuation cannot be said to represent the remainder of the number being generated. Instead, it represents a set of possible numbers. The only representation of a real number in this scenario would be a final, impossible, infinite representation. A representation of a real number is never computed.

We can compute finite representations of some infinite objects. But infinite representations are not computable, and non-deterministic programs are not representations of the individual objects they output.

"point about whether or not adding unbounded non-determinism adds power. Of course it does!"

Except it doesn't. What non-determinism actually adds is an implicit input requirement.

With bounded non-determinism, we can predict an upper bound on number of bits required for this input. With unbounded non-determinism, we cannot predict (so incremental input will be more convenient), but it will still be finite for any terminating computation.

re "fundamental errors"

"A mistake I'm seeing in this thread is focus on the flashy 'outputs digits' part to the extent of ignoring the fundamental 'finite representation' part."

Nobody who you are, in effect, trolling has made the mistake you claim. If you respect this space, I think you should drop it entirely and read more carefully in the future. I do not have the sense that you respect this space.

Your argument is based on

Your argument is based on this mistake. It's just hidden by those 'conventions' you're blindly misusing.

Using conventions outside their original scope (in this case, *deterministic* programs with incremental output) can only be done safely if you prove from fundamentals that they'll work in the new conditions. Instead, you've repeatedly hand-waved the fundamentals, applied conventions outside their scope, and made some mistakes.

Even for non-deterministic programs, actual infinite output is impossible. And the 'convention' of using an incremental generator of infinite output as a lightweight proof of computability is not valid in context of non-determinism.

I do not have a sense you'll ever review your own logic or conclusions (at least not based on my words), nor that you'll actually attempt to engage my reasoning. So, you aren't my audience here anyways.

I suggest you drop your repeated disparaging comments, claims of trolling and sophistry (which I believe are undue, but won't engage lest we reduce the thread to name-calling), and other meta-arguments entirely.

Since you consider me a troll, I also suggest you research independent sources for why your very 'obvious' argument has never been accepted historically. You certainly aren't the first to contemplate the intersection of non-determinism and computability.

You're just making stuff up, dmbarbour

"Since you consider me a troll, I also suggest you research independent sources for why your very 'obvious' argument has never been accepted historically."

That's divorced from reality.

Seriously, go research.

Just a casual entry of keywords "non-determinism church-turing" into google scholar would be a good start.

The idea that non-determinism is more powerful is an intuitive one, even if incorrect. Challenges to the Church-Turing thesis come from every other crank with an interest in non-determinism, and there are many related papers. Even some of the specific arguments you're making were presented by Hewitt decades ago. But don't take my word for it. Do your own research.

Seriously, learn to read others more carefully.

As I said earlier, you win the site. Nobody said anything here to contradict the Church-Turing hypothesis. Nobody benefits from your condescension and obfuscation. This is the same shit that happened to Hewitt and it is a disgrace to a formerly great forum. It's all yours buddy. You are king of the worthless hill where other people used to have conversations that weren't so stupid.

Liar.

Your words: "Carl Hewitt was objectively correct that unbounded nondeterminism is more powerful than turing machines (when both are physically realized / approximated)."

Hewitt courageously challenged the Church-Turing thesis across several forums including LtU, despite being rejected everywhere.

The parenthetical caveat doesn't change any outcome. Where our non-deterministic choices come from isn't relevant, only whether we permit and observe unbounded non-deterministic choice inputs.

I think I need a break from you to cool my head. Since I win the site, how about you leave for three days. Will be good for both of us.

It would seem that some

It would seem that some sense is emerging from the swamp :) It would also appear that either the html formatting for comments is broken or that I can no longer remember how to format comments on LtU properly...

[snipped quite reasonable lead-up]

A non-terminating non-deterministic program can begin to output an infinite sequence of digits. Those digits may be viewed as representing a number. But the non-deterministic program continuation cannot be said to represent the remainder of the number being generated. Instead, it represents a set of possible numbers. The only representation of a real number in this scenario would be a final, impossible, infinite representation. A representation of a real number is never computed.

Here we arrive at the crux of the disagreement in this thread. As always when people have been arguing back and forth it has been obscured by the route that we have taken to get here, rather than highlighted.

There is nothing in your quote that I would disagree with, it is technically correct. I would probably split some hairs myself on an aspect of it: you are looking at the computation from a particular perspective so you have welded what you have written around whether or not the program can output a particular real. I would not look at it this way. So we do not arrive at the same conclusion. Instead I would arrive at the conclusion: this is not what I understood from reading Thomas' original comment. So rather than dive in with a counter-argument, personally I would try a different approach - ask him for a clarification on what he intended.

Except it doesn't. What non-determinism actually adds is an implicit input requirement.
With bounded non-determinism, we can predict an upper bound on number of bits required for this input.

This is not an argument against what I wrote, this is an argument against something different. We could argue the point in a separate thread, but to be honest we will not add anything to the body of comments already written.

Getting back to the point, the relevant quote from Thomas would appear to be this:

To the above we add a rule of construction that states a category can be defined as a unique, infinite, sequence of random bits. As many such random categories can be constructed as desired. It is impossible to prove that the identity functor for a given random category is equivalent to the identity functor of any other.
A physically realized, conventionally defined turing machine can not print out successive bits of an uncomputable real number, even by chance.
A physically realized, conventionally defined language with random bit streams taken as given, can.

So... there is a lot to unpack in there. It is not entirely clear to me what the intended meaning of that quote would be. I can see a few possible interpretations, that I would probably argue with in different ways, but I have no way of knowing which one is intended or even if I have understood the point that Thomas was making.

It seems late in the thread to make a genuine inquiry, but my point of confusion would probably start with the word unique. It is an interesting and exact word that seems to sit quite incongruously in the middle of that definition. I wonder in which sense a random stream of bits can be unique. Certainly a unique stream of bits would seem to have some kind of implications on the output of other unique random bit streams. A kind of global hidden state that would relate them in some way that does look somewhat similar to Carl's magic primitive. One starting point would be to find out (if Thomas still cares enough to explain) how such a unique stream of random bits would be realized.

utility and uniqueness

you are looking at the computation from a particular perspective so you have welded what you have written around whether or not the program can output a particular real

It is Mr. Lord who argues that a non-deterministic process has potential to output or represent an uncomputable real, thus proving greater power. I'm not hung on the issue; it's just very contextually relevant to arguing against his position.

If we were only interested in finite representations of sets that include uncomputable reals (instead of computing a particular number), this isn't a difficult problem. We could trivially do that with `[0.0, 1.0)`. Non-deterministic programs are also an acceptable way to represent such a set.

A non-deterministic 'choose' operator doesn't generally imply random entropy-based implementations. Choose can be implemented maliciously (e.g. try to find a set of choices that result in program error, useful for model-testing and debugging) or benignly (e.g. find a set of choices that satisfy a constraint model; genetic programming based on static choices and a fitness heuristic), or to represent a set (e.g. an implicit set of reachable states), or to abstract over unknown but not-actually-random variables (e.g. network latency, computation time, scheduler). When used for entropy or random input, non-determinism is mostly useful for simulations and crypto.
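To make the "malicious" model-checking reading of choose concrete, here is a minimal Python sketch (the names `run_all`, `choose`, and `prog` are hypothetical, not from any particular library): non-deterministic choice implemented with no entropy at all, by exhaustively replaying decision prefixes.

```python
def run_all(program):
    """Interpret a non-deterministic program by exploring every outcome of
    'choose' -- the model-checking reading of choice, with no entropy at all.
    'program' is a function taking a choose(options) callback."""
    results = []
    stack = [[]]                     # decision prefixes left to explore
    while stack:
        prefix = stack.pop()
        trace, i, pending = list(prefix), 0, []

        def choose(options):
            nonlocal i
            if i < len(trace):       # replaying a recorded decision
                c = trace[i]
            else:                    # fresh choice point: take branch 0,
                for alt in range(1, len(options)):
                    pending.append(trace[:i] + [alt])  # queue the siblings
                trace.append(0)
                c = 0
            i += 1
            return options[c]

        results.append(program(choose))
        stack.extend(pending)
    return results

# Every execution of this program is enumerated: 1+10, 1+20, 2+10, 2+20.
def prog(choose):
    return choose([1, 2]) + choose([10, 20])
```

The same `choose` interface could just as well be backed by a constraint solver, a fitness heuristic, or an entropy source; only the interpreter changes.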

(Relatedly, it's perfectly legal under the definition of the actors model to have a fully deterministic implementation, i.e. non-determinism is an abstraction over the scheduler, not the same as an entropy source.)

Ignoring the question of 'more power', non-determinism still has a lot of potential utility.

my point of confusion would probably start with the word unique. It is an interesting and exact word that seems to sit quite incongruously in the middle of that definition

This certainly isn't the first time this hypothesis was raised on LtU, what with Hewitt once stopping by every couple years when he was kicked from his other forums. Perhaps not even the first time by Mr. Lord.

IIRC, the issue of uniqueness was discussed, and it was simply explained that it isn't provably unique, just that the only valid proof of equivalence would be a proof of equivalent reference. Also, different streams will be cryptographically unique in practice. Also, given one random stream, we can trivially partition it (e.g. evens and odds).
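The even/odd partitioning mentioned parenthetically can be sketched in a few lines (names hypothetical; a seeded PRNG stands in for a true entropy source):

```python
import random

def bit_source(seed=None):
    """Stand-in for a single shared stream of random bits."""
    rng = random.Random(seed)
    while True:
        yield rng.getrandbits(1)

def partition(stream, n):
    """Derive two bit streams from one by routing alternate draws:
    the 'evens' and 'odds' of the original sequence. Each derived
    stream is still uniformly random."""
    evens, odds = [], []
    for i in range(2 * n):
        (evens if i % 2 == 0 else odds).append(next(stream))
    return evens, odds
```

Any countable number of derived streams can be split off the same source this way (route draw i to stream i mod k).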

IIRC, linear/unique types were raised in efforts to address the problem that the run-state of a non-deterministic program does not constitute a representation of its future output the way the run-state of a deterministic program does, and thus cannot be accepted as proof of computability. But this is sleight of hand. Instead of solving the problem of computable representation, it just abstracts access to choice behind linear bearer tokens and passes the problem to another layer.

re busy beavers

This is not an argument against what I wrote, this is an argument against something different

I wasn't clear. I mean that literally the *only* thing non-determinism adds to a program is an input requirement. Nothing beyond this. A non-deterministic choice is essentially an input fetch. All the stuff about how non-determinism might be implemented with a quantum entangled device is interesting, but isn't essential or assumed.

Adding inputs to an algorithm can add variability to its output, but it won't influence computability. Termination or progress or halting on insufficient input is possible regardless of source. Inputs of almost any type may be unbounded.

The only benefit non-determinism has over other input sources is that it's so unstructured; almost anything can provide a usable input.

You mention busy beaver, which I hadn't encountered before. It does have an NDTM extension, but it isn't random. It seems to be based on the idea of finding the largest terminating result for any choice input.

Uncomputable real numbers and non-determinism.

A general point about how the notion of an uncomputable real number does not have much significance. A Turing machine is not strictly physically realizable because, amongst other things, it has an infinite tape. As it only exists in a mathematical universe of discourse, we could drop the convention that an input tape must have a finite length of non-blank symbols. A real number in the interval [0,1) could then be represented as input by an infinite sequence of bits.

In the mathematical universe of discourse, if you claim to have a specific irrational number in mind, then you must at least be able to present a Cauchy sequence of converging rational numbers, specifically some function f: N -> Q, where N is the set of natural numbers and Q is the set of rationals. If that is uncontroversial in that universe, then what is controversial about being asked to also present a simpler function g: N -> {0,1}, i.e. a tape of an infinite sequence of bits representing that real? There is clearly a single Turing Machine with a finite set of instructions which can print out the first n bits of any irrational number supplied on the first input tape, with n supplied on the second input tape.

Uncomputable real numbers only exist, if you insist that Turing input tapes are finite, which is an arbitrary restriction in a boundless universe of discourse.
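The "single finite machine" claim is easy to sketch: given any real in [0,1) presented as an oracle g: N -> {0,1} (the infinite input tape), one fixed, finite procedure prints its first n bits. A Python sketch (names hypothetical), using sqrt(2)-1 as an example oracle computed with exact integer arithmetic:

```python
from math import isqrt

def sqrt2_frac_bit(i):
    """Oracle g: N -> {0,1}: bit i of the binary expansion of sqrt(2) - 1,
    computed exactly via floor(2^k * sqrt(2)) = isqrt(2^(2k+1))."""
    k = i + 1
    return (isqrt(1 << (2 * k + 1)) - (1 << k)) % 2

def first_n_bits(g, n):
    """The single, finite 'machine': prints the first n bits of any real
    in [0,1) supplied as an oracle tape g."""
    return [g(i) for i in range(n)]
```

`first_n_bits(sqrt2_frac_bit, 8)` gives the leading bits 0,1,1,0,1,0,1,0 of sqrt(2)-1 ≈ 0.4142. The point is that `first_n_bits` itself is fixed and finite; all the "infinite" content lives on the supplied tape.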

Strong non-determinism is a chimera: any notion of non-deterministic computation you care to present can be deterministically emulated using suitable inputs, as already suggested here by dmbarbour. It is puzzling why there is such a large Process Algebra industry.

re: "Uncomputable real numbers only exist, if you insist that Tu

"Uncomputable real numbers only exist, if you insist that Turing input tapes are finite,"

That is exactly correct. If we extend traditional Turing Machine models with a tape that is pre-initialized, at least in part, with an infinite number of random bits, then the Turing Machine and an Actor system are of equivalent power -- though it is not hard to see how the Actor formalism can produce simpler proofs, greater physical intuition, and so on.

In fact, it seems to me we can generalize starting from the observation that "an infinite tape of random bits" does not exist in any traditional constructive / intuitionistic mathematical theory. We can add to rules of construction a limited axiom of choice that (in effect) allows the arbitrary construction of countably infinite truly random sequences of bits and the resulting more powerful theories should still be considered constructive theories.

re: “a limited axiom of choice”

Thanks for your comment. It reminds me of an observation made in a seminar on random number generation, where the speaker said in half jest: “anyone who presents a truly random number generator has made a pact with Satan”. I agree that an irrational number supplied on an infinite tape in the boundless universe is effectively truly random. Constructive theories are nice, and I would like to know more about Actors, though my outlook is oriented to synchronous determinism (the planet’s surface can already be synchronised to nanosecond precision, thanks to our friends in high frequency stock trading). Thanks also for the mention of arbiters earlier, which were new to me — they are a cool way of connecting devices out of sync with each other. I do not believe however that they are optimal parallelism-wise, or necessary to solve that problem.

re re limited axiom of choice

My main interest is in the practical construction of systems that are useful to humans, or the conceptualization of systems that can explain natural phenomena, both of which benefit from Hewitt's elaboration of bounded non-determinism. He was treated poorly on this site for saying so himself but that is water under the bridge, I suppose.

Alternative to Turing Machine

I forgot to mention that the Turing Machine is a first generation formal model which, along with the lambda calculus, does not support parallelism, concurrency, or RAM well. I presented a Turing-computable alternative called the Synchronic B-Ram which I claim does.

Representability =/= Computability.

Pi is not an uncomputable number. Computing its exact decimal representation is a nonterminating computation, but there are other ways to express the value itself, in particular as the one-digit symbol π. There are calculations for it which contain no steps that are necessarily infinite unless done in a positional number system, though most of them involve treating it as an axiomatic nodal number.

Likewise all the square and cube etc. roots that are irrational can be exactly represented in an exponential number system. sqrt(2) is the same as 2^0.5 after all, and if our numeric representation alphabet is integer^power there's nothing 'uncomputable' about reaching that result.
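A toy sketch of that integer^power idea (the class and method names here are hypothetical): represent sqrt(2) exactly as the pair (2, 1/2) and let multiplication add exponents; squaring then collapses to the exact integer 2 with no approximation anywhere.

```python
from fractions import Fraction

class Pow:
    """A numeral of the form base ** exponent with an exact rational exponent."""
    def __init__(self, base, exp):
        self.base, self.exp = base, Fraction(exp)

    def __mul__(self, other):
        # Toy version: only products with a matching base.
        assert self.base == other.base
        return Pow(self.base, self.exp + other.exp)

    def as_integer(self):
        """Exact integer value, when the exponent collapses to a whole number."""
        if self.exp.denominator == 1 and self.exp >= 0:
            return self.base ** self.exp.numerator
        return None

root2 = Pow(2, Fraction(1, 2))        # sqrt(2), represented exactly
assert (root2 * root2).as_integer() == 2
```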

More generally, any value such as pi, e, etc, which is a definite value to which an approximation can be iteratively refined in a deterministic way (the kind of number we typically express as a series), has a finite representation by Kolmogorov's theorem.

If the number system in use does not treat it as a nodal number, then its finite representation is a statement of its algorithm, but it is still a finite representation of a definite value, and can be treated in a numeric representation as a nodal number that needs no further computation to express.

Just wanted to inject into this discussion the reminder that computability of the representation of a number in a particular kind of numeric representation system does not count as the 'computability' of the number itself.

So I can accept without too much difficulty the notion that this kind of number can be the result of, or an input to, a finite, terminating computation and expressed in finite space, in principle; irrationality does not mean incomputability.

But you have to unpack that; you can't just say you count such numbers as being 'computable' without pointing out that their widely-accepted 'uncomputability' is an artifact of typical numeric representations. If you make a statement known to be false under conventional axioms, you are working in a framework where one or more of those axioms does not hold and you have to say which.

However, this does not in any way involve, invoke, or take advantage of non-determinism, and pointing at nondeterminism as a way out of it does not seem relevant.

Yes.

We could represent pi as codata - a program that, when invoked, outputs a few digits of pi and the next program.

An exact decimal representation of pi is not computable, but there are many other options to represent pi. It's useful to make this distinction.
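A concrete instance of the codata view is Gibbons' unbounded spigot algorithm: a generator that, on demand, yields the next decimal digit of pi, its suspended continuation playing the role of "the next program". A Python sketch:

```python
def pi_digits():
    """Gibbons' unbounded spigot: lazily yields the decimal digits of pi.
    Each resumption of the generator is 'the next program' in the codata view."""
    q, r, t, k, n, l = 1, 0, 1, 1, 3, 3
    while True:
        if 4*q + r - t < n*t:
            yield n                       # this digit is now certain
            q, r, n = 10*q, 10*(r - n*t), (10*(3*q + r))//t - 10*n
        else:
            q, r, t, k, n, l = (q*k, (2*q + r)*l, t*l, k + 1,
                                (q*(7*k + 2) + r*l)//(t*l), l + 2)
```

Pulling on the generator produces 3, 1, 4, 1, 5, 9, ...; it never terminates on its own, yet the finite function definition is a precise representation of pi.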

Not sure that anyone needed the clarification, tho.

re "not sure that anyone needed the clarification, tho"

Although this site is esoteric and likely has a modest audience, it is reasonable and probably helpful to assume that there are young learners reading (now or in the future). It's OK to rehearse well known things, especially when the well known things expose unresolved, even open-ended questions of interest. It's also a good way for more experienced practitioners to drag one another back to earth, every now and again. ;-)

re representability and computability

One might reasonably and usefully say (and many do) that a finite representation of a non-terminating computation that will eventually produce any desired digit of the decimal expansion of pi (or what have you) *is* a precise representation of pi itself. I think this has many practical applications but even if one contemplates it only for ontological purposes, keep in mind that no known physics compels us to believe that the universe we experience contains only bounded computations.

non-terminating computation of pi

a finite representation of a non-terminating computation that will eventually produce any desired digit of the decimal expansion of pi (or what have you) *is* a precise representation of pi itself

True. Given a non-terminating program with incremental output and a progress guarantee (i.e. such that we always produce more output after finite steps), we can produce a terminating program that outputs a requested range of the output. To do so, write an interpreter for the first program that halts when sufficient output is produced.
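A sketch of that wrapper (names hypothetical): run the possibly non-terminating producer only until enough output exists, then halt.

```python
from itertools import count, islice

def take(n, stream):
    """Terminating wrapper around a non-terminating, incrementally
    producing program: halt as soon as n outputs have been produced."""
    return list(islice(stream, n))

def naturals():
    """A non-terminating program with incremental output and progress."""
    for i in count():
        yield i

assert take(5, naturals()) == [0, 1, 2, 3, 4]
```

The progress guarantee is what makes `take` total: each requested output arrives after finitely many steps of the wrapped program.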

Unfortunately, this observation doesn't generalize to non-deterministic computations.

Nondeterministic =/= information-free

I categorically reject the notion that the ability of a nondeterministic process to produce any possible sequence of bits as output is meaningful or useful. Such bits, given that we have no reason to believe any particular thing about them, are useless noise whether or not we have defined a language according to which some subsequence of them happen to correspond to something useful. The infinite number of monkeys banging away at an infinite number of typewriters tells us nothing about the works of Shakespeare that might distinguish it from the works of Fitzgerald or Austen or Huxley or Stowe.

That said, nondeterminism as input can be useful when combined with a mathematical model for calculating something even if we can't derive a deterministic solution for it. Monte Carlo approximation for example can produce and refine an estimate of a definite integral of a function we don't know how to integrate.
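For instance, a minimal Monte Carlo estimate of a definite integral, here one whose exact value (pi/4) lets us check the approximation; the function names are hypothetical:

```python
import math, random

def mc_integral(f, a, b, n, rng):
    """Estimate the definite integral of f over [a, b] by averaging f at
    n uniformly random sample points: (b - a) * mean(f(x_i))."""
    total = sum(f(rng.uniform(a, b)) for _ in range(n))
    return (b - a) * total / n

# The quarter-circle integral of sqrt(1 - x^2) over [0, 1] is pi/4,
# so four times the estimate approximates pi.
rng = random.Random(0)
est = 4 * mc_integral(lambda x: math.sqrt(1 - x*x), 0.0, 1.0, 100_000, rng)
```

The estimate converges at the usual O(1/sqrt(n)) rate regardless of whether we know how to integrate f symbolically.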

Nondeterminism can be useful when combined with a methodology that preserves and refines initially random estimates according to a success criterion, for deriving systems that carry out tasks we can't specify formally, like the way an Image Search can use Artificial Neural Networks to distinguish spaniels from beagles.

This kind of stochastic calculation is encapsulated in some types. These represent useful data objects even though their products may not be computable by any formal, deterministic method we know.

They can be packaged together with a seeded pseudorandom number generator to form a deterministic computation, but in this case determinism is strictly inferior to randomness - because, given a *fixed* sequence of "random" inputs that happens not to fulfill the (usually inarticulable) properties required, these stochastic calculations can fail. But given truly random input, it would seem to be only a matter of time before these stochastic calculations succeed.
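The "packaged together with a seeded pseudorandom number generator" point is easy to demonstrate (names hypothetical): with the seed fixed, the whole stochastic computation becomes a plain deterministic function, replayable run after run.

```python
import random

def randomized_trial(seed):
    """A 'stochastic' computation packaged with a seeded PRNG: with the
    seed fixed, it is a deterministic function, replayable run after run."""
    rng = random.Random(seed)
    deck = list(range(52))
    rng.shuffle(deck)
    return deck

assert randomized_trial(7) == randomized_trial(7)   # same seed, same 'random' run
assert randomized_trial(7) != randomized_trial(8)   # different seed, different run
```

The fixed-seed version inherits exactly the failure mode described above: if the pseudorandom sequence for that seed happens to be a bad one, every replay fails the same way.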

So randomness does have practical value and types requiring random input have practical value. But it's hard to relate that value to any reasonable conclusion about what is and is not computable. A stream of random or pseudorandom input would seem to have no value outside of stochastic methods. We turn to such methods in the absence of an ability to produce proof that the result is in fact correct. And in the absence of an ability to prove that something has in fact been computed, it's hard to assert computability.

Prefer to separate non-deterministic and random.

Per earlier post:

A non-deterministic 'choose' operator doesn't generally imply random entropy-based implementations. Choose can be implemented maliciously (e.g. try to find a set of choices that result in program error, useful for model-testing and debugging) or benignly (e.g. find a set of choices that satisfy a constraint model; genetic programming based on static choices and a fitness heuristic), or to represent a set (e.g. an implicit set of reachable states), or to abstract over unknown but not-actually-random variables (e.g. network latency, computation time). When used for entropy or random input, non-determinism is mostly useful for simulations and crypto.

For terminology with Turing machines, random choice is closer to a "probabilistic" Turing machine than a "non-deterministic" one. It's a useful distinction.

I agree that entropy isn't meaningful information, or signal. It's still useful noise, tho. Like how Perlin noise and variations have their use-cases, true entropy also has many use cases.

In any case, it feels a bit weird to see a reply about information and non-determinism at the top level, which was a query about types. We're clearly off-topic here.

re "such bits, given that we have no reason to believe ..."

"we have no reason to believe any particular thing about them,..."

That is a false premise.

(a) We know with incredibly exacting empirical confirmation that new information enters the universe this way. Again, see the Conway & Kochen free will theorem papers.

(b) We know with that same empirical strength how reliably long-enough prefixes of these sequences are unique on the planet.

If you think the outcome of a literally unpredictable experiment is not information, I think you need to explain why it isn't.

I believe you and Ray are

I believe you and Ray are using different definitions of information. The information theoretic definition vs. the English language dictionary definition.

IMO, your counterpoints are valid under your def, yet irrelevant under his.

re "you and Ray"

Observations of natural facts that are not determined by the entire history of even the idealized observable universe are material information. New information constantly and relentlessly enters the universe and that new information is called new because it is not determined by its history.

In saying so we aren't contradicting Shannon or anything like that.

In pointing out how bounded non-determinism as a mathematical idealization of a materially realizable computation has greater "power" (that's the one informal term you could have but didn't pick on), none of your darlings are being killed -- you're just ranting at people without making a good effort to understand what they are saying.

"Observations of natural

"Observations of natural facts" is a useful key phrase here.

A problem is that a 'choose' operator abstracts over these observations. The returned bit is divorced from natural facts.

At this point, it's just a bit. In the information-theoretic sense, this bit is still information because information is a measure of surprise. But in the natural language sense, it isn't information because there is no known relationship to the world or observed facts.

It's like the difference between "3" and "3 is the number of apples in my kitchen". Only one of these phrases is considered information in the natural language meaning, even if both have the same origin. Break the connection between a value and its relationship to the world, and it's suddenly just a value, not information.

Also, you're in no position to talk about not making a good effort to understand other people, cf. your responses to Ray. And even if *you* are using the word 'power' informally, Hewitt wasn't, and it's his position you reference when bringing that word into play.

Reflexion

Useful theories of math, including theories of computation, include a capacity to reflect on their material foundation.

Re Reflexion

Reflection can be useful, no argument there.

Of course, there are also many useful theories of math and computation that do not have reflection. Especially not at the level of material foundations.

One of my dreams is a large build system with pervasive, implicit provenance tracking. I.e. such that we can track individual bits in the generated artifacts back to individual bits in the sources. When maintaining metadata manually, it's way too easy to accidentally lose information across layers of abstraction and staged computing.

But even that won't touch material foundations.

Regarding reflection over non-deterministic choice: In most cases, programs simply won't care where the choice comes from. Even if that information were provided, it will be abstracted by the program. I wonder under which conditions revealing the entropy source to the program would be relevant and useful?

fine example (re "Re Reflexion")

While the word "reflexive" sometimes has a particular technical meaning within a mathematical formalism, e.g. the concept in programming language theory of a "reflective tower", it has a broader meaning in critical theory. It refers generally to the idea that a theory - including the discourse outside any formalisms it contains - can contain an account of its historical presence and material function. Reflection in a discourse, including a mathematical discourse, in this sense refers to a capacity to explain why we are talking about a certain thing, in certain places, in certain historical times.

Formal concepts such as arbiters or a CHOOSE operator are motivated by physically realizable examples of practical unbounded non-determinism. In fact, very little physical apparatus is needed to realize these concepts in reality. In fact, the specifics of the randomness of the output of these devices has a profound physics interpretation. And all of this is historically recent knowledge, not even 100 years old. Astounding.

When we make a computer that includes such elements - e.g. arbiters - it can very literally and observably do things that can't be done with a realized computing system that strictly implements a classical Turing machine. Conversely, the same systems can also do everything a Turing machine does. That makes them *more powerful* in a trivial, obvious sense. It's handy to name this fact. And one helpful name for it is "bounded non-determinism".

When you discuss with the presumption that someone else - someone else who may very well know things you don't - has used a word like "reflexion" incorrectly, and when you approach that discussion with false authority and condescension, you ensure that there is no fun and little chance of productive talk in a forum like this.

That is how Hewitt was chased off. It's what you're doing. It's certainly hard to prevent or banish. It's certainly easy by doing what you are doing to make a site or other forum uninteresting or offensive.

re fine example

I'm definitely more used to the formal meaning of reflection in computer sciences. I agree that the meaning in critical theory can also be relevant and useful in some contexts. Though, in this specific context, I don't see how it would be relevant to a program wondering how random bits with abstract provenance are related to state of the universe.

Formal concepts such as arbiters or a CHOOSE operator are motivated by physically realizable examples

We should certainly explore alternative computation primitives that are physically supported.

Naturally, this has been done a lot in history, e.g. there are probabilistic TMs (which are very relevant to your arguments) and quantum TMs, etc., inspired by physical capabilities. And for every one of these, the question of whether new functions are effectively computable, i.e. whether there are things that can be done with the new formalism that can't be done with classic Turing machines, is thoroughly explored. Nonetheless, the Church-Turing thesis remains undefeated.

To clarify, the choice operator of a probabilistic TM is theoretically 'perfect' already, i.e. with choices completely independent of each other and other inputs, observable with exactly the specified probabilities. No need to assume the limits of physical approximation. Yet, the probabilistic TM still cannot do anything that a classic TM cannot do.

When we make a computer that includes such elements - e.g. arbiters - it can very literally and observably do things that can't be done with a realized computing system that strictly implements a classical turing machine.

You say this, but it's untrue. It's also a casual, implicit, "take my word for it" dismissal of the Church-Turing thesis. I hope you don't expect me to take your word for it, especially given that exploration of probabilistic TMs started decades ago and still nobody has proven they can do anything that classical TMs cannot.

Physical implementation doesn't make a relevant difference here - it neither appreciably weakens the classic TM (we only need unbounded resources, not infinite), nor improves the abilities of a probabilistic computation beyond what the formal models describe.

when you approach that discussion with false authority and condescension

I feel the same about your arguments, including the ones where you keep explicitly saying stuff like this.

That is how Hewitt was chased off.

Incorrect. Hewitt's primary forum was never LtU. He only came to hang out on LtU whenever he was temp-banned from his favorite forums for using sock-puppets, or when he had a new paper to push.

"Nonetheless, the Church-Turing thesis remains undefeated."

As you know, nobody here has claimed to falsify that hypothesis.

Liar.

Your words: "Carl Hewitt was objectively correct that unbounded nondeterminism is more powerful than turing machines (when both are physically realized / approximated)." Carl Hewitt explicitly claimed to falsify the Church-Turing thesis by arguing that his model with unbounded nondeterminism is more powerful than Turing Machines. The parenthetical caveat doesn't make a difference. The century-old discussions of effective computability have always included viable physical realization / approximation.

Even more recently, you just now said: "When we make a computer that includes such elements - e.g. arbiters - it can very literally and observably do things that can't be done with a realized computing system that strictly implements a classical turing machine." This is also a claim that, if true, would falsify the Church-Turing thesis. If you were just talking about efficiency or convenience of doing old things, it wouldn't contradict. But you claim it can observably do new things.

To say that nobody here has claimed to falsify that hypothesis is objectively untrue. You're not explicitly using the words "I've falsified the Church-Turing thesis", but your claims speak for themselves. And it is not by some accidental formal use of the word 'power' inherited from your reference to Hewitt.

re "Liar"

Even more recently, you just now said: "When we make a computer that includes such elements - e.g. arbiters - it can very literally and observably do things that can't be done with a realized computing system that strictly implements a classical turing machine." This is also a claim that, if true, would falsify the Church-Turing thesis.

The best I can suggest in the face of your relentless assault is that you ought to at least try to see why I say there is no contradiction there.

pulling teeth

Following your example, I should never volunteer relevant reasoning or explanation to a discussion.

But I'll bite. Why do you believe there is no contradiction?

Information about the universe does not "enter" the universe.

dmbarbour diagnoses the problem with our discussion here correctly.

Re: the claim that new information enters the universe via random processes, we're foundering on the point that we're using the word "information" differently.

I don't acknowledge as "information" a sequence of bits that do not reveal something about the universe that exists before the time those bits become available. The bits created by a random process are noise because they reveal nothing beyond their own value and that value reveals nothing about the universe as it existed prior to their creation.

You may have a definition of "information" that includes values that can be relevant about things that exist after those bits become available - things computed from them for example.

I do not share it. I do not accept as valid any definition of "information" that fails to give us some way to distinguish it from "noise."

information "entering" the universe

I wish you would spend some time with those Conway and Kochen papers.

The sense in which information "enters" the universe is this: No amount of information about the history of the universe prior to the measurement of (e.g.) the squared spin of an electron is sufficient to predict that squared spin, at least if some of the most well tested results of physics are true.

The proof is very interesting, combining a very well tested result about measurement of squared spin on three orthogonal axes with a geometric 3D construct. This is captured in what Conway and Kochen dubbed the SPIN axiom, incorporating the insight originally due to Simon B. Kochen and Ernst Specker. Perhaps some future physical discovery will falsify the SPIN axiom, but at the moment there are few things as well tested as that axiom.

The Wikipedia articles on the "Free Will Theorem" aren't a bad place to start, though I think the two papers, plus Conway's video-recorded lectures on them, are kinda accessible. (There are nuanced parts of the rebuttals to counter-arguments that I am sure I don't personally fully understand.)

Right, I understand what you're saying now. I just don't agree.

I have read the Conway (RIP) papers. He was describing a particular mathematical model and had taken what were otherwise English words to have specific jargon definitions to describe his model. But that jargon is categorically NOT what the words mean.

I agree with you that no information in the universe at the time of the random input is sufficient to predict it. If that's sufficient for you to find value in it, then I wish you happiness. If that's the idea of "information" you like, then information has entered the world.

To me, that's not the point.

The point is that the converse is also true: The random output is not related to any information in the universe at the time of its production. Therefore it is meaningless. Therefore it is not information in any way I find sensible.

Not trying to convince you of anything, but I am genuinely curious. In light of this odd notion of information, is there anything in the universe that you regard as noise?

Is the sequence of cards in a deck that's about to be shuffled "information" for you, given that there is no remaining causal link between that sequence and the rest of the universe? If you can regard as information a fresh random output that has no causal link to the rest of the universe at its creation, does the same apply at the point of destruction?

re: shuffled cards

Is the sequence of cards in a deck that's about to be shuffled "information" for you,

Of course. For example, if you have that information in a timely way, you can bankrupt the house at blackjack.

You can't. The shuffle destroys the sequence before it can matter.

If you had had the sequence of the deck just prior to its being dealt, that would certainly be information. As you note, that is relevant to your betting strategy.

In PL terms, the sequence of a deck that's about to be shuffled is not relevant to anything. It's like a "dead variable" in that its value will not affect the process again prior to the next time it is overwritten.

Which may be a better, more direct context for asking the question. Is the value of a dead variable "information?" It has no further effect on the process. To me, it is noise. I don't know if your definition of information permits anything to be considered noise.
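The dead-variable point can be sketched in a couple of lines (names hypothetical):

```python
def f(compute):
    x = compute()   # dead store: the value assigned here is never read,
    x = 7           # because this assignment kills it first
    return x

# Whatever 'compute' returns -- even freshly drawn random bits --
# it cannot affect the result:
assert f(lambda: 12345) == 7
assert f(lambda: 99999) == 7
```

Like the pre-shuffle deck, the first value of `x` has no causal path to anything observable, which is exactly the sense in which it is noise rather than information.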

Did misread you, Ray - sorry

I did misread you, Ray - sorry. (Thought you were asking about knowledge of the order in an already shuffled deck.)

But in that case, your question asks whether there is utility in the output of a program whose output is simply being discarded - it's not equivalent to a process computing successive digits of an uncomputable real with probability approaching 1.

(a) We know that with

(a) We know that with incredibly exacting empirical confirmation that new information enters the universe this way. Again, see the Conway & Kochen free will theorem papers.

It is not at all established that information enters the universe this way, even if we accepted random bits as "information". At the heart of the free will theorem is the assumption that experimenters can configure their instruments free from interference from that which they are measuring.

Of course, any hidden variable theory which would eliminate the "new information" that you describe entails that measurements are contextual. Meaning that either the system changes non-locally in response to measurement changes, or the measurement instruments cannot be freely configured to begin with. Many people have made a lot of noise trying to dismiss this last possibility without providing any real argument, but it is not in any real sense refuted. In this case, the free will theorem holds, but the conclusion is that its definition of free will does not exist.

free will theorem metaphysics

It is correct that the theorem has two interpretations which admit no empirical distinction: universal determinism OR universal free will (including, e.g., a kind of semi-freedom of paired particles). A magical and inherently mysterious place we've got here.

It is also true what I said: that the outcome of the squared spin measurement cannot be predicted, no matter how perfect the observations of all history prior to the experiment. Deterministic or free - it is new information either way.

"Of course, any hidden variable theory which would eliminate the "new information"

The only kind of "hidden variable" that can explain the phenomenon would mean that the universe we experience and are able to measure undergoes an increase in information when the squared spin measurement is measured. This is discussed at some length in the papers and lectures. If you prefer a formulation that says "information that already was there was perhaps revealed to us at the point of measurement" that seems to me a distinction without any practical difference -- an unscientific claim.

It is also true what I said:

It is also true what I said: that the outcome of the squared spin measurement cannot be predicted, no matter how perfect the observations of all history prior to the experiment. Deterministic or free - it is new information either way.

To be precise, it's new information to the experimenter, but not necessarily new information to the universe.

If you prefer a formulation that says "information that already was there was perhaps revealed to us at the point of measurement" that seems to me a distinction without any practical difference -- an unscientific claim.

It actually has significant practical differences. Epistemically, the formalisms are entirely different which means the types of experiments that will occur to us to perform will be very different. For instance, most physicists had abandoned all consideration of hidden variable theories until Bell became enamored with Bohm's theory and so devised Bell's theorem, and which led to all sorts of important experiments and innovation in the fundamentals of QM. It's debatable whether this would have occured if no one was still considering hidden variable theories.

Metaphysically, all information must be carried by particles or fields, and so if new information is constantly being created then conservation of energy does not hold. So either information is not carried by particles (if not, then by what?), or the operation you're discussing does not actually create new information but is doing something else, or conservation of energy does not hold.

Chance would be a fine thing

A physically realized, conventionally defined Turing machine cannot print out successive bits of an uncomputable real number, even by chance.

There does not seem to be a reason why this would be true. If the TM has access to a source of randomness then it could - although the chance would be vanishingly small.

Certainly it seems to be true that if the TM did output the correct digits then we would not know that it had, as we would have no way of testing it.

e.g. I cannot write a program that would output Chaitin's constant (deliberately). If I take a trivial program that outputs a random bit-string, then it may output the correct bitstring for Chaitin's constant, but I would have no way of knowing if it had happened.
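A minimal sketch of such a trivial program (hypothetical Python, using only the standard library): it emits an endless stream of uniformly random bits, and no finite prefix can tell us whether the stream happens to agree with Chaitin's constant.

```python
import random

def random_bit_stream(rng=None):
    """A trivial non-terminating 'probabilistic machine': each step emits
    one uniformly random bit, forever. Any particular infinite bit-string
    -- including Chaitin's constant -- is a possible output, but no
    finite prefix lets us verify which real is being produced."""
    rng = rng or random.Random()
    while True:
        yield rng.randrange(2)
```

Any finite observation of this stream is consistent with uncountably many reals, computable and uncomputable alike, which is exactly why the claim is untestable.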

re chance would be a fine thing

If the TM has access to a source of randomness then it could

Not quite. First, if a TM has access to randomness, it's clearly a 'probabilistic TM', and obviously not a 'conventionally defined TM'. Second, a feature to print outputs before termination is not included in conventional TMs.

It isn't wrong to say TMs can't do this. It's only wrong to say that a non-terminating probabilistic machine with a print function *can* do so.

And yeah, random outputs don't allow you to say very much about the output unless you have some filtering mechanisms. Probabilistic models are much more interesting in context of constraint systems, fitness heuristics and machine learning, etc..

I’m interpreting the

I’m interpreting the phrase “printing out” as meaning “leaving the bits on the tape at termination”. This seems like a reasonable enough interpretation.

Printing out “successive bits” does not necessarily mean outputting the whole value. Successive bits could be a finite substring. Under these two interpretations I don’t see a reason that a non-deterministic TM would be unable to output the correct sequence by chance.

"Successive bits could be a finite substring"

Lol. Under this interpretation, I might argue that "1234567890" is a substring of uncountably infinitely many uncomputable real numbers. Then a simple program that *deterministically* prints these digits is entirely sufficient to prove a deterministic program outputs successive digits of an uncomputable real number.

But I'm certainly willing to assume that Mr. Lord doesn't mean anything so trivial as 'successive bits' meaning 'finite substring'. I believe his intention is a non-terminating program that, if run for infinitely many steps, prints all the digits of a real number.

I’m interpreting the phrase “printing out” as meaning “leaving the bits on the tape at termination”.

Since Mr. Lord is depending on non-terminating programs to make his argument, he truly means printing output bits before termination. It isn't a big deal, though, since a trivial codata transformation could support this. (We only need a guarantee of not diverging *between* printing digits.)

IIUC, his overall argument can be summarized as follows:

  1. A non-terminating, deterministic program that prints digits of pi is accepted as a proof that pi is computable.
  2. Therefore, a non-terminating, non-deterministic program that prints random digits should be accepted as a proof that the number represented is computable.
  3. The real number represented by an infinite stream of digits is potentially uncomputable. (I'll add: By pigeonhole principle, shoving uncountably infinite reals into countably infinite programs, the probability of producing a computable real by this means should be vanishingly small.)
  4. Therefore, a non-deterministic machine should be accepted as computing outputs that are uncomputable for deterministic machines. (Unspoken: therefore, the Church-Turing thesis is disproven.)

The problem is at step 2. If we're just looking at the surface similarities, the output digits, it does seem intuitive to generalize from 1 to 2. But it's wrong. The reason step 1 works is only due to determinism: we can capture a deterministic program's run-state as a finite representation of its infinite future output.
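To make the determinism point in step 1 concrete, here is a Python rendering of Gibbons' unbounded spigot algorithm for pi (a standard digit-streaming algorithm): the six integers of run-state are a finite representation that completely determines every digit the program will ever print.

```python
def pi_digits():
    """Stream the decimal digits of pi, forever (Gibbons' unbounded
    spigot). The entire infinite future output is fully determined by
    the six integers of run-state -- this is what makes the program an
    acceptable finite witness that pi is computable."""
    q, r, t, k, n, l = 1, 0, 1, 1, 3, 3
    while True:
        if 4*q + r - t < n*t:
            # The next digit is certain; emit it and shift it out of the state.
            yield n
            q, r, n = 10*q, 10*(r - n*t), (10*(3*q + r)) // t - 10*n
        else:
            # Consume another term of the series to narrow down the next digit.
            q, r, t, k, n, l = (q*k, (2*q + r)*l, t*l, k + 1,
                                (q*(7*k + 2) + r*l) // (t*l), l + 2)
```

Snapshot those six integers at any point and you can resume or replay the remainder of the output exactly; no such finite witness exists for the random-digit stream in step 2.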

re "M would be unable to output the correct sequence "

What is the "correct sequence" here? The action of a CHOOSE operator has only a stochastic correctness, and as far as experiments can tell, the more the operator is used, the more measurably random the output is, with probability approaching 1.

Why is halting relevant here? Are we no longer allowed to consider unbounded processes as computation? I missed that memo. :-)

Infinite vs unbounded

You keep using the word 'unbounded' with the meaning of 'infinite' or non-terminating. This is wrong, and you should stop.

Unbounded non-determinism does not mean infinite non-determinism. It literally just means there's no upper bound restriction on the number of choices. In context of Turing Machine computations, tapes hold a finite but unbounded amount of info. A halting computation takes a finite but unbounded number of steps.

Halting is relevant for the consensus definition of 'effectively computable' functions. If a computation of a result takes forever, that result is not effectively computable.

A computation that prints digits over time can be viewed as computing a monotonic function from `Time → Printed Digits`. For any finite time, there is a finite number of printed digits. But it would be an error to hand-wave time to infinity then argue you've effectively computed an infinite sequence of digits. Do not conflate incremental with infinite.
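As an illustration (hypothetical Python), the `Time → Printed Digits` view is just an ordinary function: every finite time yields a finite list, outputs are monotone in time, and at no finite time has an infinite sequence been computed.

```python
def counting_stream():
    """Stand-in for any digit-printing process: emits 0,1,2,... mod 10, forever."""
    n = 0
    while True:
        yield n % 10
        n += 1

def printed_digits(stream, time):
    """The monotone function Time -> Printed Digits: for a finite `time`,
    return the finite list of digits printed so far. Larger times give
    longer, extending outputs; no finite time produces an infinite one."""
    it = stream()
    return [next(it) for _ in range(time)]
```

The monotonicity property is that `printed_digits(s, m)` is always a prefix of `printed_digits(s, n)` for `m <= n` -- incremental, but never infinite.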

re: Infinite vs. unbounded

dmarbour,

I wrote:

Why is halting relevant here? Are we no longer allowed to consider unbounded processes as computation? I missed that memo. :-)

And in reply, in part, you wrote:

You keep using the word 'unbounded' with the meaning of 'infinite' or non-terminating. This is wrong, and you should stop.

(emphasis added)

Unbounded:

Think of a program that listens for input, but meanwhile sends the string "hello world" repeatedly on output. If and when input arrives, the program halts.

That program describes an unbounded process. It is not determined from the text of the program itself whether or not it describes an infinite process.
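That program can be sketched as follows (hypothetical Python, with the input line modeled as an explicit sequence of poll results, `None` meaning "no input yet"); whether a given run is finite depends on the input, not on the program text.

```python
def hello_until_input(polls):
    """Emit "hello world" once per step; halt when input arrives.
    `polls` models the input line: None means no input yet. Fed an
    all-None infinite sequence, this loops forever; fed a sequence in
    which a message eventually arrives, it halts. The program text
    alone does not determine which."""
    out = []
    for msg in polls:
        out.append("hello world")
        if msg is not None:
            break
    return out
```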

The program you describe can

The program you describe can be modeled as computing a function from temporal inputs to temporal outputs. Such inputs and outputs should be modeled explicitly when expressing the program behavior in a context where time is not implicit.

Unbounded simply means we don't have bounds. We might consider this in context of bounded buffers, bounded memory, bounded number of steps. For embedded and real-time systems, having bounds is quite useful for reasoning about resource consumption. If you know an algorithm will eventually produce a result for valid inputs, but you cannot predict how many steps it will take, that's unbounded. Unbounded non-determinism simply means we cannot provide a bound on number of *choice* steps.

A potentially non-terminating process is obviously unbounded in number of steps, but not all unbounded processes are non-terminating. We should avoid conflating these concepts - they aren't equivalent, just overlapping.
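A standard illustration of finite-but-unbounded is rejection sampling (a hypothetical Python sketch): every run terminates with probability 1, yet no upper bound on the number of iterations can be stated in advance.

```python
import random

def sample_below(n, rng=None):
    """Rejection sampling: draw bit_length(n) random bits until the value
    falls below n. The loop terminates with probability 1 on every run,
    but the number of draws has no a-priori upper bound -- the step
    count is finite yet unbounded."""
    rng = rng or random.Random()
    k = n.bit_length()
    while True:
        x = rng.getrandbits(k)
        if x < n:
            return x
```

This is exactly the distinction in the comment above: unbounded in steps, yet not non-terminating.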

" If the TM has access to a source of randomness"

" If the TM has access to a source of randomness"

That was precisely Hewitt's point. Classical Turing Machine formalisms generally do not include that source. Adding that source adds to their capabilities. Additionally, he pointed out, the Actor formalism is a nice way to express that more general form of computation, not least because of the way it suggests intuitions about networked systems.

How?

How does adding non-determinism increase their capabilities? Do you have a specific example of a function that could only be computed by adding randomness to illustrate what you mean?

re "Do you have a specific example of a function ..."

Do you have a specific example of a function that could only be computed by adding randomness to illustrate what you mean?

Trivially, an example is a process that outputs an unbounded number of bits, each bit arriving within a bounded amount of time, whose output is universally unique with probability approaching 1 as the number of bits grows.

This is useful for randomly sampling things, for fairness among concurrent processes, for reasoning about networked computations, for cryptography, and, who knows what else.

And it's already current practice, even if theory lags.
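A minimal sketch of that current practice, using Python's secrets module (the function name `fresh_id` is my own): draw enough random bits and two independently generated values collide with negligible probability.

```python
import secrets

def fresh_id(bits=128):
    """Draw `bits` cryptographically random bits as an identifier. Two
    independent 128-bit draws collide with probability 2**-128 --
    'universally unique' with probability approaching 1 as bits grow,
    with no coordination between the generating processes."""
    return secrets.randbits(bits)
```

This is the same reasoning behind UUIDs and cryptographic nonces: uniqueness is not proven, just made overwhelmingly probable.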

Hypercomputation

I think it is clear what you are arguing if I stitch together replies across the different sub-threads. Is this a fair summary (my wording, obviously)?

If we take the conventional machinery of a TM (or express it under a different formulation) then add a source of randomness and a way of outputting results while computation progresses then we arrive at a machine that has different capabilities to a TM. This does not refute the Church-Turing thesis.

If this is a fair summary of what you've said then it is relatively uncontroversial. TMs were intended as a way to explore the set of computable functions. Ongoing processes are a different beast entirely. There is actually a detailed form of this argument in Toby Ord's thesis Hypercomputation: computing more than the Turing machine. In particular it gets into the details of exactly what the CT-thesis is and is not that have been lacking from other threads, and some of the ways that TMs can be extended to output results without terminating.

re hypercomputation

The Church-Turing hypothesis is that all effectively computable functions can be computed (without regards to efficiency or convenience of expression) on a classic TM. If we can find any machine that is able to compute more functions than the TM, we've still refuted the Church-Turing thesis.

If probabilistic TMs - i.e. TMs plus random choice - could effectively compute more functions than TMs, then the thesis would have been toppled decades ago. IIUC, mostly it just reduces to "oh, there's an implicit unbounded input parameter to the function". There is an open question about whether probabilistic machines have more power within certain complexity classes (e.g. polynomial time), however.
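That reduction can be sketched directly (hypothetical Python): a "probabilistic" computation becomes an ordinary deterministic function once the stream of coin flips is passed as an explicit argument.

```python
def randomized_walk(steps, coins):
    """A 'probabilistic' computation made deterministic: the coin flips
    are an explicit (unbounded) input parameter rather than hidden
    machine state. Given the same coins, it always produces the same
    output -- an ordinary computable function of its inputs."""
    pos = 0
    for _ in range(steps):
        pos += 1 if next(coins) else -1
    return pos
```

The "random" machine and this function compute the same outputs; the only difference is whether the choice sequence is implicit or an input.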

Regarding incremental output streams for non-terminating computations: we can easily interpret a non-terminating computation with incremental output within another machine of the same capabilities (e.g. interpret a probabilistic TM within a probabilistic TM). Anything that a streaming output from non-terminating machines can contribute to an observable computation can already be modeled.

I'm not convinced that your summary would be "relatively uncontroversial".

I do appreciate the ref to hypercomputation. I've heard the word before, but it isn't part of my recall memory. Will check it out. Just reading the abstract, somewhat reminds me of the oracle machines.

Regarding incremental

Regarding incremental output streams for non-terminating computations: we can easily interpret a non-terminating computation with incremental output within another machine of the same capabilities (e.g. interpret a probabilistic TM within a probabilistic TM).

This does not make any sense.

Anything that a streaming output from non-terminating machines can contribute to an observable computation can already be modeled.
I'm not convinced that your summary would be "relatively uncontroversial".

A TM is a model of computation that produces an output by terminating. A streaming process does not terminate: it is not computing a function. A model that describes a process has a capability that a TM does not: ongoing output. It does not have an increase in power: there are no functions that the process can compute that a TM could not. Where do you see controversy here?

re Regarding incremental

Apologies if I explained it poorly.

A streaming process does not terminate: it is not computing a function.

I would say that a streaming process is computing a function from monotonic inputs (such as time or an input stream) to monotonic outputs (e.g. a longer output sequence for larger time inputs).

Monotonicity, not non-termination, is the basis for streaming.

A TM is a model of computation that produces an output by terminating.

Consider a few points, numbered for discussion.

  1. Kahn Process Networks can easily be interpreted within a Turing Machine, yet are vastly more convenient than TMs for expressing and composing parallel, incremental, interactive, and streaming computations. For convenience and continued mental health, let's bypass the terrible awkwardness of TMs and contemplate evaluation of KPNs.
  2. A TM interpreter of a KPN cannot compute an infinite output stream. But it's certainly free to compute any requested finite prefix of an infinite stream. If we associate this sequence monotonically with 'time', we can compute a finite sequence up to a given time. For performance, we can leverage incremental computation of the KPN to avoid recomputation of the prefix.
  3. Non-terminating physical computations are very similar to those interpreted KPNs: we can compute and observe outputs for any finite time but never for infinite time. There is a finite physical representation of running process state suitable for incremental computing over time.
  4. The 'time' parameter is implicit for the streaming physical computation yet explicit for the TM interpreter of a KPN. Explicit vs. implicit is essentially a difference in how we express functions within a language or model. This is separable from which functions we express or compute.
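Points 2 and 3 above can be sketched together (hypothetical Python; a simple generator stands in for the interpreted KPN): a caching wrapper computes any requested finite prefix and reuses earlier work, so asking for a later 'time' extends the computation rather than recomputing it.

```python
def naturals():
    """Stand-in for a KPN output channel: emits 0, 1, 2, ... forever."""
    n = 0
    while True:
        yield n
        n += 1

class IncrementalPrefix:
    """Interpret a (conceptually infinite) stream up to a finite 'time'.
    The cache plus the suspended generator is the finite run-state:
    requesting a later time advances the computation incrementally
    instead of recomputing the whole prefix."""
    def __init__(self, process):
        self._it = iter(process)
        self._cache = []

    def up_to(self, time):
        while len(self._cache) < time:
            self._cache.append(next(self._it))
        return self._cache[:time]
```

Any finite prefix is computable on demand; at no point is an infinite output produced, which is the whole point of the comparison with the physical streaming process.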

I believe several arguments in this thread, if we could boil a swamp, would boil down to forgetting/refusing to explicitly model function inputs that are implicit in the advocated model.

Time and choice inputs are the victims here.

re: A TM is a model of computation that ...

A TM is a model of computation that produces an output by terminating.

I'll leave off by noting that Turing wrote about machines that output by making marks on a tape while they were running, and that could compute infinite sequences.

The halting problem was formulated in terms of the question of whether an arbitrarily chosen machine would ever print a particular symbol (regardless of whether or not it "kept running").

I don't know the pedagogical origins of the common use of state diagrams with a designated start state and halt state. It certainly is handy for intuitively teaching formal language theory, and can of course be used to state Turing's proof, but Turing certainly didn't equate "computable" with any kind of halting.

Computability and halting

We can always interpret one Turing Machine within another and reflect over its behavior. Thus, non-terminating programs can freely be used as components, inputs, and outputs in the context of effectively computable functions. If the halting problem were decidable, a function that computes this decision would obviously accept non-terminating programs among its inputs.

Producing a result is a requirement for effective computability of a function according to Church and Turing in 1936. The difference between halting vs. printing a desired result then continuing forever is negligible from a computability perspective. For example, if a program P continues running after observably producing the requested result, we can always wrap it with a TM that interprets P only until the requested result is produced, then halts.
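The wrapping construction can be sketched like so (hypothetical Python): interpret the output stream of P only until the requested result appears, then halt.

```python
def squares():
    """Stand-in for a non-terminating program P: prints its outputs
    as it goes and never halts on its own."""
    n = 0
    while True:
        yield n * n
        n += 1

def run_until_result(program, is_result):
    """Interpret the (possibly non-terminating) output stream of P only
    until the requested result is produced, then halt. If P observably
    produces the result, the wrapped whole is an ordinary terminating
    computation -- the computability difference is negligible."""
    for output in program:
        if is_result(output):
            return output
```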

We cannot effectively compute direct representations of infinite structure: the infinite representation is never produced. It is feasible to compute finite representations for a subset of infinite structures, e.g. a non-terminating program that prints elements. But it's an error to conflate different representations and their computability.