Quantum Lambda Calculus

Quantum Lambda Calculus
Peter Selinger, Benoît Valiron

We discuss the design of a typed lambda calculus for quantum computation. After a brief discussion of the role of higher-order functions in quantum information theory, we define the quantum lambda calculus and its operational semantics. Safety invariants, such as the no-cloning property, are enforced by a static type system that is based on intuitionistic linear logic. We also describe a type inference algorithm, and a categorical semantics.

Quantum programming languages have been discussed before on LtU, but there hasn't been a lot of activity lately. I just came across this paper from 2009 that develops the idea of entangled functions.


I Like My Moon

I am with Einstein on this one, as far as I know, life is deterministic and all states of all particles and observers were just determined when God started the machine.

(Just a joke, I am completely clueless about physics.)


Here is an implementation of entangled functions for Perl.

Actually, a quantum variable sounds a lot like an FRP signal: you can compose it and transform it without looking at its value, like a signal. Once you look at its value, it gets a probabilistic value, which will be its value if someone else looks at it. Now this is where it gets weird: say we have a composition "c = a + b" where a and b are quantum variables, so c is entangled with a and b. Then I look at c's value. Do we choose a probabilistic value for c, or for a and b? If we choose one for c, and then we look at a and choose a value for a, then b's value must necessarily be c - a.
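One way to make the "choose for a and b" reading concrete is a toy sketch (purely classical, in Python; `QVar` and `observe` are invented names, and no real quantum amplitudes are modeled): a quantum-ish variable composes lazily like a signal, and the first observation forces and caches the values of everything it depends on.

```python
import random

class QVar:
    """Toy 'quantum variable' in the FRP-signal spirit: compose lazily,
    sample only when observed, stay consistent afterwards.
    (A classical sketch -- not actual quantum mechanics.)"""
    def __init__(self, sample):
        self._sample = sample      # thunk producing a value on first observation
        self._value = None         # cached value once observed

    @staticmethod
    def uniform(*outcomes):
        return QVar(lambda: random.choice(outcomes))

    def observe(self):
        if self._value is None:
            self._value = self._sample()
        return self._value

    def __add__(self, other):
        # Composition without observation: c = a + b stays unevaluated,
        # but observing c forces a and b too ("entangles" them).
        return QVar(lambda: self.observe() + other.observe())

a = QVar.uniform(0, 1)
b = QVar.uniform(0, 1)
c = a + b
v = c.observe()
assert v == a.observe() + b.observe()   # once c is fixed, b is necessarily c - a
```

In this sketch, observing c resolves the dilemma by choosing values for a and b and deriving c from them; the alternative (choosing c first and conditioning a and b on it) would need the thunks to run backwards.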

But then again, I don't understand this very well.

I don't buy into quantum anymore

I looked at quantum for a few days. As far as I can see, quantum theory is a stopgap for the inability, so far, of physicists to come up with a unified theory that describes the dual nature of a photon, or photon-like particles.

I don't claim to understand it, but it goes back to a long debate between Einstein and Bohr, basically claiming "It doesn't add up!" and "Just shut up and calculate!" respectively. When in doubt, physicists will always choose the latter.

As far as I can see, it doesn't add up, and Einstein was right. You just shouldn't see too much in it.

[Btw, thanks for that explanation. Going to look at it a bit more.]

The good thing with quantum

The good thing with quantum theory is that it has been able to accurately describe physical phenomena at a very small scale.

For example, Magnetic Resonance Imaging devices are completely based upon quantum effects, and we would not have been able to develop such devices (which have a very practical impact) without that "stopgap solution".

I'm not sure I understand your comment (to say that "You shouldn't see too much in [quantum theory]" seems overly provocative), but I assume you mean that while quantum theory "solves problems", you don't think it describes the deep nature of anything. I would disagree, but in any case I think we can build a programming paradigm from anything that works at some level, whether it describes some form of "reality" or not. For example, I am not aware of anyone claiming that Lambda Calculus is deeply connected to our physical reality (you "shouldn't see too much in it"), but it is nonetheless a very interesting formal tool for programming language theory.

I shouldn't discuss this, but..

It's too nice not to ;)

I agree, you're just on the Bohr end of the scale. It just worked too well for a long time. But, looking at it: there is a no-communication theorem, there is no retro-causality, entanglement seems to be nothing but correlation, there are a ton of quantum theories which try to fix some aspect of it, and physicists are moving back toward theories which are local and fully deterministic.

When I say you shouldn't see too much in it, I mean broad statements like "life is non-deterministic" (most physicists would object), "we all change the world by observing" (nonsense) or "we are all connected" (right). Physicists need funding and students etc. These are just overstatements, or misrepresentations, from the propaganda machine.

I never really dove into quantum just because of these claims. I am just learning some of it now, not even sure retro-causality is a prediction of the theory, so I'll see what gives.

The Transactional Interpretation

I have only a layperson's qualitative understanding, but I think the Transactional Interpretation of Quantum Mechanics makes a lot of the bizarre quantum phenomena seem less so.

If you're interested, start here: TIQM

for marco: determinism is dead

John H. Conway and Simon Kochen have an interesting proof and some related discussion about determinism. "The Strong Free Will Theorem":


I also recommend the predecessor paper, same authors, "The Free Will Theorem" available via arxiv.

Some real measurable properties, of which we are empirically quite confident, of some particles simply don't exist independently of their being measured. "The squared spin of a spin-1 particle on three orthogonal axes" is always found to be a permutation of (1, 1, 0) for any axes chosen. And yet, there is no consistent way to assign squared spins of 1 and 0 to all axes and still preserve that (1, 1, 0) property. (This is called the Kochen-Specker Paradox.) You, the dedicated determinist, might think "Aha! See, that's great for determinism! We are determined to never make any measurements inconsistent with that (1,1,0) invariant, even though it is impossible for the particle to simultaneously satisfy that invariant along every set of three orthogonal axes!" You would be partly right - more on that below.

Now, it is also possible to create a pair of entangled spin-1 particles which, after creation, each go their own way and become what relativity calls "space-like separated". If you measure the squared spins of the two particles on the same axes you always get the same result. Again, you don't need quantum theory per se to test this: you can build the apparatus and observe it yourself.

One neat thing about space-like separation, as Einstein showed, is that from some perspectives the measurement of particle A's spin occurs before particle B's, while from other perspectives B's happens before A's. In other words, the choices of which axes to measure made for A and for B are not causally connected: neither of them "comes first" in any consistently observable way and then influences the other. Now, you the dedicated determinist might say: "Aha! Determinism wins again! For the correlation of A's and B's measurements is most simply explained by some yet unknown feature of the universe, such as its initial state, or backwards-in-time messaging, or .... All of which are perfectly clockwork mechanisms!" And you would be partly right again, but see below.

Now, Conway and Kochen go on to say this: If we assume that the experimenters A and B are free to choose what axes to measure then, necessarily, the particles are at least semi-free to choose their spin on those axes. For the spin does not exist (can not consistently exist, thanks to Kochen-Specker) prior to the measurement (so if the experimenters can choose the axis, a particle can choose its spin). But "semi-free" because of the mysterious correlation between the two particles.

At this point, you the dedicated determinist might well be jumping up and down excitedly saying something like: "Aha! Now you've shown that free will exists if and only if it is a capacity shared not only by humans, but by mindless particles! That's a reductio ad absurdum! What better argument for a clockwork universe could you imagine than that?"

You, the dedicated determinist, are right that all of the observations above are consistent with a determined universe. However, they cause some problems for you if you want to call yourself a scientist:

First there is a logical problem which Conway and Kochen discuss in more depth in the earlier paper ("TFWT" rather than "TSFWT"). If your hypothesis is that the universe cunningly conspires to deterministically limit our experiments to ensure that A's and B's observations are always suitably correlated - then you no longer have any basis for applying empirical induction as a mode of scientific inference. No scientist can say they have disproved your metaphysical conclusions about determinism, but they can say, with very high confidence, that your conclusions are not science but rather religion. You've concluded the existence of a deceiver, be he malevolent or benign or neutral, who has arranged the universe such that the rules of the clockwork may not be empirically discovered or confirmed (unless perhaps tomorrow all the particles start behaving differently, the sky melts, the earth turns to jello and New Clues are divinely revealed). You've concluded that the top hits of highly confirmed empirical results lead to nothing but a rejection of the empirical method. You're a bit of an anti-Descartes.

Second, determinism is by no means the obviously simplest explanation. Conway and Kochen have an explanation at least as simple and consistent with the evidence: particles do indeed have choice, but choice is only semi-free in ways that can not be locally predicted. (Conway and Kochen are more cautious than I might indicate here in assigning free choice to particles per se.) Furthermore, it is a plausible hypothesis that while, on average, in most circumstances, the free choices of aggregates of particles cancel one another out into a stochastic haze, our brains have features that select and amplify some choices - perhaps giving the most natural scientific hypothesis yet expressed of the origins of human senses of consciousness and free will.

A more down to earth bottom line is that Conway and Kochen's result seems to prove a pretty firm upper limit to any *scientific* resolution of the determinism / non-determinism / freedom questions. Unless particles stop acting like they have been observed to act in huge numbers of experiments, both determinism and freedom/non-determinism are consistent with the facts. Unless a new definition of science is found, only freedom/non-determinism is consistent with the logical underpinnings of the empirical method.

Einstein and Bohr were, understandably, both confused on this point. It took nearly a century for some of the brightest thinkers in these areas to figure that out and boil it down to fairly simple arguments. Check out those papers.

Thanks for that comment

I am going through the math at the moment. Can I get back to you in, cough, say a month or something (kidding here, looking at Susskind's videos on entanglement, guess it would take me a week or something if I go at it full-time).

I can't give you an answer for now, except that even Susskind seems to disagree. At the end of his first lecture at Stanford (google it, it's on YouTube), he gives an automata-theoretic explanation of his view of the world, and just claims: no branching forward, no branching backward, there is no non-determinism.

I'll see in another month.

[Btw, for a refutation of Bell's theorem there's a recent paper by Joy Christian.]

QM Math doesn't much matter + LtU relevance

Marco says: "I am going through the math at the moment. Can I get back to you in, cough, say a month or something (kidding here, looking at Susskind's videos on entanglement, guess it would take me a week or something if I go at it full-time)."

Good for you, but you don't need any of that math for the Conway-Kochen proof. You need three scientific axioms. First, that the "squared spin of spin-1 particles" satisfies a certain invariant when measured along any set of three orthogonal axes chosen from a certain finite set of axes -- that's an empirical fact strongly verified by experiment and predicted by QM math, but you don't need the QM math, just the prediction (stated as an axiom) plus empirical verification. Second, that it is possible to "twin" certain particles so that they each then satisfy those invariants in exactly the same way -- again, predicted by QM, but you need no more than the empirically checkable axiomatization of the prediction. Finally, that researchers A and B, separated by space, can not exchange information about either's independent choice of a number from 1-44 faster than some finite speed (again, this could be empirically falsified, so it is a scientific axiom - but no experiment has yet been found to falsify it. Except fictionally, on Star Trek).

With those axioms plus some simple (high-school level) combinatorics and geometry, you get Conway and Kochen's Strong Free Will Theorem.

And one of the implications of that theorem is that either the axioms are false (which you can show by experiment, if that is the case) -- or else the question of determinism vs. free will is not a scientific question. Neither position is falsifiable. On the other hand, it is empirically proven (unless you have some refutation of one of the axioms) that if the researchers taking measurements have free enough will to choose the orientation of their equipment (which spin axes to measure) - then the particles have as much free will in the same technical sense to pick their spin on-the-fly. Free-will is either a ubiquitous property of the universe, or not there at all -- but which is the case is not a scientific question unless the axioms are false.

You can most definitely provide deterministic mathematical models of the universe that are consistent with Conway-Kochen. You can also provide free will models. You can not, unless one of their axioms is proved false by experiment, say that either deterministic or free-will models are better. Neither gives you any testable advantage in making predictions. Here is a limit of scientific knowledge.

And, LtU relevance:

I think of programming language design as, in part, an exercise in finding good (human friendlier) ways to talk about various imposed facts. Finding a language for universes characterized by the Conway-Kochen axioms - a language that gives solid intuitions about the known-knowns, the unknown-knowns, the known-unknowns, and the unknown-unknowns - is a challenge that QM interpretationists share in common with programming language designers.

But more concretely: in distributed systems, with I/O at nodes and finite speeds of communication between nodes - we have, even in a classical universe - systems that often satisfy the three axioms of Conway-Kochen. This is the case without even needing to restrict ourselves to "quantum computers".

Leslie Lamport wrote a beautiful pair of papers back in the day in which he motivates and derives algorithms for synchronizing concurrent sequential processes by applying an analogy to relativistic thinking - giving each process a world-line with a cone of history behind each point and a cone of future in front of it. I would be surprised if the thought experiments described by Conway-Kochen can't yield similar insights in the artificially quantum world of distributed computing.
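For reference, the heart of Lamport's construction is small enough to sketch. This is a minimal logical-clock implementation, not the algorithms from the papers themselves; the `Process` class and its method names are invented for illustration:

```python
class Process:
    """Minimal Lamport logical clock: each process keeps a counter that
    advances on local events and is reconciled on message receipt, giving
    a partial order consistent with the causal 'light cones' of events."""
    def __init__(self, name):
        self.name = name
        self.clock = 0

    def local_event(self):
        self.clock += 1
        return self.clock

    def send(self):
        self.clock += 1
        return self.clock          # the timestamp travels with the message

    def receive(self, timestamp):
        # Jump past anything causally in the message's past.
        self.clock = max(self.clock, timestamp) + 1
        return self.clock

p, q = Process("p"), Process("q")
p.local_event()                    # p: 1
t = p.send()                       # p: 2, message carries timestamp 2
q.local_event()                    # q: 1
rq = q.receive(t)                  # q: max(1, 2) + 1 = 3
assert rq > t                      # receipt is ordered after the send
```

The `max(...) + 1` rule is what encodes the "cone of history": a receive event is always timestamped later than everything in the causal past of the message.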

You ate my M&M?

An invariant associated with a particle just doesn't do it. Non-determinism defined as "as long as I don't peek into the box, I don't know its contents" is invariant to whether that applies to one or two boxes, or two entangled boxes. Do two boxes make it more evident you have non-determinism? No. Correspondence != Non-determinism.

If anything, such a setting would prove a deterministic world by Occam's razor. Otherwise, we might as well assume that we are all a simulation in a toy of the pan-dimensional kid Qwsd-dfac, the name actually being the most direct translation of the number of his dimension. He likes to torture us btw, and his mom doesn't like him either.

There is something which would prove non-determinism. Abstractly, the nondeterministic reading of ( red | ( red | green) ), where |, bar, is nondeterministic choice of course.

If the probability of such a process, and that should be verified carefully, where two choices are made, yields a different distribution each time, I would be willing to assume we live in a non-deterministic world. There is a lot of handwaving here, nondeterminism just doesn't equal probabilism, but it would convince me. Because, of course, nondeterministically, ( red | ( red | green) ) equals ( red | green ).
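A quick simulation makes the handwaving concrete: under a *probabilistic* reading where each | is a fair coin, ( red | ( red | green) ) and ( red | green ) are distinguishable by their distributions, even though nondeterministically the two terms are equal. A sketch, assuming fair choices:

```python
import random

def flat(n):
    # probabilistic reading of (red | green): one fair choice
    return sum(random.choice(["red", "green"]) == "red" for _ in range(n)) / n

def nested(n):
    # probabilistic reading of (red | (red | green)): two fair choices
    def once():
        return "red" if random.random() < 0.5 else random.choice(["red", "green"])
    return sum(once() == "red" for _ in range(n)) / n

n = 100_000
p_flat, p_nested = flat(n), nested(n)
# Nondeterministically equal terms, probabilistically distinguishable:
# ~0.50 red versus ~0.75 red.
assert abs(p_flat - 0.5) < 0.02
assert abs(p_nested - 0.75) < 0.02
```

Which is exactly the point: probabilism supports a law of large numbers and is empirically testable; pure nondeterminism is not.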

For the rest, for the sceptic, you're right, it's outside of science.
[I want it back, btw. Oh, and if QM predicts this, you can keep it.] [ I edited several mistakes out. ]


(deleted - was ill-considered. I apologize for the noise.)

Well Hey!

I make jokes, forget about em.

But actually, as far as I get it: an experiment with a 50% chance of up or down and 100% correlation verified at two sites shows determinism, or, more loosely, probabilism. Why? Because, if anything, there is no, and cannot be, a relation between nondeterminism and probability, and everything observed is probabilistic and correlated. I don't know your spin-1 example, but I gather it is similar?

fuzzy/statistical reasoning

I think of programming language design as, in part, an exercise in finding good (human friendlier) ways to talk about various imposed facts.

This is very interesting. A related idea is a computation with some kind of error potential, where derived computations inherit that error potential. Or we could be 80% sure that a value is 42, and all dependent computations depend on this precision. A symbolic programming language (or one that works on lifted expressions) could include precision/reliability/fuzziness for any expression by default, without much loss in performance.
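A minimal sketch of such a lifted value, assuming one crude propagation rule (confidences multiply); the `Approx` type and its methods are invented for illustration:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Approx:
    """A lifted value carrying a reliability estimate. Derived computations
    inherit (here: multiply) the confidences of their inputs -- one crude
    propagation rule among many possible."""
    value: float
    confidence: float    # in [0, 1]

    def map(self, f):
        # Unary derived computation: same confidence as the input.
        return Approx(f(self.value), self.confidence)

    def combine(self, other, f):
        # Binary derived computation: confidences compound.
        return Approx(f(self.value, other.value),
                      self.confidence * other.confidence)

x = Approx(42.0, 0.8)        # "80% sure the value is 42"
y = Approx(10.0, 0.9)
z = x.combine(y, lambda a, b: a + b)
assert z.value == 52.0
assert abs(z.confidence - 0.72) < 1e-9
```

Multiplying confidences treats the inputs as independent; a serious design would pick a propagation rule (interval, fuzzy, Bayesian) to match the intended semantics.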


You just gave a perfect description of a Quantum Computer?

Pre-understood pre-comment which should be widely disregarded.

Even when I don't understand it yet, it seems to boil down to:

I have two M&Ms, one red, one green. I shake them, put them into two match boxes. Send one to observer Alice, and one to Bob.

Alice opens the box and collapses the state. Hey, it's green!

Did anything mysterious happen? Did any information travel?

If the two M&Ms are related in any form more than statistical correspondence, then we should be able to exploit that fact, i.e. send information. If we cannot, it's just statistical correspondence between undetermined, but absolute, states.

[I prefer M&Ms over marbles because they taste better.]

I have two M&Ms, one red,

I have two M&Ms, one red, one green. I shake them, put them into two match boxes. Send one to observer Alice, and one to Bob.

This is what's called a hidden variable theory, the assumption that the decision of which state each particle is in is made before they are sent on their way by some relationship that we are not aware of.

Bell's theorem is supposed to disprove this hypothesis, essentially by requiring any theory to give up either realism or locality, but as you noted in the other post, there are various attacks by which hidden variables can still sneak in (detector loopholes, and now Joy Christian purportedly attacked a hidden assumption in Bell's proof).

Assuming hidden variables are definitively ruled out, Bohm's interpretation seems the simplest approach to QM.

Clouded Judgements

Notwithstanding all.

0. As far as I know there are no experiments which contradict a deterministic world.
1. If two particles are entangled physically in any way then there is exchange of information. This has never been observed.
2. [Removed a comment]

Now the theory comes, not before that.

[But I give in, this is not physics.com, but LtU, and I suck at physics.]
[Oh, btw, no hidden variables. The question is whether you assume that while traveling, until observed, the M&Ms have the possibility to choose to become the red or green one, and whether collapse/entanglement is a physical attribute - or just a technicality of the theory, and what to think of that. Was just staging it.]

A better analogy

You seem to be missing the dilemma. A better M&M analogy would be:

An M&M BOX is a box with buttons red, green, and brown such that when you press a button the box dispenses an M&M colored differently than the button you selected. Now explain how to build a "tangled" pair of M&M BOXEN in such a way that the Nth button press of one box always produces the same color M&M as the Nth button press of the other.
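The impossibility can be checked by brute force. Assuming the two boxes cannot communicate after shipping, any shared randomness is just a mixture of deterministic strategies, so it suffices to enumerate deterministic button-to-color maps for each box and check the two constraints. A small Python sketch:

```python
from itertools import product

COLORS = ("red", "green", "brown")

# A deterministic per-press strategy for one box is a map from the pressed
# button to the dispensed color. Enumerate every pair of strategies (one per
# box, any shared hidden variable already folded in) against the rules:
#   (1) each box dispenses a color different from its own pressed button;
#   (2) the boxes always dispense the same color, whatever buttons are pressed.
strategies = [dict(zip(COLORS, out)) for out in product(COLORS, repeat=3)]

def valid(sa, sb):
    return all(sa[a] != a and sb[b] != b and sa[a] == sb[b]
               for a, b in product(COLORS, COLORS))

solutions = [(sa, sb) for sa in strategies for sb in strategies if valid(sa, sb)]
assert solutions == []   # no classical (hidden-variable) pair of boxes exists
```

Rule (2) forces both strategies to be the same constant color, and rule (1) then fails when that color's own button is pressed; the enumeration confirms there is no way out.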

Don't know that dilemma

Has something like that ever been observed? All I know is that you can build entangled pairs by manipulating close M&Ms, or splitting an M&M/create two M&Ms by destroying something else.

Or is this a question how to build specific entangled pairs of electrons in a Quantum Computer?

Has something like that ever been observed?

Has something like that ever been observed?

I haven't ever observed anything like that, but I think it's analogous in many ways (though much simplified) to what is discussed in the paper Thomas Lord linked on the Free Will Theorem.

Is Not

No, the argument consists of three steps: SPIN, TWIN, FIN. It's much like the EPR paradox, and Bell's inequality, but further worked out.

1. You can create some particle with an elusive property. (SPIN)
2. You can send two 'copies' of a particle over long distances, and when observed these will have the same quality. (TWIN)
3. You can observe that both particles have the same quality, without being able to communicate that property or to know that quality in advance. (FIN)

The question is whether you see any 'spooky' magic anywhere. I don't, I just see an incomplete theory.

[ No guts, no glory. In my view, there are three things wrong with the paper. One, SPIN/KS/Bell prove that no hidden variables can exist which relate 'entangled' events in QM. To me, a realist/relativist, that immediately shows that the theory is incomplete; it's impossible to describe all physical systems with it. Second, they refer to nondeterminism as a solution. Nondeterminism has _no_ empirical qualities; you cannot prove any physical system to be nondeterministic. It's like trying to make an a priori statement about the outcome of a coin where I have the possibility to flip it to heads or tails. Third, all observed particles have probabilistic behavior, not nondeterministic; that makes it a no-brainer.]

Still seem to be missing the point

The way you've written 1-3, it's not clear that you understand the paradox. The major idea from QM upon which all of this depends is that the "elusive spin property" depends on the manner in which you measure it. I won't argue that my M&M box is a good analogy, since that's subjective, but it does at least capture this aspect of the paradox.

Conway-Kochen in M&M-analogy-speak

Marco, try this analogy for a SPIN:

I have a big bowl of red and green M&Ms, and a bunch of boxes with three compartments, labeled 1, 2, and 3. Three drawers. One M&M goes in each.

We are separated by a light year, you and I.

You have some way of generating a "random" number that you are confident I can't predict. Maybe you read off background radiation as random bits or something.

Simultaneously (meaning, with space-like separation) you and I will each take these respective steps:

I will pick three M&M's out of my bowl and put them in the three drawers of the box.

You will, by any means you like, generate a random 2-element subset of the set {1, 2, 3}. Use radioactive decay, use radio noise, use digits of pi -- doesn't matter, just pick one of the sets {1, 2}, {1, 3}, or {2, 3}, being confident I can't guess your choice while I'm plucking M&Ms out of the bowl.

I'll send you my magic box of three M&Ms.

When you get the box, open the TWO drawers that are in your set. E.g., if your set is {1,3} - then open drawers 1 and 3.

The box has a built-in mechanism that, at that point, will incinerate the remaining drawer's contents so that you will never know what color the third M&M was. Sorry, those were the only boxes we've been able to find. You simply can't know all three M&M colors, but you can know any two of your choice.

Every single time, without fail, you will see one red and one green M&M.

How'd I do that? If you think about it, that's seemingly impossible:

If I put in two reds and one green then surely sometimes you would open two drawers and see two reds. If I put in two greens and one red, surely sometimes you would see two greens. If I put in only green or only red - obviously you would never see a red-green pair.

Yet, every time, no matter what either of us do, there's one red and one green M&M in the drawers you open.
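The "seemingly impossible" part is easy to verify mechanically: no fixed assignment of red/green to the three drawers survives every choice of two drawers. A short Python check:

```python
from itertools import combinations, product

# Could fixed, pre-assigned drawer colors reproduce the "always one red,
# one green" observation? Enumerate all 2^3 assignments against all three
# 2-drawer subsets: every assignment fails on some subset.
pairs = list(combinations(range(3), 2))     # {1,2}, {1,3}, {2,3} as indices
consistent = [
    colors
    for colors in product(["red", "green"], repeat=3)
    if all({colors[i], colors[j]} == {"red", "green"} for i, j in pairs)
]
assert consistent == []   # no pre-set drawer colors survive every opening
```

With three drawers and two colors, some two drawers must share a color, and opening exactly that pair breaks the invariant; the enumeration is the whole combinatorial content of the simplified analogy.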

QM predicts "stuff like that" and that's been verified countless times by experiment. We can measure {1,2}, {1,3}, or {2,3} - and the result is always a red-green pair, even though whenever we measure 1, 2, or 3 each is either red or green.

That means that drawers 1, 2, and 3 can't possibly contain red or green UNTIL you open the drawers! (More precisely, the color can't be fixed until it is certain to the M&M itself which drawer you will open first.)

TWIN adds in a twist. Now I have two friends, you and some other person, each a light year away from me and two light years away from one another. I'm doing the same experiment with both of you but a new rule: I can only draw *two* same-color M&Ms at a time from the bowl, and put one each in your two boxes, in the same order. So you each get identical boxes. I gave you both red-red-green or red-green-red, or what have you. At least that's what I put in the drawers on my end.

You have one subset, {a, b}, and my other friend has his own subset {c, d}. Since each is a subset of {1, 2, 3}, your two subsets have an intersection of at least one element and possibly two. For example, maybe you'll open {1, 2} and he {1, 3}. Or maybe you'll both open {1, 2}. Or some other combination. Whatever the combination, you both open at least one drawer in common, and perhaps both drawers in common.

The colors in the respective drawers will always be equal. If you open drawer #1 and he opens drawer #1, you'll find the same color. You can open your two drawers in either order, and the result stays the same: even if "drawer #1" is actually the *second* drawer you open (but the first drawer he opens), its color will still match "drawer #1" of my other friend.

That's a lot of weird facts to explain (why do the M&M's always agree that way, and how?) but, hold on, I have a "hidden variable" theory about M&M triplets. Maybe it's not so mysterious.

Hypothesis: Obviously, these aren't ordinary M&M's; they are some kind of nano-tech M&Ms that are part of a secret embedded computer in the box that monitors which drawers you open and, at the last minute, alters the color of the contained M&M to satisfy the one-red/one-green constraint. So, M&M's aren't really red or green - they can be either as need be - they just have a hidden robotic mechanism for picking color in response to the opening of particular drawers. The box must be wired, you see? And the M&Ms are more complicated than we thought, but they're still just dumb machines.

There's a problem with that solution, though. At least according to Einstein:

TWIN tells us that if I send you and my other friend identical boxes (which I do), that you always get consistent colors-per-drawer, no matter what.

You are separated by light years yet it takes but a second to open the first drawer. And the otherwise indeterminate color of the M&M must be identically chosen at both locations.

The "hidden variable [nanotech] box" must, as you open the first drawer, have some knowledge of what second drawer you will open, and what two drawers my other friend will open.

My other friend's box has to have similar knowledge of what drawers you have chosen to open - because both boxes have to come to the same decision.

That is, both boxes must have knowledge of a 2-light-year-away event that is occurring simultaneously (in the relativistic sense). And each box may have to have knowledge of the future drawer openings in both locations - before it can correctly select the M&M color. It must know this before any speed-of-light signal passes between the two locations. It must know this even though there is no mathematical function logically possible that can map the state of the universe before drawer #1 is opened to the state after drawer #1 is opened, except as a probability of M&M color (50/50 in our analogy).

To put it more simply:

The axioms SPIN, TWIN, and MIN prove that in some measurable respects, the future of the universe is not a mathematical function of its past, for any (even infinite) encoding of that past or description of that function. Hidden variables don't work.

Oh golly, dropping the M&Ms

The problem is that we're just not observing M&Ms. But some property SPIN of _one_ particle, which I don't really understand: is it a prediction of the theory, an observed quality of spin-1 particles, or a refutation of hidden variables?

I assume it is both a prediction of the theory and confirmed.

First, SPIN says we'll find a permutation of (1,1,0). Not a permutation of (1,0), or (red, green). So your example goes a bit wrong there.

Of course, SPIN is not strange behavior. If I observe a ball which I bounced against a wall it will wobble in two dimensions (flatten/stretch). No matter what I do, if I look at it from two axes, two sides will wobble; on the third axis I won't see a lot. (I handwave away looking at it from some other vector, because then we'll dive into 'how big is the probability that I'll see something,' which is exactly QM.) What's strange about that?

Now the argument of Joy Christian's paper, on TWIN, is that Bell forgot that some numbers don't need to be reals, and, in my analogy, all we're observing is similar to seeing the effect of a (moving) circle while we live on a line. How can it be that two points on a line behave similarly? Yeah well, because they are part of a circle.

Now if we look at that analogy, even that fails, because the two points are part of a circle, and thus that fact can be exploited - for example to send messages. It would be better to say that the two particles behave as if they are part of a circle, but are independent particles. Is that odd? No, as long as the math adds up that's perfectly solid.

Now I have two particles, the whole question boils down to: do we have a circle or not? I say no.

Now for SPIN+TWIN+FIN. You keep forgetting that nondeterminism cannot be proven, ever.

What's the difference between a hundred thousand nondeterministic coin flips and a hundred thousand probabilistic 50%/50% coin flips? You cannot say anything about the first except that any outcome will just be a hundred thousand flipped coins; there is no law of large numbers. That's the second problem: they see 50%/50% behavior on spin up/down. That's not nondeterminism, that's the opposite. [And I don't see anyone assuming that there's some guy with a big white beard who decides to keep the distribution in order.]

[Actually, I guess SPIN is an incompleteness theorem, it feels like gimbal lock, but what do I know...]


First, SPIN says we'll find a permutation of (1,1,0). Not a permutation of (1,0), or (red, green). So your example goes a bit wrong there.

It doesn't "go wrong"; I was just trying to simplify the math of the Kochen-Specker paradox for you. I apparently failed at constructing an easily understood analogy to a simpler case. I constructed a robust analogy to a simpler case, but it failed to be easily understood.



Kochen-Conway say "SPIN+TWIN+FIN => particles have free will." I am not entirely sure they aren't trying to prove that QM is incomplete by just giving an absurd argument, like EPR.

Marco: read the papers

Kochen-Conway say "SPIN+TWIN+FIN => particles have free will."

No, they don't.


Thomas: Then reread them yourself before you claim anything!

Straight from Wikipedia:

The free will theorem of John H. Conway and Simon B. Kochen states that, if we have a certain amount of "free will", then, subject to certain assumptions, so must some elementary particles.

I am sorry, but I am not the one here who isn't grokking it.

He's right. You really need to re-read it more carefully.

The Free Will theorem says: given SPIN, FIN and TWIN, if experimenters have free will then so do elementary particles. The bold part (which your re-statement omitted) is the important part. (BTW, I wrote the lines you quoted from Wikipedia.)

Ah well, okay.

Good, I concede. It remains that measured spin satisfies a distribution, which is just not free will.

[In my defense of my blatant mistake which is so nicely corrected. I'll state that we were concentrating on what the different SPIN/TWIN/FIN axioms actually are, and also observed that the end result as observed is not free will. There were just a number of explanations/interpretations mixed.]

the paper mentions....

The first paper (perhaps the second, I forget) mentions that SPIN+MIN+TWIN are compatible with deterministic models. Keep looking harder. It's a neat topic.

that will also confuse him

That will also confuse marco if he glosses over how they define "free will".

I didn't appreciate until thinking about this exchange with marco how, while, on the one hand, none of the background you need to "get" the papers is hard or all that large, there are several concepts from a variety of fields that you need and that aren't "obvious" until you see them. You need a little bit of set theory and combinatorics to get the math behind SPIN and the significance of SPIN+TWIN. You need a little bit of causality and "light cones" (or at least "half universes") to get SPIN+TWIN+MIN. You need some philosophy of science to get why they call it the "The Free Will Theorem" instead of the "Complete Theories of Physics Are Impossible" theorem (and to get their cracks against "dedicated determinists"). You need some philosophy of mind and epistemology ("what's a qualia?" at the very least), along with some neuropsych., to get the plausibility of their speculations about human free will in the first paper especially.

I don't know how Kochen and Conway got together on this but it's brilliant that they did. Conway's influence by way of his customary lucidity of explanation is palpable and positively drips off the page. But it's definitely not all that easy to grok without certain basic-level but diverse background. Which is kind of interesting in and of itself, if you ask me. It's a good challenge - like handing a high school kid Goedel's paper or a compact account of special relativity. It's a mistake (I speak from experience :-) to try to elucidate it casually in a few blog comments (unless your goal is to create more confusion than you destroy :-).

The background that is, at the moment, beyond me is in the objection that naasking linked to in another comment about the applicability of the theorem to stochastic models. That paper graciously mentions that Kochen doesn't agree with their objection and why but I don't have time right today to puzzle out what each side is saying and it just isn't obvious to me without spending a whole bunch of hours on it. It's going to nag at me, though, until I get around to it.

Finally, Duff (may I call you Duff?): I had a funny exchange with Kochen himself. I asked him (I'll state the question here better than I put it to him): why not twin a particle enough times that, on various copies, I can measure all 33 axes? I figured either we thus refute SPIN+TWIN+MIN or else blow up the universe or something. He replied that you just can't - no experiment like that will work - because of Kochen-Specker. Fair enough. I suggested that perhaps he had found a deep reason for entropy and the arrow of time. Which, oddly enough, would enhance the intuition that "free will" is a good description of what we're looking at.

Oh great, I am confused.

Man, it just boils down to whether, when a physical system is described by 'x=3+y', that is interesting enough to get metaphysical about.

Who's confused?

[Unless you're Einstein and Bohr, they were asking the right questions.]

your confusion

"when a physical system is described by 'x=3+y'"

"Unless you're Einstein and Bohr, they were asking the right questions."

Stop making such wild shots from the hip and you might get somewhere.


I just observe that a 'collapse of the state space by observing' is nothing more than solving part of an equation. Feel free to get very excited about it, I don't.

Even if you believe in nondeterminism, there is the blatantly obvious fact that only determinism is observed, also on particles; nondeterminism cannot be observed. The free will theorem is therefore already flawed in principle: the implication cannot hold even given all the axioms. [1]

Einstein and Bohr at least had the right debate: Will any theory based on probabilities regarding observation ever lead to anything? What to do with causality? Do we only need reals? Do all reals model all properties of the system? Etc. Well, pragmatically it worked and led to a lot of philosophical confusion. But, time's up, see 't Hooft. It still may be the best they can think of; I don't think so.

Conway-Kochen. Well, I don't feel that adding to the philosophical confusion by postulating a 'free will theorem' is helpful; they are terrific scientists, so I hope there's more to it than that.

As far as the confusion goes, I say what I do understand and what I don't understand, and in an informal debate people make mistakes. I don't get terribly excited about your misrepresentation of the experiment; I don't see any independent thinking on your side except for some fanboy-ism.

It is possible to show manners instead of being condescending; show that you had a proper upbringing.

[1] This can be misunderstood. What I am saying here is that given a nondeterministic system A which forces a system B into a state you still can never decide whether system B is nondeterministic or not since it is impossible to observe nondeterminism. The implication even given all the axioms cannot hold.


We are discussing an article/theorem where the implication itself must be flawed, the axioms are debatable, and the implication of the theorem contradicts experiment.

It really is a balloon, no sense in discussing it further.

The objection that naasking linked to

The background that is, at the moment, beyond me is in the objection that naasking linked to in another comment about the applicability of the theorem to stochastic models. That paper graciously mentions that Kochen doesn't agree with their objection and why but I don't have time right today to puzzle out what each side is saying and it just isn't obvious to me without spending a whole bunch of hours on it. It's going to nag at me, though, until I get around to it.

I just looked at it, and while I have no idea what "relativistic Ghirardi–Rimini–Weber theory" entails, their simpler stochastic models seem wrong to me for the reason Kochen apparently gave. They give a table showing how to choose both outcomes (see the definition of P_{ab} on page 2) given knowledge of x,y,z, and w, and then note that outcome A is independent of w and outcome B is independent of x, y, and z.

But this setup is wrong - they have one random trial. With locality, when and where is this one random trial supposed to happen? In order to have locality, we really need two random trials - outcome A should be the result of a trial that doesn't depend on w, and outcome B should be the result of a trial that doesn't depend on x, y, or z.
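The "when and where does the random trial happen" point can be sketched structurally. This is a toy illustration with made-up functions, not the models from the paper: with one shared trial, a single random draw is available at both measurement sites, which is itself a nonlocal resource; with two local trials, each outcome only sees its own draw.

```python
import random

def one_shared_trial(x, y, z, w, seed=42):
    # A single random draw feeds both outcomes. Even if outcome_a formally
    # ignores w and outcome_b ignores x, y, z, the shared draw r is a
    # nonlocal resource connecting the two sites.
    r = random.Random(seed).random()
    outcome_a = ("A", r, x, y, z)
    outcome_b = ("B", r, w)
    return outcome_a, outcome_b

def two_local_trials(x, y, z, w, seed_a=1, seed_b=2):
    # Each site has its own independent random draw; locality holds by
    # construction, since neither outcome touches the other's randomness.
    r_a = random.Random(seed_a).random()
    r_b = random.Random(seed_b).random()
    outcome_a = ("A", r_a, x, y, z)
    outcome_b = ("B", r_b, w)
    return outcome_a, outcome_b
```

In the first model the two outcomes share the draw `r` even though each formally "depends" only on its own settings, which is exactly the objection above.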

thanks for that "trot"

That makes some sense from the superficial skim I did, but I decided not to think they were making that basic a mistake until I understood them better. I appreciate the "trot" (by which I mean "guide to reading") and look forward to having a chance to dig in and properly read their paper.


Also, I removed the word 'obviously' from my post above. It seems like a clear problem now once I've put my finger on it, but it took quite a bit of mulling for me to do that. I didn't really follow the criticism attributed to Kochen at first.

Does Einstein really say this?

Einstein's theory is that no energy can travel faster than the speed of light. Since we can't use those 'entangled' things to send data, I don't think that this is really a violation of Einstein's theory.

experimental confirmation

As I understand it, the SPIN axiom of Conway-Kochen is just that, an axiom for the purposes of a mathematical argument. Kochen-Specker proved in 1967 that QM implies (a more general form of) SPIN. Their original paper suggested experimental tests to verify QM on this point. Better tests were later devised. The SPIN-satisfying predictions of QM have been experimentally tested (and supported):


TWIN has been verified quite a lot, as far as I know. I believe that there is various functional technology which would fail to function if TWIN were false.

MIN is, of course, also strongly supported by evidence until and unless someone implements the equivalent of a Star Trek sub-space radio.

As far as I can see...

SPIN is predicted and confirmed. I haven't found how it's related to Kochen-Specker, but I assume it is. It just seems to boil down to loss of a dimension after you gimbal lock the system, or it's a particle which is gimbal locked, but that's my own interpretation. It isn't very interesting except for the fact that in QM you can probably drop a dimension, or sometimes a dimension is dropped, under circumstances, which is pretty bad for the theory but which no physicist or engineer will care about. Anyway, it's true; I will not claim to understand it.

TWIN is confirmed, but the question is whether it is anymore meaningful than stating that if 'x = 3 + y' in a physical system, observing a value for 'x' or 'y,' at any point in time, and therefore knowing the other side of the equation, says anything about the physical system.

FIN is true too.

It boils down to TWIN, which is just EPR. They elaborated on the observation to derive free will for particles, given free will for experimenters, which is just making EPR more explicit, and dramatic, in a mathematical setting. 'Entangled' particles behave similarly; however we twist the instruments, we will see that they are related - this is almost exactly EPR - whether I measure spin over x, y, or z, I see the same. And again, it is a misrepresentation/interpretation of EPR, and is an exact replica of the Einstein/Bohr debate.

Anyway, I guess the only real interesting argument I gave anywhere is that free will for experimenters+SPIN+TWIN+FIN contradicts what is measured. Another way of saying it: if particles had free will, they would play jokes on us, and we would find examples where all particles decided to be up.

[Btw, you can drop Joy Christian. And also, almost all 'hidden variable' theories deal with a fundamental question of QM: whether probabilistic correspondence between reals is enough to derive a complete theory, and whether there are hidden local or non-local variables. I have the feeling that all Bell says is that when you have enough reals to describe the system, there are no hidden reals anymore. I am not sure he considered that when you define a number of equalities, some numbers just are equal. But I gave up on QM; unless I go through all the math, I can't claim to understand it. But well, looking at the theory, as an abstract entity, I don't find any surprises. Just surprising interpretations. Which is normal, since I am just at the Einstein end of the scale.]


If the two M&Ms are related in any form more than statistical correspondence, then we should be able to exploit that fact

This is a good statement to make because it is a nice purely mathematical statement. You’re saying that if there were correlations between random variables that were anything other than statistical then we could exploit these to do things like sending more than one bit using a single bit.

It’s a good conjecture. There are alternatives to probability theory, Q say, with the property that a physical system described by Q cannot be modelled using probability theory and such that you can exploit the differences to send extra information.

However, there are also theories with the property that they are inconsistent with classical probability and which *don’t* allow you to send extra information.

So regardless of whether or not you believe quantum mechanics models our world, your claim is false from a purely mathematical standpoint.

Oh...mustn’t forget. One example of such an alternative probability theory is Q=quantum mechanics. A system which is impossible to describe using ordinary probability theory is the EPR experiment. No assignment of probabilities can reproduce results predicted by QM so it is not about “statistical correspondence”.

(This response is roughly based on arguments to be found in the work of Scott Aaronson and some papers I’ve read on arxiv. But usual disclaimers hold: ie. blame me for errors etc.)
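A concrete instance of "no assignment of probabilities can reproduce the QM results" is the CHSH inequality: any classical (local hidden-variable) probability model bounds a certain combination of correlations by 2, while the textbook QM prediction for a spin singlet reaches 2√2. A quick check of the arithmetic (standard formulas, not taken from the comment above):

```python
import math

def qm_correlation(angle_a, angle_b):
    """Textbook QM prediction for singlet-state spin correlation
    when the two detectors are set at the given angles."""
    return -math.cos(angle_a - angle_b)

def chsh(a, a_alt, b, b_alt):
    """CHSH combination S; any local hidden-variable model gives |S| <= 2."""
    return (qm_correlation(a, b) - qm_correlation(a, b_alt)
            + qm_correlation(a_alt, b) + qm_correlation(a_alt, b_alt))

# Standard optimal detector angles: 0 and pi/2 for one side,
# pi/4 and 3*pi/4 for the other.
S = chsh(0.0, math.pi / 2, math.pi / 4, 3 * math.pi / 4)
# |S| = 2*sqrt(2), which exceeds the classical bound of 2.
print(abs(S))
```

Since 2√2 ≈ 2.83 > 2, no classical probability assignment reproduces these correlations, which is the "not about statistical correspondence" point.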

Jaynes, Hestenes, Grandy

Unsurprisingly, E. T. Jaynes has some words on the EPR experiment and the (mis)application of probability theory to it. One place is Clearing up mysteries - the original goal. There are several other papers of his on this aspect and others of Quantum Mechanics. Jaynes is rather critical of the usual interpretations of quantum mechanics and entropy arising from misunderstandings of probability theory. His work on entropy is particularly impressive and has been developed by his colleague, Walter T. Grandy, into a coherent and powerful theory of non-equilibrium statistical mechanics described in a recently published book Entropy and the Time Evolution of Macroscopic Systems. Jaynes had some of his own ideas about a neoclassical basis for quantum mechanics, but I am more partial to David Hestenes' Zitterbewegung approach, a modern version of Schroedinger's early ideas, described in the last paper here: Geometric Algebra in Quantum Mechanics. Hestenes is blatantly an admirer of Jaynes' work in probability theory.

I'll just note that others

I'll just note that others have argued that the strong free will theorem contains a mistake in the argument that stochastic models contradict SPIN, TWIN and MIN.

mistakes in Conway-Kochen?

And, here is another critique that is similar.

As nearly as I can tell, the critics have two kinds of objections:

1) The theorem is not novel.

2) The theorem does not apply to certain kinds of stochastic models.

I think that criticism (1) is best answered by noting the minimalism of the three axioms and their significance for empiricism. This will help us understand why criticisms of type (2) are not valid:

SPIN and TWIN are falsifiable by macroscopic experiment and all attempts to test them have, instead, confirmed them. As Conway and Kochen informally put it, those axioms tell you which dots will light up on a screen that you can, in fact, physically build. They are uncontroversial, in that sense.

MIN, on the other hand, is apparently widely misunderstood. It amounts to asserting a condition which is, in the view of the authors, logically necessary to the conduct of empirical science: namely that two experimenters acting simultaneously but separately can, respectively and independently, choose sequences of random numbers from 1-33 and 1-40.

Criticisms of type (2) appear to attack what exactly is meant by "random" here so it is worth asking what each side means by "random".

Without loss of generality, both sides of the debate would agree that we can limit attention to methods of choosing where each number (1-33 and 1-40) is chosen with equal probability. Both sides would agree that the experimenters can (physically, macroscopically measurably) choose their sequence of numbers such that there is no observable correlation between the two sequences. Empirical observation can verify that the choices always behave, mathematically, such that each pair of possible choices always occurs with 1 in (33 * 40) probability.
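The agreed-on statistical behavior in the preceding paragraph is straightforward to simulate (a sketch of the statistical claim only, not of any physical experiment):

```python
import random
from collections import Counter

def simulate_independent_choices(n_trials, seed=0):
    """Two experimenters independently pick uniform numbers from
    1-33 and 1-40; return the observed counts of each pair."""
    rng = random.Random(seed)
    return Counter(
        (rng.randint(1, 33), rng.randint(1, 40)) for _ in range(n_trials)
    )

counts = simulate_independent_choices(132_000)
# With 33 * 40 = 1320 equally likely pairs, each pair should occur about
# 132_000 / 1320 = 100 times, with no observable correlation between
# the two sequences of choices.
```

Both sides of the debate would accept this as a description of how the choices behave; the disagreement is only about what, if anything, determines them.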

Here is an example of a selection procedure to which both sides of the debate would agree: experimenters A and B each separately and simultaneously count the number of geiger counter clicks from two separate samples of radioactive material in a given period of time. From that, they choose their 1-33 and 1-40 numbers. Both parties to the debate would turn around and reject this selection procedure if, for example, someone experimentally produced a pair of radioactive samples and an empirically verifiable way to predict the number of decays per suitable unit of time of both samples with arbitrary precision - or to predict the future rate of one sample given knowledge of the past rate of the other. Neither party anticipates such a demonstration ever happening.

Both parties agree that, in principle, perhaps there is a Transcendental Constant which, even if we can not discover its value, fully determines the decay rates of our two samples. Or, more generally, fully determines the 1-33 and 1-40 choices.

So where is the debate, with all of this agreement?

The critics offer - AS IF Conway and Kochen objected - that they have models, or reasons to believe models exist, of relativistic (satisfying MIN) quantum mechanics (satisfying SPIN and TWIN) in which the Transcendental Constant is a real thing. It exists. It determines the choices 1-33 and 1-40. It's only that we can not know its value well enough to predict digits of it beyond those we have already observed. So, to hell with this stuff about "If the experimenters are free to choose....": they aren't, the Transcendental Constant can explain it all.

The critics offer some math to show that the Transcendental Constant might be, in some sense, real. They highlight what they perceive as mistakes in Conway and Kochen's math - focusing on parts of the papers that Conway and Kochen do seem to gloss over a bit.

What the critics of Conway and Kochen appear to be missing is why the philosophy of science is so important to these papers.

I don't speak for Conway and Kochen but I think it is safe to say this in reply to the critics:

"Your theory posits the existence of a Transcendental Constant whose successive digits we can observe, but not by any means predict and test. Scientifically speaking, the existence or non-existence of the Transcendental Constant is undecidable. It is not a scientific question. It is purely a matter of faith.

"Moreover, as we have shown, the logical structure of the question of the Transcendental Constant implies that it exists and determines all or does not exist and determines none of the actually measurable things: the choices from 1-33, 1-40, and the squared spins of the twinned particles. That is what the theorem proves."

The critics' theories fail to make any testable prediction which would violate SPIN, TWIN, or MIN. As a matter of science, the theorem stands and can be restated this way: either the experimenters are not free to pick 1-33 and 1-40 in this experiment, or the TWINed fundamental particles are free to choose spin subject only to the constraint that they make mutually consistent choices.

Moreover, no extant theory makes any testable prediction that would decide which is the case.

Moreover, *neither* side of the debate anticipates any theory which would decide the question. When the critics offer theories that include the equivalent of a Transcendental Constant, those theories are devised to make the unknown digits of that constant unpredictable.

Moreover, but less formally, we are talking about the empirically verified predicted behavior of the most fundamental elements of matter of which we are comprised. It is extremely plausible that if fundamental aspects of the universe have freedom, in the sense of the theorem, then so does human subjectivity.

Absent any demonstration that SPIN or TWIN are outright violated, or absent any plausible and testable reason to disbelieve in MIN: it's either all clockwork *but we can never know that as a scientific fact* or else it's not all clockwork and "free will" (or "free whimsy", if you prefer) is a fundamental property of the stuff of the universe.

Finally, the "clockwork" belief AND the alternative "particles are free" belief are both equally non-scientific, but there is still a logical distinction to be made, thanks to Descartes, at least if we make the reasonable assumption that these quantum observables play a significant role in human cognition. If our cognition is sensitive to these quantum observables (and there is almost no denying that under any extant theory) and the values of those observables can not be accounted for by science, then it fails to follow that the scientific method has any significance or in any way leads towards truth about nature. Conversely, under the hypothesis that "free will" is a universal property of nature, the scientific method can still reliably increase our knowledge of the truth of nature. When we do science, we are already relying on a faith-based belief in "free will" in the sense of the theorem - so isn't it interesting that some of our most secure scientific results (SPIN, TWIN, and MIN) seem not only consistent with that free will, but to mandate it?

Transcendental constant?

The critics offer - AS IF Conway and Kochen objected - that they have models, or reasons to believe models exist, of relativistic (satisfying MIN) quantum mechanics (satisfying SPIN and TWIN) in which the Transcendental Constant is a real thing. It exists. It determines the choices 1-33 and 1-40.

If I understand you, I don't think that's what the critics are saying. They're saying that it's possible to violate the strict locality condition of MIN without allowing information flow that violates relativity. The paper naasking provided doesn't argue for superdeterminism, but I think it uses the wrong definition of MIN.

re: transcendental constant

If I understand you, I don't think that's what the critics are saying. They're saying that it's possible to [without free will] violate the strict locality condition of MIN without allowing information flow that violates relativity.

Conway and Kochen already admit that, from the start.

The critics simply fail to offer a theory which makes any testable prediction that could falsify free will as an explanation for some aspects of particle and human behavior.

The transcendental constant

I should have been more specific. The paper naasking linked IMO goes astray by choosing 'parameter independence' as the formalization for the independence concept of MIN. It's inappropriate since their stochastic models still exhibit the non-locality that Conway and Kochen are attempting to prove inevitable.

With a fixed definition of MIN for stochastic models, I think they would be in agreement with the Free Will Theorem except would note that it only vacuously applies to them as they do not satisfy MIN.

But I disagree that they'd agree with the following characterization:

It determines the choices 1-33 and 1-40.

This sounds like superdeterminism, whereas my impression was that most of these theories just define non-local reactions by the particles in response to probing. Maybe that quote went over my head, though.

re The transcendental constant

Sorry, yes: "determines the choices" was not good language. A stochastic theory of the sort offered by critics can in principle determine that the two choices are correlated without determining the two choices. The rub is that the stochastic theories proposed offer no testable version of the correlation they propose, and are no better than a theory based on the "semi-free will" of particles to choose their measured spins. The only way out for would-be critics is an empirical demonstration that SPIN or TWIN is outright false, or an empirical demonstration that puts MIN in plausible (empirical) doubt.

The mere existence of untestable, "god's eye perspective" models which require no free will is non-responsive to what TSFWT actually says.

Is that true?

A stochastic theory of the sort offered by critics can in principle determine that the two choices are correlated without determining the two choices.

This is still different from my understanding. The choices of the experimenters are driven by a complex process (that includes the brains of the experimenters) and I don't think anyone is predicting a correlation between them. Instead they predict a correlated response by the particles that is non-local (but with parameter independence so that no information can be sent with this scheme), whatever the experimenter's choices might be.

While it's certainly not possible to rule out correlation between the experimenters' choices, that's leaving the realm of science.

scientifically true, I think

Instead they predict a correlated response by the particles that is non-local (but with parameter independence so that no information can be sent with this scheme), whatever the experimenter's choices might be.

What experiment do they propose that could falsify that conclusion?



I see your point. I just didn't see anyone in the two linked responses arguing it.

(forget it)

removed this comment

EPR paradox?

[[As far as I can see, it doesn't add up, and Einstein was right.]]

Uh, have you looked at the EPR paradox?
Because experiments have shown that 'spooky action at a distance' does exist.

The technical connection....

...is via the fact that the semantics of both FRP and this quantum language form monoidal closed categories, so there are naturally tantalizing structural analogies. This analogy is either very deep, since monoidal categories show up in many diverse parts of mathematics, or it's very shallow, since almost everything we look at has monoidal structure -- but I'm never sure which! :)

Previously on LtU