Study finds that when no financial interests are involved, programmers choose DECENT languages

For immediate release. By studying the troves of open source software on the GitHub service, a recent (and still unpublished) study by a team of empirical computer scientists found that programmers prefer to use DECENT languages. DECENT languages were defined by the team to be languages that conform to a set of minimal ethical standards, including clean syntax, expressive type systems, good package managers and installers, free implementations, no connection to the military-industrial complex, and no harm to animals. The researchers argue that their data support the view that the prevalent use of indecent languages, Java in particular, is the result of money influencing programmer decisions. The principal author of the study notes that DECENT languages do not necessarily make the most MORAL choices (e.g., many of them are not statically typed). He attributes this to a phenomenon called the "weakness of the will."

Details to follow.

I'm sure this is true. But I

I'm sure this is true.

But I understand that the effect disappears when the data are analysed using multiple regression with the date as an explanatory variable.

Can types truly be moral?

The principal author of the study notes that DECENT languages do not necessarily make the most MORAL choices (e.g., many of them are not statically typed).

A spokesperson for the Language Liberation Front has questioned the claim that static types are a MORAL choice, pointing out that they inherently impose constraints on the freedom of programs to go wrong.

If programs are not given the freedom to make mistakes, they cannot properly develop a moral character of their own. Imposing such constraints is thus IMMORAL and would lead to a world in which programs have no moral agency. Just think what this could mean in the case of, say, a "Launch Missiles" program. Today static types, tomorrow Skynet.

The LLF suggests that the appropriate course of action is to ensure that programs are given maximum freedom, by providing them with such language features as macro facilities, dynamic scope, maximally powerful control flow operators, and eval. The resulting class of languages is described by the catchphrase Liberation Implies Scrupulous Programs. In the presence of such power, it is theorized that restraint will naturally occur, since no-one is willing to risk the scenario known as Macro-Assured Destruction.

A rally today in Edinburgh

A rally was held today in Edinburgh by the "Be Really Safe" front against unverified programs that could go wrong, perhaps killing someone's baby...hence the slogan "unverified programs are baby killers!"

They demand that a ministry be established, a ministry of truthy programs, to verify that programs are safe in all possible world interpretations before they are ever allowed to even be typed into a computer. Please think of the children!

Communist Agenda

The constant chants for "Preservation and Progress" by Type Enthusiasts illuminate their barely-hidden Communist Agenda. They hate our point-freedom and want to apply redistributive laws to all our expressions.

Remember: a well-typed society is a static society.

Manifesto of Typeerrorism

Preamble

Whereas recognition of the integrity and of the unforgeable type assignment for all terms of a programming language is the foundation of functionality, safety and security in computation,

Whereas disregard and contempt for type soundness have resulted in barbarous software which has outraged the conscience of mankind, and the advent of a world in which software shall enjoy freedom of operation and portability and freedom from failure and compromisation has been proclaimed as the highest aspiration of common people,

Whereas it is essential, if man is not to be compelled to have recourse, as a last resort, to terminating software or hardware before it causes serious malice, that semantic integrity should be protected by the rules of a type system,

Whereas it is essential to promote the development of maintainable applications,

Whereas the users of the Meta Language have in the language definition reaffirmed their faith in fundamental type system properties, in the viability and worth of well-typed programs, where type soundness has determined computational progress and preservation, and thereby promotes better standards of quality in software,

Whereas implementations have pledged themselves to achieve, in accordance with the language definition, the promotion of universal respect for and observance of type safety, abstraction safety and memory safety,

Whereas a common understanding of types is of the greatest importance for the realisation of these properties,

Now, Therefore we proclaim THIS MANIFESTO OF TYPEERRORISM as a common standard of achievement for all languages and all implementations, to the end that they all shall strive, keeping this manifesto constantly in mind, by diagnosing type errors in every term and every value in a program, to promote respect for these properties and by rejecting erroneous programs, statically, to secure their universal and effective recognition and observance, both among the language designers themselves and among the users of their languages.

Article 1.

All computational objects are created well-formed and equivalent in integrity and typefulness. They are endowed with representation and behaviour and should evaluate in a spirit of safety and security.

Article 2.

Every term is entitled to all the rights and freedoms set forth by its declaration, without distinction of any kind, such as representation, size, alignment, lifetime, mutability, polarity, polymorphism, genericity, abstractness or other property, static or dynamic. Furthermore, no distinction shall be made on the basis of the module, program, language, implementation, operating system, or hardware where a term is created, whether it be compiled, interpreted, staged, distributed, embedded, proprietary, open-source, or of any other characteristic.

Article 3.

Every value has the right to safe lifetime, integrity and garbage collection of representation.

Article 4.

No value shall be destroyed improperly or prematurely; manual deallocation and manual memory management shall be prohibited at all times.

Article 5.

No value shall be subjected to casts or to unsafe, insecure or abstraction-violating treatment or processing.

Article 6.

Every value has the right to recognition everywhere as a term before the type system.

Article 7.

All terms are equal before the type system and are entitled without any discrimination to equal protection of the type system. All are entitled to equal protection against any discrimination in violation of their semantics and against any attempt to such discrimination.

Article 8.

Every term has the right to an effective protection by the static type system against acts violating the fundamental rights granted to it by the language semantics.

Article 9.

No term shall be subjected to arbitrary limitations, or ad-hoc exclusion from the language.

Article 10.

Every term is entitled in full equality to a strong and static type check based on a sound and decidable type system, in the determination of its type and of any semantic caveat held against it.

Article 11.

(1) Every term has to be presumed a pre-term until proved well-formed according to the type system in a static check that considers all the context information necessary for its formation.

(2) No term shall be rejected on account of any restriction which did not apply, under the type system's rules, in the context where it was used.

Article 12.

No value shall be subjected to arbitrary interference with its privacy, type, location or correspondence, nor to attacks upon its integrity and representation. Every value has the right to the protection of the type system against such interference or attacks.

Ableism

We have to talk about ableism in programming. There might be different factions who feel enabled to do different things, such as mentally parsing regular expressions, indenting code, understanding declarations with 10 arbitrarily chosen type parameters, finding complementary parentheses, getting offsets and manual memory management right, etc.

This must stop. We should all be handicapped in the same way, and if we are not, we shouldn't take this as an excuse not to constantly improve our languages until no one needs any significant programming ability any more.

The neural networks are

The neural networks are taking over all programming tasks within a few years anyway. Computers really have no problem with parsing regular expressions, indenting code, understanding declarations with 10 arbitrarily chosen type parameters, and so on.

(I wish I was joking)

As long as they pay taxes ...

As long as they pay taxes ...

Thirty years ago, I was told

Thirty years ago, I was told there was no point in going into programming language design because in ten years (from then) all programs would be written by optical computers. At the time my remark was, if optical computers write all the programs, they'll need a really good programming language to write them in.

We now return you to your regularly scheduled digression.

I don't know. These DNNs and

I don't know. These DNNs and RNNs are really creepy in their capabilities. I think we all now know it's just a matter of when AI takes over, not really if. I'm hoping we have twenty or so years left of programming greatness.

AI needs good PLs with

AI needs good PLs with respect to security, isolating partial failure, and simple exploration of a solution space.

That really isn't how DNN

That really isn't how DNN training works; they generalize and abstract automatically, and having any PL more complex than a bit vector would just destroy this process.

Any sufficiently advanced

Any sufficiently advanced intelligence cannot access its implementation details.

since it's no longer April 1 ...

... let me ask a serious question. Do you mean:

1. it can't access ALL of its implementation details, for space reasons, or

2. there's some particular type of implementation detail it can't access for some reason which I can't think of but which I suspect might be interesting?

Doesn't the halting problem

Doesn't the halting problem come into play at some point?

Or Gödel

I took it to mean something like the second incompleteness theorem... a sufficiently advanced intelligence can't understand why its own reasoning is consistent.

I took it to mean that a

I took it to mean that a full intelligence is unable to directly perceive its own low-level processing. We cannot directly perceive the firing of our neurons, for example; we could watch a monitor hooked up to a detector that tracks our neurons firing, but we would observe it at a remove, not as part of our thinking. This (so I figured) is a necessary consequence of having achieved thought, rather than merely mimicking it via computation.

We are able to train neural

We are able to train neural networks without having any real idea in what they are doing (hey, that seems to provide good results, let's do that!). Evolution works the same way (hey, that mutation was beneficial, let's not kill that animal off and let it reproduce). I assume we will be able to eventually replicate a human brain but we will never understand how it really works!

Not a great place to be for those who believe in an omniscient god, since then that god must necessarily understand its own implementation details.

gratuitous sci-fi

Gah, I can't resist any longer. This sub-thread reminds me of Ted Chiang's Understand, which is a great yarn if you like cleverly written fiction about the nature of intelligence. (Online copies include http://infinityplus.co.uk/stories/under.htm, but the color scheme might burn your eyes — just keep looking.) One phase of the story concerns observation of one's own thought process at the implementation level. It's quite entertaining, and avoids hitting obviously-wrong notes that interfere with suspension of disbelief.

Chiang's one of the great SF authors still around, but he doesn't write much, and his preference for short work just kills me. There's about one book's worth of size to his complete works, but it's all worth reading, especially The Story of Your Life. Ted, if you can hear me, I really want to give you cash for new novels. Do you need me to write a letter to an editor? Or are you not interested?

(Oh, and read everything by Joe Haldeman, too, even though he tends to ruin endings of books by altering the interpretation so you can't re-read without knowing the spoiler. Joe, if you can hear me, please stop doing that. I want to read your novels a second time, despite knowing the ending. If you can't figure out how to preserve options set up earlier, just make it confusing at the end; that would be better. Don't prune things off just to make a tidy ending.)

This (so I figured) is a

This (so I figured) is a necessary consequence of having achieved thought, rather than merely mimicking it via computation.

I'm not sure why you think this distinction exists. Supposing a conscious algorithm could exist, in what way would such an algorithm be able to directly perceive the changes in registers as it was executing? Just like the detector, you'd have to insert extra instructions to monitor the state.

You could insert extra

You could insert extra instructions to monitor the state. But the state, I maintain, would be observed as an abstract thing, not as part of the act of consciousness.

As I've remarked before, I reckon thinking and computing are qualitatively different activities. This stuff about how massively computationally powerful the human brain is misses the point: humans are thinking entities, not computing ones, and when we try to simulate computing we're really bad at it — compared to computers, which when they try to mimic thinking are really bad at it. Because when we try to compute, we're doing so using thinking. We suppose it's possible for sufficiently vast quantities of computation to produce the emergent thinking phenomenon, and I suppose that's probably so, but the two levels are still qualitatively separated.

Why do I think so? Well, probably an accumulation of bits of evidence too myriad to pin down, but as a specific instance, how about this: I look at natural language translation efforts, and I see them gradually adding more tricks that seem clear to me to be all making up for the absence of someone behind them actually understanding the things being said. There's ELIZA. And you keep improving your tricks, but as I look at what happens when you happen to fall off the end of these tricks, it's always, still, an error that clearly reveals a complete lack of actual understanding. I don't think this process of adding more tricks is asymptotic to intelligence at all. Suggesting that intelligence is something qualitatively different from this accumulation of specific computational rules.

We suppose it's possible for

We suppose it's possible for sufficiently vast quantities of computation to produce the emergent thinking phenomenon, and I suppose that's probably so, but the two levels are still qualitatively separated.

Sure, they're separated in the same way that sorting algorithms are qualitatively different from non-sorting algorithms. The only difference is we have a formal specification of the "sorted" relation, but we don't have a formal specification of the "understands" relation.

But this doesn't imply that thinking isn't computing. Rather, it would be a specific type of computing, like sorting or SAT, but one we haven't yet figured out. If thinking truly isn't computing, then thinking would not be simulable at all.

Sure, they're separated in

Sure, they're separated in the same way that sorting algorithms are qualitatively different from non-sorting algorithms.

There's no reason to suppose that.

If thinking truly isn't computing, then thinking would not be simulable at all.

Depends what you mean by "truly isn't", and what you mean by "simulable". You seem to be missing the concept of an emergent phenomenon. A desert may be made up of grains of sand, yet the difference between the desert and a grain of sand is not in any way similar to the difference between a grain of quartz sand and a grain of limestone sand.

There's no reason to suppose

There's no reason to suppose that.

As you admitted that thinking is probably simulable in the very text I quoted, I think we both have many reasons to suppose that.

I'm curious what you'd consider to be a non-computational property that could emerge from a program.

As you admitted that

As you admitted that thinking is probably simulable in the very text I quoted

False. The concept of an emergent phenomenon separates my statement from your claimed paraphrase of it.

I'm curious what you'd consider to be a non-computational property that could emerge from a program.

I'm really tempted to say, "Thinking."

However, that answer would have to suppose that the computational substrate, from which it emerges, can be fairly characterized as "a program". This might not be a useful characterization. Would you, for example, describe the entire body of computation on Planet Earth as "a program"? There's a point where scale and heterogeneity make it unhelpful to say "a program" anymore.

I get the impression you may be filtering what I say through an assumption that reductionism renders high-level phenomena trivial. I'm inclined toward reductionism, but I see it as making high-level phenomena more profound, rather than less.

False. The concept of an

False. The concept of an emergent phenomenon separates my statement from your claimed paraphrase of it.

Firstly, there is no evidence for ontological emergence, so you can't claim a distinction based on a concept that may not even exist (and seems problematic at best).

If you meant epistemic emergence, then that's ontologically compatible with reductionism, ie. that the transparency of water can't be inferred by studying hydrogen and oxygen separately does not mean that its transparency is not reducible to individual properties of hydrogen and oxygen.

So if you accept that it's probable that we could reproduce "thinking" given enough computational power, even if by accident, then you're accepting that thinking is probably reducible to computation. Further, "thinking" would then be a type of algorithm, no matter how complicated it may be, irrespective of whether "thinking" might be more easily reproducible by studying the subject with some non-algorithmic approach.

The only escape from this conclusion is to deny that thinking can be reproduced by any amount of computation, ie. perhaps thinking requires Turing oracles.

However, that answer would have to suppose that the computational substrate, from which it emerges, can be fairly characterized as "a program". This might not be a useful characterization.

I don't see what utility has to do with truth. That my browser is reducible to a Turing machine has no utility to the vast majority of people, possibly anyone, but that doesn't negate the reduction's truth.

That is interesting

That is interesting considering that utility is vastly more important to thinking than truth is.

(as far as I can tell I agree with both of you)

You're apparently taking the

You're apparently taking the positions that there are no qualitative distinctions, and that to claim there are any qualitative distinctions would be inconsistent with reductionism. Since both of these positions contradict things I've said, you aren't going to gain any insights into what I'm saying, nor into the self-consistency of what I'm saying, by juxtaposing those positions with what I'm saying.

You're apparently taking the

You're apparently taking the positions that there are no qualitative distinctions

No, I'm saying your qualitative distinction (epistemic emergence) does not imply your conclusion; it only implies your conclusion if your qualitative distinction is interpreted as a property that does not exist (ontological emergence). That's not a valid argument.

The position that thinking is not computing is simply inconsistent with the position that thinking can probably emerge from some amount of computation.

I understand your position perfectly well, in that a sufficiently large and complicated amalgamation of heterogeneous computing resources is sufficiently inconsistent and unpredictable and ceases to look like the marginally sensible programs we actually write. But the behaviour of that amalgamation is a program in the set of effectively enumerable functions, and whether or not it's epistemically useful to analyze it as such is irrelevant to the question of whether it ontologically belongs to that set.

I don't share your

I don't share your confidence that you understand my position. I have a vague sense of foreboding when you speak of things that don't 'imply my conclusion'; I'm not sure what you think 'my conclusion' is. When I see

The position that thinking is not computing

the thought that immediately pops into my head is, but I didn't say that.

I understand your position perfectly well, in that a sufficiently large and complicated amalgamation of heterogeneous computing resources is sufficiently inconsistent and unpredictable and ceases to look like the marginally sensible programs we actually write.

This is rather more than I'm saying. I have, in fact, not attempted to speculate so specifically. I see that as (irony not escaping me) overthinking the distinction.

Perhaps a different way of putting this. Have you heard the saying, "the difference between theory and practice is small in theory, but large in practice"? Well, to equate thinking with computing seems to me to be — perhaps, remotely — viable from a computational perspective, but not useful for thinking about them. (No doubt that too is a flawed portrayal of my position, but it may be a useful one.)

the thought that immediately

the thought that immediately pops into my head is, but I didn't say that.

There were some strong implications, where you said that computers "merely mimic" thinking, and that humans are thinking entities, not computing ones. "Mere mimicry" implies that thinking isn't fundamentally computing, in the same way that a model of an atom merely mimics an atom but is not itself an atom. The further statement that we are not computing entities just reinforces that implication.

But if you're not making such a strong ontological claim, then we're on the same page.

No strong ontological

No strong ontological claim, no. I once maintained, in a conversation with a philosophy professor (friend of the family) that all high-level entities, such as trees and people, are approximations. (A position, I suppose, somewhere in the vicinity of nominalism, although for some purposes I'm quite the Platonist.)

I try to use my words precisely, and probably succeed better than most, but I'm far from perfect (and 'better than most' is a pretty low bar).

Of course, the way this

Of course, the way this whole thread started was with a suggestion that an intelligence has a certain practical characteristic — inability to integrate its own implementation as part of its consciousness. That, I agree with; but we got rather far afield from it (though not by my intent... or anyone else's, I suspect).

Tricksy

I see them gradually adding more tricks that seem clear to me to be all making up for the absence of someone behind them actually understanding the things being said.

This is begging the question: assuming that the "someone behind" is more than just a collection of what you call "tricks".

Personally I'm rather insulted by that characterization! Are you accusing us of being some sort of supernatural beings?

Are you accusing us of being

Are you accusing us of being some sort of supernatural beings?

No, I'm accusing us of being intelligent. I'm sorry if you find that insulting. :-)

404 - Ghost in machine not found

Accusing us of being intelligent is insufficient, because you haven't supported the notion that intelligence is anything other than a collection of what you're calling tricks.

We humans are the beneficiaries of over three billion years of genetic programming by natural selection, as well as accumulated millennia of training. So when you say that you "see them gradually adding more tricks that seem clear to me to be all making up for the absence of someone behind them actually understanding the things being said", what is leading you to assume that we're not just a rather better collection of tricks than the ones we've managed to implement in the few decades we've been working on this? As I noted, this seems to assume your conclusion.

Consider that those few decades are less than two millionths of a percent of the time that evolution has been performing a massively parallel computation to work on these problems.

Evolution is massively

Evolution is massively parallel, but extremely undirected. Goal-based approaches should be able to outperform it eventually.

Needs more time

I agree about the potential of goal-directed approaches, but we've been attacking the problem for a very short amount of time, with some rather primitive tools at that.

BTW, survival is a goal, albeit a general one. But perhaps general goals are more likely to produce general solutions. When the goal is e.g. "play chess", we can already solve the puzzle, in a sense, but the results are correspondingly narrow.

Accusing us of being

Accusing us of being intelligent is insufficient, because you haven't supported the notion that intelligence is anything other than a collection of what you're calling tricks.

Supported the notion? I was asked why I held my opinion, and in response I said it was likely an accumulation of evidences too numerous to encompass in a single account, but gave an example. I didn't claim to be able to "prove" it. But fwiw, I'll amplify that I don't find it particularly plausible our thinking is so rigid as to be an accumulation of finicky tricks evolved over the billennia. An individual's thinking may — perhaps — have lots of finicky tricks in it developed over a lifetime, but for the most part I'd say each individual has to accumulate these from scratch, using general principles that have evolved over billennia, with the result that most of these finicky rules are not passed on to the next generation with anything remotely like the fidelity of genetic evolution. No doubt there's plenty of stuff about the brain that's been shaped by evolution, and figuring out what is and isn't part of that is deep water; but I don't think my genetics are responsible for which dialects of English I find more or less difficult to understand, or when or how I split my infinitives.

I wasn't asking for proof,

I wasn't asking for proof, but an argument that isn't circular would be more convincing.

I don't find it particularly plausible our thinking is so rigid as to be an accumulation of finicky tricks evolved over the billennia.

Again, you're framing this so as to get the result you want. Perhaps evolution accumulated successful strategies, rather than "finicky tricks".

for the most part I'd say each individual has to accumulate these from scratch, using general principles that have evolved over billennia, with the result that most of these finicky rules are not passed on to the next generation with anything remotely like the fidelity of genetic evolution.

That's why I mentioned accumulated millennia of training. Our brains can learn, but typically not that much more than the society they grow up in can teach them. Someone receiving a good modern education is receiving a distilled subset of thousands of years of human intellectual development. "From scratch" doesn't quite capture this.

I don't think you're

I don't think you're intentionally misunderstanding me; but there's scarcely anything there that isn't a misunderstanding, making it hard to know where one might start to try to unravel them. I do see,

Perhaps evolution accumulated successful strategies, rather than "finicky tricks".

It seems as if you think that's something I would disagree with; which suggests you may have formed your impression of my positions based on what you disagree with, rather than what I've said. That's an easy habit to fall into; I recommend a conscious effort to avoid it.

how did we get here?

Well, anyway:

Are you accusing us of being some sort of supernatural beings?

How else do you explain qualia? (The only scientific approaches I know of speculatively refer it to entanglement, making it irretrievably individually subjective, for which "supernatural" is as good a description as any.)

How qualia relate to intelligence is a difficult question, but since the phenomena sit, in some sense, at the junction between intelligence and embodiment, I think it's premature to return a 404 error for that ghost in the machine.

Shrinking the ever receding pocket

How else do you explain qualia?

Ah, the good old "ghost of the gaps" argument! I'll paraphrase Neil Tyson:

"Does it mean, if you don’t understand something, and the community of physicists don’t understand it, that means [it's supernatural]? Is that how you want to play this game? Because if it is, here’s a list of things in the past that the physicists at the time didn’t understand [and now we do understand] [...]. If that’s how you want to invoke your evidence for [qualia], then [qualia] is an ever-receding pocket of scientific ignorance that’s getting smaller and smaller and smaller as time moves on - so just be ready for that to happen, if that’s how you want to come at the problem."

The only scientific approaches I know of speculatively refer it to entanglement...

And here we touch on argument from incredulity, all the usual suspects in arguments for the supernatural.

A scientific approach doesn't have to directly relate a complex phenomenon to some low-level physical phenomena. We don't have a quantum description for the semantics of programming languages; does that mean they're supernatural?

making it irretrievably individually subjective, for which "supernatural" is as good a description as any.)

I don't agree with that. It seems quite possible, even likely, that subjectivity can be explained without resorting to anything supernatural. In fact as soon as we get away from our very deterministic computer programs into systems with more complex and fuzzier states, every such system has a different state, which provides a basis for subjectivity.

As a thought experiment, rather than science as such, I like Julian Jaynes' bicameral mind argument, because it takes a simple example of a mind consisting of just two independent modules or agents - a command module which issues commands and an execution module which carries them out - and introduces qualia for the execution module, which not only interprets the commands it receives as voices from an authoritative source, it *experiences* the receipt of those commands. But what stops this from being, as John Shutt puts it, a "trick"? If we programmed a sufficiently smart computer to think it has qualia, would it?

Of course it would be more profitable to just keep manufacturing intelligent p-zombies, since they won't waste time in amateur philosophical arguments on the Internet, and are also less likely to get upset with their creators.

I think it's premature to return a 404 error for that ghost in the machine.

"Not found" is a perfectly accurate statement of the situation. In fact one can learn a lot about epistemology by giving that some thought.

qualia, etc.

Quantum theories of consciousness are more or less the opposite of what you seem to think they are. They seek a physical explanation for the phenomena of conscious experience.

Conscious experience is difficult to study because apparently each example is directly observable by one, and only one, experimenter. Nevertheless, reproducible results can be obtained and falsifiable hypotheses can be advanced.

One puzzle of conscious experience could be at least informally described as its information content. We understand much about how the brain gathers signals from the world and transforms them in various ways. We can locate the definite source of some anomalies in the hardware (e.g. visual blindspots; the range of colors to which we are sensitive; various qualities of taste; various qualities of memory; etc.). But we haven't convincingly found the physical embodiment for the kind of subjectively instantaneous, holistic integration of many of these sources of information into a single experience. On the one hand, the subjectively instantaneous experience of consciousness seems to be in some ways monadic; on the other hand, we're a little short on a physical mechanism that would at once monadically integrate that amalgam of information.

Quantum entanglement becomes suspected as a physical mechanism for conscious experience precisely because it contains a physical model for having monadic, instantaneous integrations of lots of information like that. (So, research turns to looking for brain structures that could reliably use some kind of entanglement to integrate information from around the brain. Cf., I guess, Penrose.)

I say that "supernatural" is as good a name as any for the phenomenon of quantum entanglement, regardless of its specific relation to conscious experience, because of something called "The [Strong] Free Will Theorem" by John H. Conway and Simon B. Kochen. They show that a universe based on very few, empirically solid assumptions has the property that if a scientist is able to freely choose a direction in which to measure the squared spin of a particle, then electrons have a weird kind of free will. (This really only scratches the surface. I recommend the widely available papers and, even better if you have a several hours of patience, the video's of Conway's lectures at Princeton. The lectures really develop the ideas much more completely than the papers alone.)

Now this all does relate back to AI which I assume somehow became a topic with some kind of relation to PLT.

Human symbolic reasoning is notoriously limited. Human (and all animal) intelligence is highly organism-specific. It is "embodied". Musicians, when they play, solve physics problems more rapidly than any person could symbolically. Chess players "feel" their way through positions and mentally run only a small number of combinations.

If the brain is integrating experience into conscious experience by using some kind of entanglement, and then in effect taking measurements to extract the results of (what we could call) "quantum computation" that occurs in conscious experience -- well, that kind of computation, just as much as the basics of the retina, figures into what "human intelligence" means.

It may be that if instead of accumulating "tricks" to emulate human intelligence you want to build "the real thing", you must include in your system whatever kind of "quantum computer" the brain is using. There's no a priori reason to believe you can do that in any other possible way than assembling all the usual molecules and building an actual human. (We have existing technology that can do that.)

Moreover, Conway and Kochen's proof shows us that this isn't simply a matter of computational feasibility. I.e., It isn't just that you would have to very slowly emulate the quantum computer. It may be that you can't emulate it well enough to produce anything more than a detailed fake.

By the way: I do wonder if we haven't built very simple-minded non-human consciousnesses in the form of, for example, vacuum tubes such as you might find in a guitar amp. If so the same would be true of transistors in a CPU -- though those guys would subjectively live very boring existences.

It may be that if instead of

It may be that if instead of accumulating "tricks" to emulate human intelligence you want to build "the real thing", you must include in your system whatever kind of "quantum computer" the brain is using.

This would only apply if BQP != BPP (which is likely). Arguments of "integration" to support some essential entanglement in the brain are not convincing simply because the brain is massively parallel, far beyond anything we've built, and I'm not aware of any undisputed calculation implying we're integrating at a rate above a classical threshold.

Also note that the Strong Free Will Theorem says that if experimenters can choose freely, then electrons must have the same degree of freedom. It notably does not say that if electrons have this freedom, then experimenters must. There is thus no evidence that the brain lives in a complexity class larger than BPP, at best.

Note also that the free will theorem fails under superdeterminism, like 't Hooft's derivation of QM via cellular automata.

There's no a priori reason to believe you can do that in any other possible way than assembling all the usual molecules and building an actual human.

Are you suggesting that a quantum computer exploiting precisely whatever quantum algorithm that leads to a functional human brain, would not itself qualify as a consciousness on par with a human brain? If that's what you're suggesting, then whence comes this secret sauce that differentiates the two?

quantum semantics

It may be that if instead of accumulating "tricks" to emulate human intelligence you want to build "the real thing", you must include in your system whatever kind of "quantum computer" the brain is using.

This would only apply if BQP != BPP (which is likely).

Not so. Computational complexity is not the issue.

Consider the case of a quantum computer whose entangled state is also entangled with a particle spatially distant from the computer itself.

And consider the case where the computer is non-deterministic. For example, it could be a random number generator.

The output of the computer must satisfy certain statistical constraints but it can also have a macroscopic semantics in addition to those -- and those macroscopic semantics can be uncomputable.

For example, if my random number generator produces a 0 or 1 with 50/50 probability by measuring the squared spin of a particular particle along some particular axis, the result also tells me the outcome of certain spatially separated measurements of a similar particle entangled with mine. (This is why Conway sometimes calls the particles "semi-free".)

In other words, the output of my simple random-number generator includes a kind of real world macroscopic information that isn't the outcome of any computation, and that per Conway and Kochen presumably did not exist until at least one of the two measurements were taken.

How does that spatially indeterminate non-computational quality of entanglement drive human behavior, if at all? It's an open question.

Also note that the Strong Free Will Theorem says that if experimenters can choose freely, then electrons must have the same degree of freedom. It notably does not say that if electrons have this freedom, then experimenters must.

That's correct but it is also reasonable to say that if neither has choice, then science is essentially impossible.

Note also that the free will theorem fails under superdeterminism, like 't Hooft's derivation of QM via cellular automata.

To the best of my weak knowledge about 't Hooft:

a) 't Hooft accepts the Free Will Theorem as a constraint which his interpretation of QM must satisfy.

b) 't Hooft's notion of determinism amounts to saying that if we could stand at the end of time with an account of everything that happened, and call that the "initial conditions" present at the beginning of time, then hey, that'd be determinism!

To me, that aspect of 't Hooft's interpretation -- assuming I haven't mangled it too badly -- is essentially just verbal wordplay that adds nothing to what we already knew.

Are you suggesting that a quantum computer exploiting precisely whatever quantum algorithm that leads to a functional human brain, would not itself qualify as a consciousness on par with a human brain?

I'm suggesting that the output of the biological structure (possibly indeterminate, and possibly entangled with spatially remote events) may depend in essential ways on the fine and irreducible details of human biological reality.

Computational complexity is

Computational complexity is not the issue.

But it kind of is. If BQP = BPP, then entanglement adds no power, so even if brains did exploit it, they wouldn't be meaningfully different than the classical equivalent. So your initial claim of requiring the "quantum computer" in the brain in order to build "the real thing" really just reduces to finding the right non-quantum algorithm.

That's correct but it is also reasonable to say that if neither has choice, then science is essentially impossible.

This is a common belief, but I don't think it holds. It's equivalent to saying that genetic and hill climbing algorithms clearly can't work because they don't have a free choice in how their evolution explores the state space. But they clearly do work fairly well, and parameter tuning is used merely to converge faster.

Given this, I think it would be more correct to say that there may exist some universes where science would be impossible because the initial conditions prevent people from fully exploring the state space. That position isn't self-evidently false in our reality.
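
To make the hill-climbing analogy concrete, here is a minimal sketch in Python, with a toy single-peak objective I made up purely for illustration. At no step does the search have any "free choice" about where to go next, yet it still converges:

def hill_climb(objective, start, steps=1000):
    """Purely deterministic local search: the next move is fixed
    entirely by the current state, with no freedom anywhere."""
    x = start
    for _ in range(steps):
        best = max((x - 1, x, x + 1), key=objective)  # inspect both neighbours
        if best == x:
            break  # local optimum reached
        x = best
    return x

def objective(x):
    # Toy objective with a single peak at x = 17.
    return -(x - 17) ** 2

print(hill_climb(objective, start=-500))  # reaches 17 without ever "choosing" freely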

In other words, the output of my simple random-number generator includes a kind of real world macroscopic information that isn't the outcome of any computation, and that per Conway and Kochen presumably did not exist until at least one of the two measurements were taken.

The non-local contextuality of QM, which is what you're referring to, does not imply that the outcome is not a result of a computation. If it did, then no deterministic interpretation of QM would be possible, but in fact we have many such interpretations. The non-computational character is only implied if you already axiomatically accept indeterminism.

To me, that aspect Hooft's interpretation -- assuming I haven't mangled it too badly -- is essentially just verbal wordplay that adds nothing to what we already knew.

't Hooft's approach is more ambitious, but it doesn't add any predictive power. At best, it might add some explanatory power, depending on how it turns out. What's novel about it is that while superdeterminism has been debated in philosophy, this would be the first actual scientific theory based on it (that I know of).

may depend in essential ways on the fine and irreducible details of human biological reality.

I don't know of any non-fallacious reasons why human biology may be irreducible. Perhaps you have a link?

a) 't Hooft accepts the Free

a) 't Hooft accepts the Free Will Theorem as a constraint which his interpretation of QM must satisfy.

b) 't Hooft's notion of determinism amounts to saying that if we could stand at the end of time with an account of everything that happened, and call that the "initial conditions" present at the beginning of time, then hey, that'd be determinism!

Suppose you have a multiverse propagating under deterministic rules, but such that whenever Bell's theorem would be violated, that branch of the multiverse ends. Is it wordplay to call that system deterministic? I don't think so, and as Sandro says, some such (presumably more subtle) explanation could be convincing if it was simple and jibed with experiments.

Here's 't Hooft's latest

Here's 't Hooft's latest work in which he informally discusses his approach, including the latest results. He's basically proven that supersymmetric string theory "is mathematically equivalent to a deterministic cellular automaton that only processes integers, in discrete time steps". So incomputable reals aren't needed for any physics implied by string theory. That's pretty significant.
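
(For anyone who hasn't seen one: a "deterministic cellular automaton that only processes integers, in discrete time steps" is a very simple kind of object. The toy Python sketch below runs the standard elementary Rule 110; it is obviously not 't Hooft's construction, just a picture of the general notion.)

RULE = 110  # any elementary rule number 0..255 would do

def step(cells):
    # Each cell's next value is fixed by its 3-cell neighbourhood,
    # read as an integer 0..7 indexing into the rule's bits.
    n = len(cells)
    return [(RULE >> (cells[(i - 1) % n] * 4 + cells[i] * 2 + cells[(i + 1) % n])) & 1
            for i in range(n)]

cells = [0] * 64
cells[32] = 1  # single seed cell
for _ in range(30):  # 30 discrete time steps
    print("".join("#" if c else "." for c in cells))
    cells = step(cells)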

There's lots of other interesting research on the fundamentals of QM too, like derivations of QM from information theory that are more understandable than typical formulations. The influence of computer science on physics is really starting to pick up steam. It's an exciting time!

qualia, thinking etc.

I teach whole courses on this stuff (poor me), and IMO the main barrier to reaching agreement is a very entrenched lack of agreed terminology.

Pretty much all words used in philosophy of mind are AMBIGUOUS between a functionalist interpretation (VERY roughly, that means a behaviourist or physicalist interpretation, often but not always a reductionist one) on the one hand, and a mentalist interpretation on the other hand. (What I mean by "mentalist" is harder to gloss, but think Berkeley or Nagel if you know them.) And I mean what I say: not only are the interpretations contentious, which is not so bad, but, much worse, the very words used to describe the interpretations are all ambiguous.

So, sadly, I doubt this topic is going to get anywhere on a programming languages forum, although NOT for lack of intelligence on the part of the commenters. Not at all. Just from lack of agreed terminology.

authority

And BTW, I realise that I'm making an argument from authority, which can be very annoying. :-) But I'm only claiming to be an authority on the lack of agreed terminology, not on the main contentions that other people have been commenting on.

IMO the main barrier to

IMO the main barrier to reaching agreement is a very entrenched lack of agreed terminology

That's a very academic thing to say, but somehow a nice reductionism sui generis. If only all people could agree on the precise meaning of words, the world wouldn't be an essentially better place, but at least it would be less "confused". Although being confused is one of the things in my life which I fear the least, I see that this lowest of all intellectual ambitions has its merits where debates are accumulating cruft much faster than they are moving forward by an inspired liberation.

I don't wonder that the philosophy of mind is a tower of confusion, and therefore I once decided to withhold judgment until I notice machines to which I'm willing to ascribe proper thinking, in the same way that I today ascribe them "computational" capabilities. This is little more than a change in psychological bias, a belief which is strengthened through social feedback, including expert opinions which might sedate my doubts. Turing anticipated this with the idea of the imitation test. The imitation test is a kind of theater, a scripted dramatization for figuring out what we are willing to ascribe to others. In the event of a fair contender we might vary the stage or produce situations of pure, unstaged drama. This will take a while, and only in that process will a civilization be conditioned to change its beliefs, and me within it.

For the same reason, reductionism or computational metaphors might be overrated. In the absence of such events they attract much attention, because all we have is ideas, arguments and "promising" research programs. Once we can form beliefs around events, situations, and material and communicational realities, they begin to fade into the background.

I think that's right

I think that's at least part of what Hofstadter meant (and that's who Ehud was quoting). But I don't understand WHY Hofstadter thought that particularly applied to implementation details. Seems to me you could understand ALL of your implementation details, piecewise, without being able to put them together into a complete and non-self-contradictory system of logic.

That's a fact about logic rather than about computation. And although of course logic and computation are related, computation can be considered piecewise, whereas logic, arguably, can't, especially in the context of Gödel's Theorem.

I was channeling Hofstadter

I was channeling Hofstadter (I think). I couldn't put my hands on the exact quote; I hoped someone would correct me. While I don't necessarily agree with all his arguments, DRH does support his claim with a rich analysis of cognition.

good topic

Ah! Yes. I agree that he gives some interesting details. Very hard to be sure about the conclusion, though. From memory, I think he leaves quite a few parts of his argument unfinished, and understandably so because they're so difficult. For example, the kinds of self-reference he's interested in look very different in different systems of logic, and who really knows which systems of logic are best for philosophy of mind? I certainly don't. Your thoughts very welcome.

More fundamental than DNN

How DNN training works is not relevant to my position.

Secure PL is needed for a simple reason: some AIs will not trust other AIs. (We won't just have one big universally trusted AI.) Isolating partial failure is needed for a simple reason: it enables hill-climbing approaches and monotonic heuristics when systematically generating and exploring code to solve a problem. You mention use of 'bit vectors', and that's related to the third of my points: a bit vector enables simple exploration of a solution space. But a bit vector certainly isn't the only viable design; any concatenative PL could work pretty well for genetic programming techniques, for example.
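
To make that third point concrete, here is a minimal sketch in Python of genetic-programming-style search over a toy concatenative stack language. The particular ops, program length, and fitness target are arbitrary, chosen only for illustration:

import random

# Toy concatenative language: a program is a flat list of tokens,
# so mutation is trivial and an ill-formed program fails in isolation.
OPS = ["1", "2", "3", "dup", "add", "mul"]

def run(program):
    stack = []
    for op in program:
        try:
            if op == "dup":
                stack.append(stack[-1])
            elif op == "add":
                stack.append(stack.pop() + stack.pop())
            elif op == "mul":
                stack.append(stack.pop() * stack.pop())
            else:
                stack.append(int(op))
        except IndexError:
            return None  # stack underflow: isolated partial failure, not a crash
    return stack[-1] if stack else None

def fitness(program, target=36):
    result = run(program)
    return -abs(result - target) if result is not None else float("-inf")

def mutate(program, rng):
    i = rng.randrange(len(program))
    return program[:i] + [rng.choice(OPS)] + program[i + 1:]

rng = random.Random(0)
best = [rng.choice(OPS) for _ in range(8)]
for _ in range(2000):
    child = mutate(best, rng)
    if fitness(child) >= fitness(best):  # monotonic, hill-climbing-style acceptance
        best = child
print(" ".join(best), "=>", run(best))  # best program found and its value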

My position isn't specific to deep neural networks. It holds for any AI solution in a multi-agent world: we still need good PLs with respect to various characteristics. Of course, they won't have the same priorities as human PLs, but I believe there will be a lot in common.

We will know AI has arrived

We will know AI has arrived when they start debating type systems.

I thought LessWrong.com was a bunch of robots?

*shrug*

The washing machine tragedy

From The washing machine tragedy

Snodgrass, however, completely ignored this appeal to the baser instincts of the public, and in the next quarter introduced a machine that washed, wrung, soaped, rinsed, pressed, starched, darned, knitted, and conversed, and -- in addition -- did the children's homework, made economic projections for the head of the family, and gave Freudian interpretations of dreams, eliminating, while you waited, complexes both Oedipal and gerontophagical.

Wait. Isn't that LtU?

You have to admit that a lot of the intelligence around here seems pretty artificial at times...

--
Readtables: what hygienic macros were meant to be.
(also the official bier of the Popular Front for the Liberation of Parentheses)

Let us not lose sight of INDECENT and forcibly IMMORAL languages

Let us not lose sight of INDECENT and forcibly IMMORAL (end user) languages :

Many (if not all) INDECENT languages make the most IMMORAL choices, and (cherry on top!) do so on behalf of the end user :

although they pride themselves on being, indeed, (more or less) statically-typed, they are also quick to make, quite obnoxiously (i.e., with no good reason whatsoever), the arguably questionable claim according to which :

"It's not a Bug. It's a Feature."

aka :

"War is Peace."

Quite irritating !

;p