PL's hotness challenge

Blog post on HN. Only the intro is related to PL:

I’m trying to get into neural networks. There have been a couple of big breakthroughs in the field in recent years, and suddenly my side project of messing around with programming languages seemed short-sighted. It almost seems like we’ll have real AI soon, and I want to be working on that. While making my first couple of steps into the field, it’s hard to keep that enthusiasm. A lot of the field is still kinda handwavy: when you want to find out why something is used the way it’s used, the only answer you can get is “because it works like this, and it doesn’t work if we change it.”

Putting my ear to the ground, I hear that competition from ML has become more and more common, not just in PL but also in systems. I know many researchers who are repurposing their skills right now, and it has become more difficult to find grad students/interns.

So, is this something we should be worried about? What could happen to make PL more competitive in terms of exciting results/mindshare?

What could happen to make PL more competitive in terms of exciting results/mindshare?

Better title for the discussion, methinks...

Sure.

Sure.

Neural networks are not a panacea.

As someone who studied neural networks, not a lot has changed since Kohonen's associative memories from the 1970s. There have been a few interesting things like sparse coding, but again I am not sure there wasn't already something like that in Kohonen's self-organising maps. What has happened is that by training one layer at a time, we have been able to achieve something much closer to the theoretical potential of multi-layer networks (and this new way of training multi-layer networks is what people call deep learning).

So we now have hierarchical classifiers that can self-organise and actually work. There are many applications for this technology, but the code still needs to be written in a language. As most of this code is matrix and vector operations, the PL angle might be to develop a language that facilitates writing efficient, simple-to-read NN code, and that allows design parameters to be experimented with easily.
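
For what it's worth, here is a minimal sketch of the kind of matrix-and-vector code I mean, in Python with NumPy (the function and variable names are mine, purely for illustration). A single fully connected layer's forward pass is just an affine map followed by an elementwise nonlinearity, and a good NN language would let you say exactly that and no more:

    import numpy as np

    def dense_forward(x, W, b):
        # One fully connected layer: an affine map followed by an
        # elementwise nonlinearity (tanh here).
        return np.tanh(x @ W + b)

    # A batch of 4 inputs with 3 features, mapped to 2 hidden units.
    rng = np.random.default_rng(0)
    x = rng.standard_normal((4, 3))
    W = rng.standard_normal((3, 2))
    b = np.zeros(2)
    print(dense_forward(x, W, b))

Even in this toy, the interesting design parameters (layer width, choice of nonlinearity) are tangled up with plumbing; a language that separated the two cleanly would make experimenting with them much easier.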

In any case, where we have an exact algorithm for doing something, it is much more efficient to use it. The AlphaGo victory benefitted from deep learning but also used classical AI techniques, like upper-confidence-bound tree search.
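
(For the curious, the selection rule at the heart of that tree search is UCB1: pick the child maximizing mean reward plus an exploration bonus. A minimal sketch in Python; the Node class and all names are hypothetical, just to make the formula concrete:)

    import math
    from dataclasses import dataclass

    @dataclass
    class Node:
        visits: int
        total_reward: float

    def ucb1_select(children, parent_visits, c=1.414):
        # Pick the child maximizing mean reward plus an exploration
        # bonus that shrinks as the child is visited more often.
        return max(children,
                   key=lambda ch: ch.total_reward / ch.visits
                       + c * math.sqrt(math.log(parent_visits) / ch.visits))

    # Three children with different visit counts and rewards.
    kids = [Node(visits=10, total_reward=6.0),
            Node(visits=3, total_reward=2.5),
            Node(visits=1, total_reward=0.9)]
    print(ucb1_select(kids, parent_visits=14))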

How successful a person can

How successful a person can be in a field depends on how much fruit there is to pick, but also on how much competition there is. I think that compared to the amount of fruit, deep learning is going to be very overcrowded soon. Deep learning experts also say that the amount of fruit is overstated: they think that general AI is still very far off, Kurzweil notwithstanding. If they are right, then the low-hanging fruit is going to be gone soon, and the hype will die down with it, leading to AI winter 2.0. If Kurzweil is right, then it is of course the most important thing you can possibly work on.

Surely, but it is already

Surely, but it is already sucking away a lot of mindshare/talent. Even if there is still room for PL afterwards, the size of the field will be reduced, while there will be a definite stall in the talent pipeline.

PL gods got top jobs in SV

PL gods got top jobs in SV before AI gods, right?

But they are now working on

But they are now working on AI....

Really? Names? The people I

Really? Names? The people I have in mind are still working on languages.

Jeff Dean and Sanjay

Jeff Dean and Sanjay Ghemawat are both PL PhDs. Heck, look at any big Google project in systems or ML, and more often than not I'll see names of PL people I ran into in grad school.

Behind every great man

For every data scientist, there is a data engineer. Not worried ;-)

Sure, it's all about

Sure, it's all about perception and the talent pipeline, not the reality.

Scale is fun & hard

There's a lot of enthusiasm about running things at crazy scale, including for powering AI, so at least that side of PL isn't lacking in attention. There's a broken marketing component where the language design is being done by the systems + databases designers, and PL people come in as cleanup (as usual), but that's a known & long-running problem :)

Right, but if you were a

Right, but if you were a new graduate student, or a senior undergrad looking to take a non-core elective, what would you choose?

Anecdote

Though I suppose this is a bit tired, since I've mentioned it before on LtU (possibly even more than once). As an undergrad in the mid-1980s, I mentioned to a classmate that I was serious about programming language design, as in, focus of academic career. He told me there was no point to that speciality, because in a few years (I forget if it was ten or twenty he mentioned, moot point since that was thirty years ago) all computers would be optical computers that program themselves. I told him, if in the future computers wrote programs, then they'd want a really good programming language to write in.

Ditto when Engelbart did his

Ditto when Engelbart did his mother of all demos: many of the CS academics said "what is the point? Full AI will make all this work obsolete in a few years." That was 1968.

But that isn't the point: surely PL will be important for a few more decades at least, so what can we do to show hope/opportunity in the field now? You can't expect students and enthusiasts who are choosing specializations to get it on their own.

Not my point either

Underestimating the time till full AI wasn't my point, either. My classmate figured that once computers program themselves there would be no need for PLs. I conjectured that any sapience capable of the fully general task of programming would be similar enough to the human mind that it would want to use a PL. I'm still not convinced otherwise, thirty years later.

True

But surely the great AI mind will just invent that better programming language, too. That's at least five years away, though.

also, jetpacks

I've wondered why it seems so often taken as given that we'll build AIs smarter than we are, when it's yet to be demonstrated we can build ones anywhere near as smart as we are — for general purposes; special-purpose problem-solvers being quite a different sort of problem. Though that might be the explanation, right there: we don't know what to do with the difference between special-purpose problem-solving and general-purpose intelligence, so we ignore it. Or maybe it's just healthy paranoia, like cautionary fiction about supersmart AIs taking over the world, when history doesn't exactly give unmixed signals that the smartest people end up in charge.

I don't think AI will evolve

I don't think AI will evolve into something similar in form to the human mind; that sounds very creationist to me. AI will probably just create languages on the fly for the problems/learning at hand, since those languages don't have to travel from computer to computer (imagine if we were programming our reasoning directly, rather than trans-coding it for a computer to follow), and whatever languages they come up with will be completely opaque to us, like NNs already are.

But that is far in the future.

smartness parameter

Some people suppose sapience is a property that applies to vast swaths of algorithms, one family of which happens to be us. This naturally lends itself to the further supposition that different algorithms have different levels of sapience with no particular limit on how "smart" a sapience can be; and since we're used to optimizing algorithms, presumably once we know how to build a sapient algorithm we can just tune it to be arbitrarily smart. I don't buy into that, though. I currently favor the conjecture that sapience is a peculiar property only arising within rather limited bounds; in which case there may be fundamental limits/trade-offs in how "smart" it's possible for a sapience to be, and instead of being a simple matter of setting a smartness parameter it may be deucedly tricky to balance well enough to work properly at all.

There is no algorithm for

There is no algorithm for sapience, just like there isn't one for speech recognition. Instead, it's just a bunch of pattern matching and signal filtering done on a large scale. We can't tune these directly, because they actually aren't algorithms at all, but something else completely.

Algorithms are just the wrong way to think about it.

Not a matter of scale

Ah, the critical-mass theory of sapience: just keep making it bigger and it'll become sapient. I've discussed elsewhere why I figure that theory would be attractive atm despite lack of evidence for it (blogged, yonder).

In the earlier comment I didn't say there was an algorithm for sapience, actually; though there may be some breakdown here due to differing understandings of what "algorithm" means. Particularly, how rigidly limited a way-of-doing-things has to be to qualify for the word. In the earlier comment I was using the term pretty loosely. Granted, my conjectured "trick" to sapience likely seems more algorithm-like than the "just make it bigger" view.

This is true.

You're right that it isn't purely a matter of scale.

Scale is necessary for intelligence. When I compare the intelligence of programs or systems to the intelligence of creatures, that's a measurement on a scale of complexity and of the extent to which its application is apposite to purpose. But it's possible for something to be very smart on that scale without being sapient. Conversely, while some intelligence is required (an extravagant amount by computer-hardware standards), it's possible for something not nearly as smart as us to be fully conscious.

We do not doubt that a cat or a dog or a hamster is sentient - that it is self-aware, has emotion, is capable of pain and pleasure and fear and joy and subjective experience/qualia and so on - even when it has not the power of symbolic thought or symbolic expression. (I probably spend way too much time thinking about what goes on inside cat brains).

Right now, I see the best artificial-intelligence systems we've produced being almost as smart as a gecko. In some cases it's even a specifically symbolic intelligence, which is different in kind but not scale from what geckos have - but so far they are less sentient than an earthworm.

Consciousness does not result from scale. It does not result from intelligence. It does not even result from symbolic intelligence. With a strong symbolic intelligence and nothing else, you'd get a smart thing that had nothing to say, no interest in saying it, and no awareness of itself or its experience. It would have no need to refine its abilities, and no desire to do so. In fact, no desire to do anything at all. And it would be about as conscious as an earthworm.

'smart' is hard to define

We likely disagree a bit about what sapience is. "Symbolic thought" is another name sometimes used for what I mean by sapience, though of course it's not what's meant by "symbolic" in the term "symbolic AI". A cat or dog or hamster doesn't have it (though I agree they have properties that allow us to empathize with them; there's another dimension of interest there, and one that could matter hugely for the future of civilization). I'm currently inclined to the theory that while sufficient scale is needed to support sapience, too much scale can also prevent it, so that in a really vast system (say, the entire internet) a sapience would always be a component, never the whole.

[edit: Well... "symbolic thought" is another name for more-or-less what I mean by sapience; I'm not sure even that is quite the same thing, perhaps because we can't say exactly what any of these terms mean at our current level of understanding.]

So far, the only animals

So far, the only animals that we recognize to have symbolic thought capabilities besides humans are parrots. See http://www.livescience.com/22178-parrots-reason-three-year-olds.html. But I guess the only reason we can recognize it is that parrots mimic, whereas other animals communicate in ways completely foreign to us.

Speech, language, objects, functions...it's all related.

There's a long tradition of

There's a long tradition of bad news coverage of scientific studies of animals. As well, of course, as bad news coverage of other scientific studies, but the sort of badness varies with the sort of science; ones about animals are generally sensationalized as such-and-such-animals are as smart as X-year-old human children. The actual scientific results are nothing like the sensationalism.

I get the feeling that you

I get the feeling that you think humans are incredibly unique in their sentience, which you see as an obviously binary quality. Those who believe in such human exceptionalism will see any study on animal intelligence as a fraud, since it goes against their faith/beliefs. Fair enough, but I'm only interested in science, not religion.

Downvote

This post was terrible. Incoming tomatoes.

Those are some rather

Those are some rather extraordinary extrapolations from my remark that the news media doesn't do a good job of covering scientific studies.

It's not just this remark,

It's not just this remark; you've made it clear that you believe AI is limited in general and human intelligence is exceptional. There are plenty of scientists out there doing experiments and getting results with animals and such. You can read about parrots with abilities equivalent to a human three-year-old's, and wonder what else we are missing simply given lack of communication (crows, dolphins, elephants). It's not very surprising that animals can do these things, actually; it would be much weirder if we were exceptional rather than just more capable/sentient. Then, yeah, human intelligence in computer form is probably just a matter of scale and some techniques of topology. Of course, that scale might still be far away.

But really, we don't even need to argue about this, especially here. That others see enough hope in AI to divert a lot of resources and mindshare (including those that would be spent on PL) to it is good/bad enough.

But human intelligence evidently is exceptional.

Does it seem at all weird to you that, in the three point seven billion years of life on this planet, we are the first species that apparently does what we do?

Does it seem even weirder that when we look out into the void we can detect absolutely no evidence of any species that does similar things, in billions and billions of places where such species could be?

Something went in a very strange direction on this planet. There's absolutely no evidence that human brains do anything that can't be explained by the theory that they are made of atoms acting according to the laws of physics, and on small organizational scales (neurons, columns, synapses, sensory integration structures, etc) it seems to be working with the same structure as the brains of all the other creatures on this planet.

But something is for darn sure exceptional, rare, strange, or downright peculiar about us as a species, or else the galaxy would be filled with radio signals and our geological strata with the evidence of previous intelligent species and their civilizations.

Every species is

Every species is "exceptional" as it evolves to fit its niche, but we are still basically the same as other mammals, genetically (99%?). We are just adapted in a slightly different way, which means that if/when we ever reach the point of a dog or even a mouse brain, we are 99.9% of the way there to a human brain.

If aliens visited our planet, they might guess the dominant species was ants just from their biomass readings. And without our limited perspective, they would be unable to distinguish our impact from the ants', and would probably just take our chatter as noise, like the noise of the ants.

Human, or even mammalian, intelligence also is not "better" than other forms of intelligence, nor is it even necessary for creating complex systems. Again, ants are very simple/stupid individually, yet colonies of millions of ants are very clever. If we are building AI at the level of a singularity, we don't even have to build human-level sentience; ant-level would be more than enough in aggregate, assuming collective behavior was sussed out.

Yeah, but humans are bizarre.

Sure, every species is exceptional in that it evolves into its niche. But humans are WEIRD.

But as far as we can tell, in the whole of the GALAXY, we're the only species that has evolved into THIS niche. Why should we be the ONLY ones to develop the kind of intelligence that invents radio transmitters?

We think it's four hundred thousand years since anatomically modern humans appeared, and only about forty thousand since serious evidence of our particular kind of intelligence is found in the archaeological record. It's been ten thousand times that long since animals capable of supporting our kind of brains have existed. Why the heck, in a thousand times the length of time our species has even existed and ten thousand times the length of time this type of intelligence has been fully developed in our species, are we the first to develop this particular type of intelligence?

There have been thousands of species of browsers and grazers and chase predators and herd animals and pride hunters and pack hunters and ambush hunters and frugivores and insectivores and scavengers and so on. Every OTHER niche we can think of has many representatives. Even very specific hyperspecialized adaptations like sabertoothed cats have evolved independently dozens of times. But this type of intelligence? OUR niche? We're the only ones ever. And not just on this planet. As far as we can tell we're the only ones in the galaxy to have ever invented radio. And we have no idea why.

We barely know about life in

We barely know about life in the galaxy and universe. For all we know, life in general is rare, and given the addition of a huge time dimension to a huge space dimension, it is completely possible that it's just Earth right now for the entire observable universe.

There are many kinds of intelligences, so that human-like beings didn't evolve before shouldn't be that surprising; it's like the odds of shuffling the cards the same way twice. And anyways, we had plenty of competitors in our own niche, even intelligent cousins, but we just out-competed them and killed them off.

It is also completely possible for an advanced society of different animals to have existed long before us on Earth; given the time involved, it could have come and gone, all traces wiped away, or any traces remaining looking like noise to us. Say it happened in the ocean... they could have gone to space, invented communication... developed completely different tech trees. But since we don't know what to look for, and we are self-centered on our own perspective, we will simply never recognize what once was.

Mass extinction events

We assume that mass extinction events in the fossil record are the result of external factors (meteors, climate change, etc.). But that is only an assumption. The main impact that we have had as a species is mass extinction and climate change. I wonder what our record would look like a million years from now, if all technological artifacts have been eaten by time?

We'd be pretty obvious.

Mass extinction, climate change, pollution, and the very obvious depletion of mineral resources. The distribution of coal beds, salt beds, and quarries all show VERY obvious signs of our passage, and those signs will still be very obvious eighty million years from now. And our atmospheric pollution is going to be a very detectable "bright line" in the fossil strata.

Oil fields, on the other hand, wouldn't be nearly as obvious; it would only be after the development of significant physical understanding, and the discarding of several simpler competing theories, that a successor species would figure out why there wasn't nearly as much accessible oil around as their models predicted.

Why would models predict otherwise?

My knowledge of geology is pretty limited. But did we (as a species) not base our models of formation on the empirical distribution of resources that we found? I'm surprised that we have such sophisticated models of planet formation that they give us strong predictions of what distribution there *should* be....

Does it seem at all weird to

Does it seem at all weird to you that, in the three point seven billion years of life on this planet, we are the first species that apparently does what we do?

Perhaps our species is exceptional, but not necessarily our intelligence. Consider the possibility that we are no more intelligent than dolphins, but other circumstances of our makeup mean we can apply this intelligence to greater effect. For instance, our opposable thumbs make us far more flexible than a dolphin's anatomy.

Any human-level intelligence lacking our fine dexterity probably won't look nearly as exceptional relative to the rest of the animal kingdom when viewed over the same timeline that humanity has lived. So the appearance of "human exceptionalism" is probably a convergence of multiple factors, not just intelligence.

I don't doubt for a minute

I don't doubt for a minute that dolphins are awfully smart - maybe as smart as we are. But their intelligence is different in kind because their environment and capabilities are different in kind.

My opinion is that you don't get a particular KIND of intelligence unless you get an organism capable of gaining advantage by that kind of intelligence, in the environment where that organism lives.

Dolphins would not gain nearly as much advantage from our kind of intelligence as from their own; therefore they have not evolved it. Or, to put it another way, their environment has not *PROVOKED* it in them.

Squid and octopi, on the other hand (or tentacle), are mobile, flexible, use habitat that rewards planning and intelligent use of cover, and have fine manipulation abilities. They definitely DO stand to gain significant advantage from something like our kind of anticipative, planning, speculating, tool-using intelligence.

But another major component of our own intelligence is social; that is, we amplify the advantage of intelligence by complex social structures that allow and encourage individual adaptive specialization. And while squid and octopi have been observed making and using tools, planning to some extent, and doing some puzzle solving, they have not (yet) been observed building the kind of social or tribal structures that would amplify the benefit. Which likely goes back to reproductive biology, among other things; they're not exactly involved parents.

Yes, it looks to me like

Yes, it looks to me like humans have evolved to take advantage of cultural knowledge and gained tremendous abilities because of that. Some researchers (like Luc Steels) create experiments to find out what characteristics are necessary for a population to maintain language (our biggest store of cultural knowledge), and one idea from Michael Tomasello is that what is unique to humans is shared intentionality.

Shared intentionality

There does not seem to be a binary cut-off for shared intentionality. The richer the system of communication that a species uses the easier it is for us to spot. But intentionality can be communicated through quite simple languages - even those composed of simple gestures. Pack animals show evidence of sophisticated communication during target selection. Something that has been studied in a lab setting is the domestic dog. This summary of results was an interesting read.

Perhaps you are right that

Perhaps you are right that they do not have the same kind of intelligence; we don't know if intelligence works that way. But my point was that we do know one thing: expression of intelligence is limited by physical capabilities. Even if dolphins had roughly the same kind and amount of intelligence, it may not be apparent, because they cannot apply it to the same degree we do. They simply can't build structures, make tools, or do many other things people associate with intelligence.

If there is no way for

If there is no way for dolphins to apply their intelligence then there also isn't any evolutionary pressure for them to evolve to be intelligent.

Just because they can't use

Just because they can't use tools or other immediately recognizable signs of intelligence, doesn't mean intelligence wasn't advantageous in ways that aren't immediately recognizable.

Trolley Ride

Tying this back to programming languages - has anyone tried to teach programming to dolphins? What would a programming language for dolphins look like? What does that tell us about programming languages for space aliens?

Hmm. That IS an interesting question...

What does a programming language look like if it's built by organisms that think in different *patterns* from human beings? Do we express abstraction in a mode less noun-oriented (man as tool user) and more process-oriented (dolphins as direct actors)?

Does location/time become a more appropriate metaphor than desktop/toolbox? And how is location conceived anyway? Focal sight is only about a 30-degree cone, so we look in many different directions from the same location and perceive it differently depending on the direction we're looking. So for us, location is already an abstraction with multiple aspects. But to something that has echolocation (a 360-degree sense), is location more appropriately conceived as the shape of the containing volume?

And how would a location/time rather than object/toolbox metaphor be expressed as a means of abstraction?

This ... is interesting. It's a more directed and considered esoteric-programming language challenge than most. Usually the esoteric programming language guys are trying to create tar pits, or BAD languages; it would be genuinely different to create a GOOD language for creatures who think as we do not.

I don't know whether to consider this a conlang challenge or a PL challenge. Or both.

navigation-oriented structure?

Somewhere between conlang and esoteric PL, seemingly. I've a conlang project whose speakers are aquatic air-breathers descended from land animals; though unlike dolphins they have manipulatory organs comparable to our hands (which my background document claims is a prerequisite to language, while noting there's no agreement on why it's needed). No grammatical notions of noun or verb (though that was the starting point and the speakers were designed to fit it, rather than starting with that sort of speakers and deriving the no-noun/verb from their character), all content words are vectors. It had already occurred to me that category theory might be elementary math for them, developed earlier than calculus; ironically I hadn't thought about it from a programming perspective.

Space aliens

My own language project has been largely motivated by a desire to produce a language that is properly accessible by space aliens. This has provided a valuable unifying framework. For each new feature, I ask myself: how will this be received by space aliens? It now occurs to me that dolphins may be the perfect test subjects for my language. Maybe as Sea World transitions to fewer captive creatures, I'll be able to acquire a few dolphins on the cheap.

a programming language for a dolphin

A fish that takes commands by squeak?

Radio signals?

Oh come on. The martians had radio long before us, and replaced it with optical fibres.

The grain of truth in what

The grain of truth in what you said is that I do think, as Ray Dillinger so aptly puts it above, "something is for darn sure exceptional, rare, strange, or downright peculiar about us as a species". You also, however, accused me of denying the science when I pointedly criticized only its news coverage, accused me of an opinion-based rather than fact-based approach to reality (I'm quite personally committed to combatting the opinion-based cancer in society), and painted this moderately-strong atheist as religious.

Apologies

I admit my remarks were crass. Whenever we have this argument I get a bit charged up. Definitely there is a lot of room for debate on both sides. Human exceptionalism is the go-to argument for ID people, so I fallaciously assumed that was your position, and I shouldn't have.

Accepted

It's a pretty intense topic.

Intense and external

(And I suspect it's not particularly relevant to LtU; I'm always surprised when we get super-long threads about quantum physics or sapience. Whatever people are interested in, I guess.)

also true

Well, sometimes these things do manage to loop back into the focus of the site (which is presumably part of why Ehud is so benevolently patient with such digressions); and certainly this one started out as a natural adjunct to the clearly-PL-related topic. But, yes, this one doesn't look like it's about to loop back.

Although perhaps, if it keeps happening, you shouldn't be so surprised. :-P

Agreement on terms...

When I'm speaking very precisely (although, alas, I often fail to be precise) I use different terms for different aspects of what we're talking about.

"Sentient" refers to emotions, desires, first-person perceptions, and what we are pleased to call consciousness. Dogs, cats, hamsters, and people - but so far, not computer programs.

"Intelligent" refers to integration of perception and to the complexity of resulting action or response being well adapted to purpose or need. Different animals have different levels of intelligence. This is what I'm talking about when I say that ELIZA was as smart as grass, that Loebner Prize chatterbots are about as smart as clams, or that Google's self-driving car is probably smarter than a cockroach but not as smart as a gecko. Neural nets are to some extent intelligent, and also capable of "learning" but as usually implemented (feedforward) they can only learn reflex actions - the sort of thing that requires no consciousness at all.

"Reasoning" or "Symbolic Thought" or "Abstract thought" is a particular kind of intelligence, exceptionally good for anticipation, planning, dealing with complex relationships, and tool use. For all that we've tried to emulate it with Prolog and CLIPS-like systems (and ultimately quixotic projects like Cyc), we've gotten not very far with it. None of our "formal" reasoning systems has exhibited even insect-level intelligence even though in many cases they've been more complex than insect brains, and I think it's because Symbolic thought is almost useless without Sentience to guide it.

Finally "Sophonce" is Symbolic Thought integrated with Sentience. So far humans are the best at it that this planet has produced, with some apes and maybe parrots and marine mammals dipping a toe in from time to time but never seeming to go further than that. It requires a big brain which is a huge metabolic investment, but so do echolocation (marine mammals) and deep memory (elephants) - those brains are bigger than ours, and certainly sentient - possibly as intelligent as us, but their intelligence is different in kind.

I think it's because these other species don't get any big immediate advantage in increasing their abstract reasoning capability if that increase comes at the expense of any of the other capabilities they're using their brains for. Abstract thought does an elephant little good, if a drought comes up and she's forgotten where the water holes were in the last drought ten years ago. And if an orca ever gets disoriented because part of that large brain capacity is re-adapted from echolocation to symbolic thought, those genetics won't be passed on to the next generation. But look at us upright tool users with fine manipulative abilities and complex social organization - especially complex social organization that allows adaptive niche specialization. All of those things amplify the benefit we get from sophonce.

sophonce

Interesting. When you get to sophonce, we part company, not over terminology per se but really over a point of theory. You're defining the term sophonce as an intersection of two other things, and I don't think one can ignore the value judgement in that; it implies that the most name-worthy concept in that neighborhood is this intersection of two other things. Whereas I've become increasingly inclined in recent times to the conjecture that there is some specific technical peculiarity about what-we-do-that-thus-far-computers-don't. So for me the most name-worthy concept in that neighborhood is the technical peculiarity. Which, of course, I can't specify precisely or I would probably already be building an AI based on it.

The definition is not the reality.

Hmmm. I think that it was important to me to realize that there are different kinds of intelligence rather than just different degrees. There are lots of ways to produce nuanced, adaptive, responsive and sophisticated action.

What distinguishes the particular KIND of intelligence that humans have is an interesting question, but right now I'm more focused on the question of what sentience or consciousness is, than on intellect.

Of course, this probably arises from my belief that sentience is probably the thing required to drive abstract reasoning in a way that turns it into a real form of intelligence.

And of course little of this means much of anything, because we're throwing around terms we do not have good agreed-upon definitions for. 'Consciousness' for example can be defined by different people in ways that make it laughably trivial (Arthur Murray) or completely impossible (John Searle).

Incidentally, I tried to respond to you on your blog, but I don't do social media, and there seems to be no other way to authenticate.

I conjecture there are

I conjecture there are properties of human thinking that arise due to the technical peculiarity of sapience that probably are not achievable, at least in combination, in any other way that has been tried or approached on this planet. Grasping for the properties feels rather like reaching under the bed for something that's just at the ends of your fingertips, and you're trying to coax it closer without accidentally pushing it further out of reach. But I don't think the two you mention are the complete set (not sure if they're in the set, but there's more to it), and the whole isn't cumulative as an intersection of properties would be because there's that technical peculiarity giving it non-cumulative coherence.

(I've sometimes wanted to comment on posts on blogs where the software said I'd have to create an account, and the last thing I need is another password to keep track of. Ain't technology grand?)

Alarmed

My intuition, on seeing deep neural network image processing and applications such as the Google car, is that we're surprisingly on the actual cusp of human-level-intelligent machines. I think we're going to be blindsided when they hit soon.

They might even be hacker creations rather than corporate ones.

Know thyself

Machines have been exceeding human capabilities in sufficiently specific tasks for a long, long time. We routinely measure intelligence by focusing on specific, well-defined tasks, which is great if your objective is to find ways to do more without humans. But show me a specific well-defined task, and I'll show you something that can be done by a non-sapient machine — to emphasize: not just done by a machine, but done by a non-sapient one. Presumably a sapient machine could do it too, but those tasks aren't criteria for sapience. Sapience comes into its own when the task isn't specific, isn't well-defined, and it's sure not obvious how one would, or even whether one could, create an effective standardized test of aptitude for coping with non-standard situations. When we ask what we're good at that machines aren't, it seems we shouldn't expect the answer to be some specific, well-defined task; that's why I simply wasn't all that impressed by computers winning at chess, or jeopardy, or even go. I figure understanding how sapience works could be useful in working out how to recognize it, and how to recognize its absence.

sapience, intelligence, etc.

I suspect that, in creating machines capable of adapting to their environments and learning through example, we'll gain a lot of insight on the nature of human 'sapience' and 'intelligence' and even imagination or dreams.

I would not be surprised to learn our current terminology on the subject of human intelligence is imprecise, inappropriate, or misleading. Analogous to using alchemical terminology before the development of modern chemistry. "How sapience works" may be entirely irrelevant, or may be emergent from a few separate elements like flocking of birds and the 'intelligence' of slime mold.

Perhaps so. Although,

Perhaps so. Although, following the analogy I don't think chemistry has been developed yet. As I've remarked, I'm inclined to think there is something distinctive in the way our minds work that is thoroughly lacking in the — in some ways extremely impressive — AI technology we've developed, and are developing. It's hard to talk about the something-distinctive without giving it a name; I'm using "sapience".

we'll gain a lot of insight

we'll gain a lot of insight on the nature of human 'sapience' and 'intelligence' and even imagination or dreams.

One interesting constraint that I recently discovered is that even human sentience does not require the ability to visualize or dream. A fascinating condition, but implies that sentient machines don't need the kind of imagination we often think they would.

I've heard similar

I've heard similar claims.

It's a bit strange. Can't we work out what such a future will look like, by simulating programming without a computer right now, just by getting two people together? In fact, isn't this a simulation already being run over and over, whenever you get a computing expert + a domain specific expert together?

We already know what the results are, and it's not a future without a general purpose language for expressing logic.

That language may not require that humans know how their abstract logic maps to computer architecture - i.e., it removes the computing expert - but is that the extent of the claim? Just that it's not a programming language in the 80's, 90's, 00's, 10's (pick your decade) sense of the word?

A theory

So here's a conjecture. If you are not top-notch, and maybe even if you are, depending on how the field evolves, working on AI/ML may end up leading you out of CS. Your job may be cool and interesting, but it may be data science in a Stats department, doing BI or what have you. Maybe making big bucks on Wall Street. PLT, for good or bad, is and will remain deep in the bowels and foundations of CS.

Will this line of argument work with people shopping around for grad school opportunities? I don't know.

PL != PLT, PLT is turning

PL != PLT; PLT is turning into its own thing separate from the rest of PL. I think the fork is almost finished.

CS theory/foundational topics have always been niche, rather small communities. They don't support conferences with 60+ papers and thousands of adequately funded researchers. It's like post-PC: it doesn't mean the PC is dead, it's just no longer a growing, dynamic market, and people are happy with the PCs they have... no need for many new ones (replace PC with PL grad student). That should be avoided.

PL and PLT

Programming Languages are related to human minds as chairs are to human butts. They are for humans to use and they have to fit.

Programming Language Theory is related to Programming Languages in roughly the same way that engineering mathematics and properties of construction materials are related to the design of chairs.

These are related but they are very different things. If you want to design a good, æsthetically pleasing chair using a minimum of perfectly chosen materials, then you need the engineering knowledge. If you happen to find something comfortable and useful for sitting on, and you aren't worried about its weight or wasteful use of material or how well it fits the space allotted for chairs in your office, then you have found a good place to sit, and maybe inspiration for a whole new line of chairs (and complementary redesigned office furnishings). The engineer may not find that it suits his idea of what a chair ought to be, but if he inspects it carefully he may wind up learning new things about the goals his engineering knowledge can be applied to design for.

Lisp is like a beanbag; push it into any shape you like, but it may not work all that well in systems that make fundamentally unsupported assumptions about the height and shape of chairs. Ada is like a straight-backed wooden chair - well designed and encourages good posture, but it's uncomfortable, and may not work all that well in systems that make a different set of fundamentally unsupported assumptions about the height and shape of chairs.

And PLT for these cases is so different that sometimes people working with one set of assumptions doubt that the languages used by the other are even programming languages at all.

Theory can tell you a lot

Theory can tell you a lot about the formal properties of something (e.g. a type system), but nothing at all about when or how to use it, especially in larger designs when creating experiences rather than...well...doing theory.

So if you want to know how to build a better chair, knowing that the frame is guaranteed to support 200 lbs is only a small part of that. The engineer must also design a good butt experience, and the theory can only tell them about absolute facts. But at least they would know how the chair will/won't break.

PLT seems so removed from much of the practice of abstraction design, especially when you aren't creating a statically typed language like Rust or Haskell. It seems like "maybe useful and definitely interesting", but it's hard to pick out what will actually make you a better engineer (and there is a lot to pick at).

For the record, I was

For the record, I was goading you...

By Poe's law, I can't tell

By Poe's law, I can't tell anymore :)

I've been told that I need

I've been told that I need to carry a bell (or, maybe, drop a mike when I'm done).

An occasion to reach inwards

I think it's perfectly fine that there is a new field that people are excited about. I mean, working on natural language translation is an extremely exciting idea, and we may be at the point where this finally works! This is not competition for PL research: PL research never worked on natural language translation. There is competition for academic resources (students, funding, positions, conference attendees or what have you), but I see this is a thing of life that is not going to change.

I wouldn't want someone passionate about natural language translation (or image processing or whatever) to somehow be convinced to work on PL instead; nobody wins. On the other hand, it is in everyone's interest to have fair communication on the interest and strength of each discipline, to help people make their choice of what to work on. We should make sure that we communicate effectively about what our field is about, and that we get the word out to as large an audience as possible. We should avoid bullshit marketing (a temptation that is strong as soon as research funding is involved), and I think it's also fair to discourage other fields from using it (I'm looking at you, "semantic web"), because we want everyone's choice to be as informed as possible.

But this should, first and foremost, be an occasion to reach inward and check that the research we are doing is as good as possible. Are the problems that we think are the most important really important? Do we understand where we are going and our relation with other disciplines and practical applications? It's fine to be a beautiful, strong niche field; it is wrong to be an anecdotal, weak niche field (UML ?).

The solution to having a healthy, thriving research community (and I'm not limiting the scope of this to academics only) is to do good research.

That is definitely part of

That is definitely part of it! But we should also think of the undergraduate pipeline. Check out HN's latest "Who is hiring?" thread. A CV with deep learning experience will get you at least an interview at 50% of the job postings, but PL? It is enrichment for sure, but it isn't something that anyone seems to be specifically looking for. The impact is then perverse: undergrads choose the ML elective over the PL elective and never even discover the beauty of our field; they might never make it to the point of checking whether we have interesting research when making grad school decisions.

Maslow

Dunno that 'we' can really do anything much about it.

PL has to have a clear way for newbies to see the connection to either (1) bigcashmoney (Facebook, Google, Apple are all about ML, not about PL in their blogs etc.) or (2) papers that somehow attract whichever sex / source of funding you want to attract (DoD wants to pay for ML, not PL).

'We' don't control those end points.

We can influence our

We can influence our destiny. For example, making PL more about intelligence augmentation (IA, the dual of AI: programming gives people power) and then making that work well could change our fate. More focus on new topics like live programming can also help (I'm biased, though). We could also just recognize the crisis at hand and begin to propose/experiment with solutions.

I guess PL still has a decent future if it becomes like foundational CS theory: not hot, but a decent if niche one. Maybe we could list the hot topics in our field? The last time we tried, nothing really connected.

only if you want to

Sure, it can be good to do better PR about what there is, and it doesn't hurt to consider if there's a better-in-some-multi-dimensional-sense project to work on.

Just please don't try to bend over backwards desperately attempting to come up with experiments that are about making PL look sexy vs. just doing what you personally value. :-)

Maybe we should remind

Maybe we should remind people of some old truths, you know, about statistics being applied probability theory, which is really measure theory. While the true fundamental mathematics is, of course, category theory. Or do you think that wouldn't sell? ;-)

MIRI is advertising for a type theorist position

Career opportunities at MIRI (Machine Intelligence Research Institute):

Research Fellow in Type Theory and Machine Self-Reference

MIRI is accepting applications for a full-time research fellow to develop theorem provers with self-referential capabilities, beginning by implementing a strongly typed language within that very language. The goal of this research project will be to help us understand autonomous systems that can prove theorems about systems with similar deductive capabilities. Applicants should have experience programming in functional programming languages, with a preference for languages with dependent types, such as Agda, Coq, or Lean.
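
(To give a feel for the flavor of that task, here is a toy sketch in Python, entirely my own illustration and far short of real self-reference: a tiny typed expression language with a type checker and evaluator. The research problem is, roughly, to scale this idea up until the implemented language is rich enough to describe, and prove theorems about, implementations like its own.)

    from dataclasses import dataclass

    # A tiny object language: integer literals, addition, equality.
    @dataclass
    class Lit:
        value: int

    @dataclass
    class Add:
        left: object
        right: object

    @dataclass
    class Eq:
        left: object
        right: object

    def typecheck(e):
        # Return 'int' or 'bool'; raise on ill-typed terms.
        if isinstance(e, Lit):
            return 'int'
        if isinstance(e, Add):
            if typecheck(e.left) == typecheck(e.right) == 'int':
                return 'int'
            raise TypeError('Add expects ints')
        if isinstance(e, Eq):
            if typecheck(e.left) == typecheck(e.right):
                return 'bool'
            raise TypeError('Eq expects operands of one type')

    def evaluate(e):
        # Evaluate a term that has already passed the type checker.
        if isinstance(e, Lit):
            return e.value
        if isinstance(e, Add):
            return evaluate(e.left) + evaluate(e.right)
        if isinstance(e, Eq):
            return evaluate(e.left) == evaluate(e.right)

    # (1 + 2) == 3 is a well-typed boolean program.
    prog = Eq(Add(Lit(1), Lit(2)), Lit(3))
    assert typecheck(prog) == 'bool'
    print(evaluate(prog))  # True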