Elephant 2000: A Programming Language for the year 2015 Based on Speech Acts

McCarthy's Elephant language proposal was mentioned here several times in the past. This talk from Etech provides a nice introduction to the fundamental idea behind Elephant and its background.

The talk includes interesting, though not entirely motivated, comments related to the paper Ascribing Mental Qualities to Machines. This is one of McCarthy's most significant papers in my opinion, and deserves more attention and debate. It is also rather amusing. I hope I will find the time some day to put this paper in context (McCarthy's comments in the Etech talk notwithstanding), but for the time being I recommend it to anyone interested in this sort of thing.

One thing is for sure: We can safely add to the 2009 predictions the prediction that Elephant will not be ready in 2009...

One thing is for sure: We

One thing is for sure: We can safely add to the 2009 predictions the prediction that Elephant will not be ready in 2009...

True. I also wonder whether everyone is relying on data rather than logic for natural language representation these days...?

forget Elephant!

Elephant is a bad idea. The "airline reservation example" is perfect for illustrating why Elephant is a bad idea.

Prior to computers, how did people - using natural language alone - manage to implement large-scale reservation systems? However did they do it?

I assure you that nobody anywhere along the line, back then, needed to or even bothered to specify that a client should be granted entry if they made_reservation(x) AND did_not_cancel_reservation(x).
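
Spelled out as running code, that condition looks roughly like the following minimal sketch (my own, in Python; the event names are made up, not from McCarthy's paper):

    # Illustrative only: "has a reservation" as a reference to past events --
    # a reservation event with no later cancellation event (the log is
    # assumed to be in chronological order).
    def may_board(passenger, event_log):
        reserved = False
        for event, who in event_log:
            if who != passenger:
                continue
            if event == "made_reservation":
                reserved = True
            elif event == "cancelled_reservation":
                reserved = False
        return reserved

    log = [("made_reservation", "alice"),
           ("made_reservation", "bob"),
           ("cancelled_reservation", "bob")]
    assert may_board("alice", log) and not may_board("bob", log)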

Nobody talks in those terms other than philosophers.

The pre-computing-era version of reservation systems grew organically. A simple version handled early adopters -- maybe someone made some marks on a chalkboard or just held a list in memory. Scaling was then achieved by solving ad hoc problems as they arose. Schematically:

In the small shop, someone makes up a "system" for writing reservations in a book, or on index cards in a small file. They develop conventions (e.g., "mark verified reservations with an X in the lower-left corner").

Business picks up and keeping reservations takes more labor and resources. Someone else improvises a filing system. The notation system gets richer in detail (e.g., "mark reservations for vegetarians with a circle in the bottom margin").

It grows further and people improvise new rules that allow them to make use of the telegraph, then the telephone.

Computers come along and initially just automate parts of this and then create new opportunities for improvisation on scaling and features.

Every step along the way, people are not communicating about metaphysical commitments and fulfillments -- there's no "made a reservation / didn't cancel" as some actually existing real thing in the world. Rather, the true, practical, effective *meaning* of "reservation" turns out to be exactly what certain bureaucratic processes produce as an emergent effect. It's not *really* "didn't cancel"; it's *really* "didn't have a squiggly mark in the upper right corner of the index card".
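
To make the contrast concrete, here is the same question answered the "bureaucratic" way (again a made-up Python sketch of mine, following the index-card conventions above): the check never mentions cancellation at all, only the marks.

    # Illustrative only: the effective meaning of "has a reservation" is an
    # inspection of marks on the index card, per the shop's conventions.
    class IndexCard:
        def __init__(self, name):
            self.name = name
            self.marks = set()   # e.g. "X lower-left" means verified

    def has_reservation(card):
        # No appeal to "made and didn't cancel" -- only to the squiggle.
        return "squiggle upper-right" not in card.marks

    card = IndexCard("alice")
    card.marks.add("X lower-left")            # verified, by convention
    assert has_reservation(card)              # no cancellation squiggle yet
    card.marks.add("squiggle upper-right")    # the clerk's cancellation mark
    assert not has_reservation(card)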

What does a "reservation" on XYZZY Airlines mean, these days, compared to a "reservation" at the Ritz in 1939? The way to understand each is to look at the processes and the human speech acts that produced it. For example, the speech acts that produced reservations at the Ritz were about how to use the reservation book, the phone, etc. Those conversations were very much about data structures and algorithms.

There's nothing "natural" at all about the approach Elephant proposes. People don't talk that way.

-t

Do you think actual

Do you think actual reservation-making is more a kind of stigmergic activity, whereas Elephant adheres to a Platonism that was slowly losing credibility in philosophy at the very time it was getting a second life in cognitive science and computing?

well, yes.

I did have to follow the link to follow you. But, yeah, basically, that's right, sorta. That's pretty concise, the way you put it! I'm not sure it would (as a statement) lead anyone to what we're talking about, but it does sum it up in some way. But maybe, ironically, that's a false "summing up" that recapitulates the Elephant mistake... that would be a simple enough explanation of what would result if we tried to build on your summary.

What's the difference between "theory" and "compression"?

-t

I don't think stigmergy

I don't think stigmergy covers the full scope of your objection. The locus classicus for this objection to AI is Hubert Dreyfus's book (What Computers Still Can't Do).
Be that as it may, I think that, regardless of the rhetoric, Elephant raises issues about software design that are unrelated to the debates about AI.

Ascribing Machine-Like Qualities to Man

If you want to know what formal reasoning does to a man, read some Baruch Spinoza and draw your own conclusions.

Do you want to buy AI?

In my opinion there has always been something wrong with the AI agenda from a sales or marketing point of view. There is a confusion of human and machine qualities that turns people off even though it makes some sense. As McCarthy eloquently explains, there are good reasons to ascribe human qualities to machines, but this also opens the door to the reverse problem of ascribing machine qualities to people. People who don't know the intellectual game at work here are immediately turned off, as are some people who do.

Machines and computers need a human face to be acceptable. It is the game and the interface that sell the system. Nobody wanted to buy personal computers with command-line interfaces. It was Windows that made the PC market. But the "game" only seems to get deeper.

The chart at the bottom of the Hubert Dreyfus page (What Computers Still Can't Do) above is interesting in this respect. Columns I and II are decidedly machine-like, while column IV is decidedly human-like. Column III might represent "bridge" problems. Getting back on topic, situational issues are prominent in III and IV. This is where Speech Acts are useful.

Column IV looks impossible within the "classical AI agenda", but is conceivable at least from a Cybernetic point of view. Well-known approaches are Neural nets and Fuzzy Logic. Analog computing is quite useful here but doesn't sell, that is, unless you know the "game".

My comment on Spinoza

It's been a long time since I read philosophy, and I am certainly no philosopher, but yeah, I read stuff. I think Spinoza applies here on multiple accounts. In layman's terms:

He is one of the first to state what is, in the end, an almost materialistic vision of the world, in which God is seen as the equivalent of an ordering principle and, moreover, the whole world is deterministic: a clock, once set in motion, ruled by the divine. Except for his vision of God, and the role of man therein, his view closely corresponds to that of Dennett, who just brings an atheistic view and a lot of current-day empirical evidence that men are no more than machines.

You need a machine-like vision of the world if you want to ascribe human qualities to machines, or, vice versa, machine-like qualities to man. In the end, the basic assumption is that at a fundamental level there is no difference.

Some of the fundamental questions of AI are not that different from questions asked long ago. It is not that different to ask whether a stone, a tree, a clock, an ant or a computer can be ascribed human qualities. Spinoza certainly had a view on that, which I like more than Dennett's.

He was also one of the first to treat metaphysics formally, and he tried to apply that to, and bring order to, concepts such as "ethics" or "the state", which he expounded in several tractates.

His later writings can be seen as examples of what will happen when machines apply formal reasoning to physical or metaphysical concepts. You end up with some very concrete and narrow reasoning. I don't think life is like that. And any machine with sufficient AI, which "thinks" formally, will, I believe, be viewed by people as essentially mad as a hatter.

[Note that all of this is not really a comment on the paper, which expounds a more pragmatist, in the Peircean sense, and materialistic view on ascribing human qualities.]

I wonder about your fidelity

I wonder about your faith in cybernetic approaches, Hans. So far I haven't seen anything in this domain that was free from a pre-established context and situation. That's in no way different from GOFAI (e.g. Cyc's infamous "Kindergarten context"), except for adding some adaptive capabilities that act as simple regulators, letting one reach a stable emergent state, e.g. of the node weights in NNs. It is closer to stabilizing the temperature of my living room by feeding measurement values back to the thermostat than to the catastrophic activity of creating/destroying contexts, situations and significance.
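
For concreteness, here is the thermostat comparison as a toy loop (my own Python sketch, nothing more): a bang-bang controller that steers the room toward a setpoint inside a context it never questions.

    # Illustrative only: negative feedback stabilizes a state variable, but
    # the context ("temperature", the setpoint) is fixed in advance.
    def simulate(setpoint=20.0, temp=15.0, steps=30):
        for _ in range(steps):
            heater_on = temp < setpoint           # the entire "control law"
            temp += 0.5 if heater_on else -0.3    # room heats or cools
        return temp

    print(simulate())   # hovers near the setpoint; never invents a new goal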

To reduce, or not to reduce?

I wonder about your faith in cybernetic approaches,

I think you mean that a Cybernetic approach is no better or different than, say, a logical or strong-AI approach in "some" sense. I readily agree. The sense in which they are the same is that they both abstract reality into something that it isn't. Let us call that something an artificial language. My point is that such a thing is "as good as it does", in the unforgettable words of Forrest Gump. Or was that actually "stupid is as stupid does"? In any case it comes down to utility, not abstraction.

Abstraction is a game in which one thing readily becomes something else. It can be useful or destructive. A hammer and a saw can be used to build something or to destroy a perfectly good piece of wood.

I see Cybernetics as a language theory based on the system/state abstraction. In the context of this thread the important fact is that it abstracts observations. It doesn't reduce. It can be transformed and manipulated in many ways but is never any better than the initial abstraction. It is useful as long as it fits the observations but is not equal to the real thing.

The system/state abstraction was formalized in 1963 in the book "Linear System Theory" by Zadeh and Desoer. Cybernetics, on the other hand, is more than a theory. It is a culture, an experience, and a growing archive of useful things, including systems based on software.
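
For readers who haven't seen it, the abstraction that book formalizes is, if I recall correctly, the standard linear state-space model:

    \dot{x}(t) = A\,x(t) + B\,u(t), \qquad y(t) = C\,x(t) + D\,u(t)

where x is the internal state, u the input and y the observed output -- which is the sense in which the formalism abstracts observations rather than reducing the system to them.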

Also, Cybernetics doesn't create unnecessary boundaries: a software artifact can readily be seen as part of its environment. This is part of the problem with strong AI. The AI agenda is strongly software-oriented, whereas cybernetics does not draw lines between parts or even say what the parts are made of.

Video interview of John McCarthy