Evocative metaphor

The following is from Alberto Coffa's book The Semantic Tradition from Kant to Carnap: To the Vienna Station (p. 153). The intellectual context in which this discussion occurs is a bit complex, and I am not going to try to explain it here (do read the book, if you are interested). While the philosophical debate was obviously not about the semantics of programming languages, I think it is an interesting exercise to consider how it applies to that case...

...what we need is not to add to that system a theory of types, but to throw that system away and replace it by one that does not allow such misleading symbolic configurations. The idea was illustrated at a meeting with Vienna Circle representatives, when Wittgenstein agreed that the following was a good explanation of the intended point: In a normal language such as German or English, not only can we formulate the statements 'A is north of B' and 'B is north of A' but we can also formulate their conjunction, and someone could be misled into thinking that there is a conceivable circumstance that this new configuration represents. The Russellian strategy for avoiding this difficulty would be to add a rule forbidding the introduction of conjunctions of that sort. The Wittgensteinian strategy was to adopt a different system of representation, in this case a map, in which one could still say the meaningful, that A is north of B, or that B is north of A, but could no longer display a configuration of symbols corresponding to the old 'A is north of B and vice versa' (Waismann, Wiener Kreis, pp. 79-80).


Interesting, but I disagree with the premise of the excerpt

Acknowledging that I'm naive about this, and that I haven't read Coffa's text, it nevertheless seems to me that there is some value in expressing an illogical conjunction -- a language should be able to express the conjunction even if its truth-value can't be defined. I'm not sure what that value is, practically speaking, but I find it valuable conceptually. How it maps to programming languages is less clear to me.

The philosophical debate

The philosophical debate about this question is precisely part of the context. Simply put, you need to come up with semantics that give meaning to these contradictory statements, and this can lead to various problems, such as inventing a hairy ontology that you may not want to endorse.

The value of the metaphor

The value of the metaphor may break down because natural language is being considered as a poor vehicle for formal semantics, rather than formal semantics being viewed as a poor model of the real-world situations that natural language is robust enough to handle. Here's a real-world situation. Numbered highways in the US are routinely labeled "north" in one direction and "south" in the other, or "east" and "west", and retain those labels continuously along their length even though they aren't set up in perfectly aligned uniform grids. When two such routes share a stretch of road, it's not uncommon for that stretch to be, say, both Route X North and at the same time Route Y East, or the like. Occasionally it happens that a single stretch of road is Route X North and Route Y South. In which case, each of two points could be north of the other. Granted, a map could express this unusual situation (though a natural-language explanation would probably stand a better chance of being understood); but it does seem a familiar phenomenon in programming languages that a generalization/orthogonalization of the semantics may be suggested by the structure of the language.

Semantics can be partial

Given a phrase which is well-formed, it is not necessary for that phrase to be denoting. Furthermore, a phrase which looks context-free may not in fact be context-free (once delta-expansion is done), so something which looks like it has a definite truth value may in fact depend on the current-world valuation.

The first chapter of Russell's "Principles of Mathematics" is well worth reading, as is his brilliant paper "On Denoting"; anyone interested in these matters should read both. Not that Russell gives a watertight theory, but his exposition is really very clear. From there, more philosophical works can be read (and are on my current reading list).
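A minimal sketch of the partiality described above, using Russell's own example from "On Denoting" (the function and world names here are mine, chosen for illustration): a denotation function over well-formed phrases can simply be partial, and the same phrase can denote under one world-valuation and not another.

```python
# A toy partial semantics: every phrase below is grammatically well-formed,
# but the denotation function may fail to yield a value.

def denote(phrase, world):
    """Map a phrase to its denotation in `world`, or None if it has none."""
    if phrase in world:
        return world[phrase]
    return None  # well-formed, yet non-denoting in this world

# Two valuations ("worlds"): truth values depend on which one is current.
world_1897 = {"the president of France": "Félix Faure"}
world_today = {"the president of France": "(the current officeholder)"}

# 'The king of France' parses fine but denotes nothing today:
assert denote("the king of France", world_today) is None
assert denote("the president of France", world_1897) == "Félix Faure"
```

The point of the sketch is only that well-formedness and denotation are separate questions: `denote` is a partial map from phrases to values, relative to a world.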

This is indeed an early part

This is indeed an early part of the philosophical context I was referring to.

Representation /= semantics

(I imagine) his point is not to disallow, for example, expressing anti-theorems. The point is to have, in addition to a language which expresses theorems and anti-theorems (and possibly meaningless propositions or formulae), a language which expresses only theorems.

Look at the case of monoids. If I forget the laws, a free monoid is just a set of trees. With the laws, it is a set of strings. It is completely natural to reason directly about strings instead of treating them indirectly via a representation as trees. In fact, it is arguably more natural, since there are plenty of other signatures which are not (the same class of) trees yet which, quotiented, produce the same set of strings; that is, you can implement strings using other, distinct representations.

At the same time, just because you can work with strings directly does not mean you cannot use -- or eliminate the need for* -- trees, because there are plenty of things which are trees yet not strings.

* You can encode trees as certain strings, but there are many encodings, so you have a choice of representations again.
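A small Python sketch of the quotient described above (the representation is mine: a leaf is a one-character string, a node is a pair). Flattening a tree implements the monoid laws, so distinct trees collapse to the same string -- the quotient of trees by the laws is exactly the set of strings.

```python
# Trees over an alphabet: a leaf is a one-character string, a node is a pair.
# Flattening imposes the monoid laws (associativity), identifying distinct
# trees; the resulting equivalence classes are just strings.

def flatten(tree):
    """Collapse a tree to the string it denotes under the monoid laws."""
    if isinstance(tree, str):
        return tree
    left, right = tree
    return flatten(left) + flatten(right)

# Two differently associated trees ...
t1 = (("a", "b"), "c")
t2 = ("a", ("b", "c"))

# ... are distinct as trees, yet denote the same element of the free monoid:
assert t1 != t2
assert flatten(t1) == flatten(t2) == "abc"
```

Working "directly with strings" means working only with `flatten`-images; the trees remain available as one of many possible representations.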

Two kinds of semantics

it nevertheless seems to me that there is some value in expressing an illogical conjunction -- a language should be able to express the conjunction even if its truth-value can't be defined

I agree. It's the tragedy of early twentieth-century philosophy that these very high-powered philosophers mostly did not appreciate Frege's classic article "Über Sinn und Bedeutung", which (to oversimplify) distinguished between two kinds of semantics: a semantics of sense, which is the realm of our thoughts, and a semantics of reference, which is the realm of the external world. It cannot make sense in a Euclidean world to have one thing be in two opposite directions from another, but it makes perfect sense that one's thinking about the spatial relationship between two things contains absurd beliefs. Frege claimed that the relationship between the two was a partial function: senses may, but need not, refer; if they do, they have a unique reference.

Wittgenstein was in correspondence with Frege, and appreciated this point. Russell said very kind things about Frege (Russell's paradox refuted Frege's system of the Grundgesetze), but didn't seem to appreciate the value of this distinction.

Phase distinction

In a compiler, there are some weak properties which can be enforced either at the static analysis stage or at the parsing stage. If you allowed (somehow) type-0 languages* as input, then you could in theory express every static condition, such as well-typedness, at the grammatical level. Then if you forget the parsing stage and only look at its outputs, you will find that your language is inherently well-typed. I think this is the sort of thing Wittgenstein had in mind.

Another example is the difference between set theory and category theory. In category theory you limit yourself a priori to maps which are well-typed, so if you work exclusively at that level (only using categorical constructions) you don't need to show soundness; that is, you don't need to show that your transformations preserve typing. But ultimately that is because hom-sets are sets, and you have made set theory play the same role as a grammar.

* Type-0 languages sit at the top of the Chomsky hierarchy, above context-sensitive and context-free languages, etc.
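A toy contrast between the two strategies, in Python (the little expression language and all names here are hypothetical, for illustration only). In the "Russellian" style you would build any term and then run a separate checking pass; in the "Wittgensteinian" style below, the constructors themselves refuse to assemble ill-typed terms, so the set of constructible terms is inherently well-typed, just as a well-typedness-enforcing grammar would make its parse-tree language inherently well-typed.

```python
# Constructors that play the role of a typing-aware grammar: an ill-typed
# term is not a rejected term, it is a configuration that cannot be built.

class Expr:
    def __init__(self, op, args, typ):
        self.op, self.args, self.typ = op, args, typ

def lit_int(n):
    return Expr("lit", [n], "int")

def lit_bool(b):
    return Expr("lit", [b], "bool")

def add(a, b):
    # The "grammar" itself rules out the misleading configuration:
    if a.typ != "int" or b.typ != "int":
        raise TypeError("add applies only to int expressions")
    return Expr("add", [a, b], "int")

# Every constructible term is well-typed; no separate soundness pass needed.
e = add(lit_int(1), lit_int(2))
assert e.typ == "int"

# The analogue of 'A is north of B and vice versa' cannot be displayed:
try:
    add(lit_int(1), lit_bool(True))
    raise AssertionError("unreachable: the term cannot be built")
except TypeError:
    pass  # 1 + True is simply not expressible in this representation
```

This mirrors the category-theory remark above: working only with the typed constructors is like working only with categorical constructions, where soundness holds by construction rather than by a separate proof.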

Hey!

Everything is recursively enumerable unless decided otherwise. I blame the decision makers!.... *ducks*

The Vienna Circle in lots of context

The intellectual context in which this discussion occurs is a bit complex

But if you want to find out more, the Stanford Encyclopedia of Philosophy has just the introductory text for you, weighing in at a sprightly 28 000 words. Then again, it is probably faster to read the book's summary...

Activity Theory

OK, I guess we understand semantics now, but what do we do with that? I just noticed that Activity Theory has made it to Wikipedia. Language has uses, and human activity is certainly an example, along with the activity of human artifacts.