## New result re "linguistic determinism"

Hunter-gatherers from the Piraha tribe, whose language contains words only for the numbers one and two, were unable to reliably tell the difference between four objects placed in a row and five in the same configuration, the study revealed.

A new study may provide the strongest support yet for the controversial hypothesis that the language available to humans defines our thoughts.

The result is controversial enough that I'll withhold judgement until I've read the journal paper.

### I'm a little teapot

Er, no.

I can believe that language helps us think, but I don't believe it defines the limits of cognition. I see formal methods as an example of how language helps us think, but I see Gödel's Incompleteness Theorem as evidence that humans can think outside of language.

Just because these people had difficulty with some idea which can't be expressed in their language, that doesn't mean they can't learn it, which, I think, is what Sapir and Whorf suggested.

If we can only learn things we can express in language, how do we ever learn anything, given that language is acquired after birth? (OK, I can see an argument based on Universal Grammar and nonverbal communication, but isn't that a bit fanciful?)

### There's something to it

Words are like (almost) unique IDs for concepts. Consider the concept of a monad. The first time you read about a monad, you probably don't really understand it. Every time you encounter "monad" you get more familiar with it.

If there wasn't a word for the concept of a monad, it would be hard to make the connection every time someone writes about it.

Having a unique ID for a concept helps your brain to give it a central spot early on, and therefore helps the learning process enormously.

### ...that language defines thoughts..

1. The article makes an unwarranted claim of causality. Is the language of the Piraha limiting their thoughts, or vice versa?
2. As Frank has pointed out, the Sapir-Whorf hypothesis is interesting because it claims the cognitive limitation is:
   - Permanent. My language doesn't have 12 words for snow (well, perhaps it does), but if I lived with Eskimos I'd probably pick up the ideas those words stand for.
   - Fundamental. Without knowing the specifics of the Eskimo taxonomy, I can imagine breaking the concept 'snow' down into categories.

This article simply shows that, in the beginning, I would not do very well at assigning snow to the Eskimo categories.

### Terms for snow

My language doesn't have 12 words for snow

Any ski resort has at least that many terms for snow, like Corn Snow, Frozen Granular, Hard Pack, Icy Patches, Loose Granular, Machine Groomed, Mogul, Powder, Packed Powder, Roto-tilled, Wet Granular, Wet Powder. Anyone planning to attend ICFP04 in Snowbird, Utah might want to acquaint themselves with some of these. ;)

### Trolls

Terry Pratchett has a good line somewhere about how Trolls can only count One...Two...Many, but as Many is a number for Trolls this means that they can actually count as high as everyone else, in base 3: one, two, many, one one, one two, one many, two one, two two, two many, many one, many two...

### Great line!

That's a great quote. Does anyone know where it's from exactly?

### We discussed this in the past

The linguistic relativity meme has been discussed here quite a few times. IIRC at least twice we had long discussions about the claim as regards natural language - and the possible connection to programming languages.

### Backwards implication

Imagine if we observed that 19th-century speakers of English lacked a word for "cellular phone". We then resurrect Charles Dickens and give him a cell phone, asking him to perform various tasks with it.

Strangely, he does very poorly at these, so we conclude that the lack of the word or expression in his language has impaired his ability to use cell phones.

This is exactly the line of reasoning we are prepared to accept about some exotic group far away in the jungle. Why does it seem stupid when we apply it to someone we don't think of as quite so exotic?

### False analogy?

Indicting the Piraha for lacking a word for cell phones would be stupid. But the point of the article seems (I've not read it) to hinge on the idea that natural numbers greater than 3 exist even in the world of the Piraha, yet they lack names for them. For example, there are (one imagines) more than 3 pebbles in their environment, yet they can't denote that quantity (directly) in their language.

However, two things occur to me.

First, I can imagine that perhaps hunter-gatherers don't have much use for numbers over 3. Indeed, if you reject Sapir-Whorf, you might well use the fact that they don't have words for such numbers as evidence for that proposition. (Naturally, that would say nothing about the truth or otherwise of Sapir-Whorf, though.)

Second, the argument that "4" and "5" "nevertheless exist" in their world is a bit specious. It is true that mathematical concepts exist everywhere, but consider the example Sjoerd suggested, of monads. Monads are a mathematical concept. They exist in the Piraha world as well as our own. Yet the number of people in the world who know what a "monad" is is probably very, very small. And small wonder, since few people have (or see a) use for the concept; it's only once you find a use that you are likely to acquire (or invent) the vocabulary.

### Mathematics are a technology

Monads are a mathematical concept. They exist in the Piraha world as well as our own.

I'm starting to think you are a Platonist, Frank. ;-)

One of the reasons that I think cell phones ARE a good analogy is that mathematical concepts are a technology, and exist within a cultural context.

Because numbers are very important to the good functioning of our culture, and since we learn about them quite young (and don't forget how many years of our lives most of us have spent learning about them), I think we forget this fact and assume they are just "natural" concepts we pluck from the world like ripe fruit.

I don't have to go to another culture to find people who don't perform basic arithmetic very well, yet who still manage to live productive lives. A culture that didn't have a monetary system and didn't rely on engineering tasks could easily function quite well without worrying about exact numbers.

Another element of the cell phone analogy that I think is apt is that it demonstrates that a language can easily innovate words for new concepts. We do this all the time, so much so that we take it for granted. If language shaped or limited experience in the strong Sapir-Whorf way, this would not be possible.

If the Piraha decided they needed more numbers for their culture to function, they would borrow or invent words for them, and that would be that.

I think the only reason this story has made it to the media is that the Sapir-Whorf hypothesis appeals to our sense of exoticism, and therefore has good entertainment value.

### Language evolution

If the Piraha decided they needed more numbers for their culture to function, they would borrow or invent words for them, and that would be that.

If some group has to change its language (e.g. create new words) to cope with new concepts then, IMHO, without the change they couldn't express/discuss/understand the ideas.

This is similar to semiotics: a signifier (the form we use to denote some concept) is combined with a signified (the concept itself) resulting in a sign. Without the signifier we can't build a sign and use it to understand the signified.

If some group has to change its language (e.g. create new words) to cope with new concepts then, IMHO, without the change they couldn't express/discuss/understand the ideas.

Perhaps some of the difficulties in this discussion could be clarified if we refer this back to the analogous properties of PL semantics that we are all more familiar with here.

The reaction I have to the above statement is exactly the same I would have if someone claimed here that "because C or Assembler lacks a primitive for closures it can't express them".

This is evidently false; I can build closures in these languages, it will just require more effort and there may be more than one way of doing it. Furthermore, someone who knew C very well might have to work harder to understand the resulting program, since the idea is new to him, but would not be prevented from grasping what it does by this fact.

Some of the semiotics stuff you quote I think obscures more than it clarifies and it really has a different purpose than the standards of meaning that we tend to use here at LtU. (To go into this, though, would be even more off topic than the rest of this thread. ;-) )

The reaction I have to the above statement is exactly the same I would have if someone claimed here that "because C or Assembler lacks a primitive for closures it can't express them".

I never talked about primitives. What I said could be translated to PL terms as "because the standard library of language X has no function to do Y, we can't talk about Y in language X unless we create it ourselves (i.e. create the abstraction and name it)". It's impossible to think formally about something you can't understand accurately, and the first step to understanding anything is figuring out which properties make the new thing different from others. What I'm claiming is that if we don't identify that new concept with a distinct name, the process becomes much more complicated, because human beings are lousy at separating names from ideas.

BTW: While I think that a genius programmer could truly understand closures|call/cc|HOFs|monads|objects|whatever in a language that lacks them, the process is much harder because the concept will be mixed with the particular implementation details (e.g. setjmp/longjmp vs. call/cc, structs with function pointers as objects vs. Self-like objects).

### Not my argument

BTW: While I think that a genius programmer could truly understand closures|call/cc|HOFs|monads|objects|whatever in a language that lacks them, the process is much harder

We are in full agreement that naming a concept can help in its use.

What I'm disputing is the idea that not already having a name for something prevents understanding it; you can always just create a new name for something you understand!

### Abstraction and Naming

What I'm disputing is the idea that not already having a name for something prevents understanding it; you can always just create a new name for something you understand!

You still need to define it properly if you want to understand it. Otherwise you don't know where this concept ends and another begins. It doesn't need to be a word; any kind of signifier will do the job, but without one we can't form a coherent whole of the idea.

For example, there are forms of thought that can't be described to someone who is foreign to them (e.g. dreams, hallucinations, psychosis), and even the original subject won't be able to understand one unless he can manipulate it in an atomic form. For that he needs a signifier, usually a word or a mental image, to attach some "tag" to the concept. Without this "tag" he won't be able to distinguish it from a similar but different one. We do that in PLT all the time - call them variables, combinators, design patterns, etc. - we attach "tags" (most of the time names) to concepts. In your example, if I weren't able to attach certain properties to a piece of code, I wouldn't be able to abstract it in the form of a function. Like I said, this is related to semiotics, and it's pretty much basic stuff.

### Turing Completeness

The reaction I have to the above statement is exactly the same I would have if someone claimed here that "because C or Assembler lacks a primitive for closures it can't express them".

This is evidently false; I can build closures in these languages, it will just require more effort and there may be more than one way of doing it.

Doesn't this depend on the underlying language being sufficiently expressive? Is there some sort of equivalent to Turing completeness for concepts expressible in natural languages?

### HL Expressiveness

Is there some sort of equivalent to Turing completeness for concepts expressible in natural languages?

The closest thing to Turing completeness I can think of is the ability to add vocabulary. A formulation of this would be "any purpose that one language can be used for, any other language can be used for, given that new vocabulary can be added to the second language".

Since the semantic domain of HLs is so much less understood than for PLs and computation, this can be nothing more than a pretty compelling hypothesis.

### OT: Semiotics

Some of the semiotics stuff you quote I think obscures more than it clarifies and it really has a different purpose than the standards of meaning that we tend to use here at LtU. (To go into this, though, would be even more off topic than the rest of this thread. ;-) )

Semiotics can be used as a tool to help us distinguish concepts from the names we use to describe them. When we talk about cognition and linguistics we are just a step away from semiotics. I think it clarifies this discussion, and ignoring it will make us waste time debating words vs. concepts until we reach the semiotics basics anyway.

### Re: OT Semiotics

I think it clarifies this discussion, and ignoring it will make us waste time debating words vs. concepts until we reach the semiotics basics anyway.

I understand what you mean, but I don't think we have to get into that to show the fallacy of linguistic determinism. I think that using plain language and examples from PLs (as I have been doing) the logical problems with this idea can be exposed.

Furthermore, this mode of presentation will be of more interest and relevance to LtU readers.

### If the Piraha decided they

If the Piraha decided they needed more numbers for their culture to function, they would borrow or invent words for them, and that would be that.

No. The original paper describes how the Piraha tried to do exactly this (they hoped numeracy would give them more control over their bartering with Brazilian traders). The paper's author organised classes for this which were abandoned after a year as a complete flop; pretty much a "gavagai" situation with e.g. the Piraha not agreeing that it was sensible for a problem to always have the same answer.

### "Need" means willing to pay the price...

No. The original paper describes how the Piraha tried to do exactly this (they hoped numeracy would give them more control over their bartering with Brazilian traders).

I found this story in Everett's paper; whether it is in the Gordon paper that started this thread I don't know, as I wasn't able to access that one in the primary source.

Everett's conclusion is that, though they wanted the benefits of numeracy for some purposes, other cultural values impeded this, i.e. they didn't like the idea that there was a "correct" answer to a number problem.

So overall, their culture has decided that the cost of having numbers outweighs the cost of not having them, i.e. they don't really need them.

### Transgressing the boundaries of numeracy

Everett's conclusion is that, though they wanted the benefits of numeracy for some purposes, other cultural values impeded this, i.e. they didn't like the idea that there was a "correct" answer to a number problem.

How postmodern of them! ;)

### Can't pay? Won't pay!

Sorry, I should have linked. I was indeed referring to the Everett paper linked elsewhere in this thread (which you seem to have found). "Willing to pay the price" - I'm sure that any human culture can, in principle, change over generations to acquire the competences of any other. But I don't think that's really the issue here.

Interestingly, re your example of mobile phones, the Everett paper suggests the Pirahã would really not take to them.

### What has it got in its pocketses?

So overall, their culture has decided that the cost of having numbers outweighs the cost of not having them, i.e. they don't really need them.

Sounds familiar, eh? :) (I am thinking of a certain discussion which keeps igniting on this site, and with which I am perhaps closely associated.)

### Plus ça change...

Sounds familiar, eh? :)

Yeah, I kept finding various similarities to that "other discussion" in this thread as well... ;-)

### Semi counter study

This study shows that infants, who admittedly don't yet use a language themselves but have been exposed to one, can catch distinctions that aren't made in that language.

### Closer to Home

Scientific American's online coverage is brief.

It's best to focus on the data. If these people cannot distinguish four things from five, what alternative explanations exist? Maybe there is one, I don't know. The Eskimo-experiment equivalent result would be an inability to distinguish different types of physical snow.

I wonder whether the fabled historical development of the "zero" concept bears any relation. That question is closer to home, being part of our own heritage.

The interested may care to Google "Einstein gulf" for his thoughts about language. I am just asking questions.

If these people cannot distinguish four things from five, what alternative explanations exist?

There are lots of programmers (sadly) who cannot distinguish untyped languages from typed languages, or imperative assignment from functional let, or implementations from specifications, even when they have the vocabulary. Sometimes it is hard to see, in a definition, what is relevant and what isn't. Many people think, for example, that a language is statically typed only if it requires type annotations/declarations; yet, of course, some typed languages don't require them. Maybe the Piraha are just having trouble grasping the essence of large numbers; it doesn't mean they won't figure it out eventually.

### Sapir-Whorf too close to home?

Maybe the Piraha are just having trouble grasping the essence of large numbers; it doesn't mean they won't figure it out eventually.

But in the process of figuring it out, they will develop language for it, which arguably leaves even the strong form of Sapir-Whorf intact. Presumably, Sapir & Whorf knew that languages evolve. That means there's always the possibility for a previously inexpressible concept to become expressible in a future version of a given language.

If it's the case that the Piraha cannot fully comprehend that which their current language cannot express, and thus cannot currently understand some simple concepts that exist in other languages, that would support the strong form of Sapir-Whorf. That doesn't preclude their gaining such understanding, and corresponding language, in future.

Many people think, for example, that a language is statically typed only if it requires type annotations/declarations; yet, of course, some typed languages don't require them.

Along these lines, I would argue that the lack of vocabulary to refer to the static type-like information which is latent in so-called "untyped" programs has inhibited thought on the subject, and led to errors such as the assumption that a "Univ" type sufficiently captures this information. Sapir-Whorf strikes again! :-)

### Are you joking?

If it's the case that the Piraha cannot fully comprehend that which their current language cannot express, and thus cannot currently understand some simple concepts that exist in other languages, that would support the strong form of Sapir-Whorf. That doesn't preclude their gaining such understanding, and corresponding language, in future.

You talk as if, for example, the word-form "boat" magically inserts all the information about boats into one's head.

I'm happily clueless about water travel until one day someone utters the sound "boat" in my ear, and suddenly I know not only what the word refers to but can also understand trade winds and sailors' knots.

Obviously (or perhaps not to some), I have to have acquired some concept of a boat (perhaps an erroneous or incomplete one) that I can assign to the word-form "boat" in order for me to say that I know what that utterance refers to, or even that it refers to something at all, rather than being a meaningless sequence of sounds. Such conceptualizing would be impossible if it were precluded by my lack of a word-form.

### Words vs. concepts

Obviously (or perhaps not to some), I have to have acquired some concept of a boat (perhaps an erroneous or incomplete one) that I can assign to the word-form "boat" in order for me to say that I know what that utterance refers to, or even that it refers to something at all, rather than being a meaningless sequence of sounds. Such conceptualizing would be impossible if it were precluded by my lack of a word-form.

AFAIK humans don't learn abstractions directly; instead we generalize from concrete cases. So what would happen is that we would see a boat and not understand it. Then we would create some (incorrect but useful) analogy based on something we know. Even if we understood how a boat worked, the word would still be employed to denote only that particular boat. After seeing other boats we would be able to generalize the idea and use the word "boat" to denote other kinds of boat. If we never created a special word for it and instead used "Jim's thing that doesn't sink in water" or "big piece of carved wood", we wouldn't be able to formalize the distinctions between it and Jim's other floatable stuff, or coffins.

If we don't name an abstraction, how can we talk about it? If I say "warm fuzzy thing to go back in time", not only will I be unable to discuss it with anyone, but I'll also have a hard time differentiating it from a DeLorean. OTOH if I say "back-tracking monad" I can't possibly confuse it with Doc Brown's invention.

### Naming Abstractions

Daniel Yokomizo: If we don't name an abstraction, how can we talk about it?

Y do you want to know? ;-)

### Talking does not equal understanding

If we don't name an abstraction, how can we talk about it?

You seem to be equating "talking about something" and "understanding it".

I think these are two quite separate things, at least for the simplest meaning of understanding.

### It's about thinking, as much as talking

You seem to be equating "talking about something" and "understanding it".

I think these are two quite separate things, at least for the simplest meaning of understanding.

I would restate what Daniel wrote as "If we don't name an abstraction, how well can we think about it?" More in my other post, "Thought without language?"

### Thought without language?

No, I wasn't joking (although my final paragraph involved a certain amount of teasing). Note that the quoted sentence of mine was entirely conditional, i.e. it began with "If it's the case that...". I wasn't expressing an opinion about the validity of Sapir-Whorf, I was simply pointing out that Frank's point did not seem to negate Sapir-Whorf at all, not even the strong form.

As for what I think, I pretty much agree with what Daniel is saying. It's obvious that we hardly ever have vocabulary for things we haven't yet discovered! I have no doubt that this is in fact a huge impediment to discovery, but it seems like an unavoidable one.

However, it's not the only impediment: existing vocabulary, or language, may make it harder to discover new concepts. For example, to use the Piraha example, notwithstanding its various possible flaws, if the Piraha have a word for "many", that may inhibit them from looking for finer distinctions among the various kinds of "many" (assuming the Piraha don't make the same breakthrough made by Terry Pratchett's trolls). Lack of vocabulary or language not only inhibits an individual's thinking, it also inhibits discussion, which further adds to the difficulty of expanding conceptual horizons.

We've learned to systematize the building of vocabulary and construction of languages - we create languages consciously, and we extend our vocabularies almost as a matter of course. We tend to take this for granted, having a tradition of it going back thousands of years. That doesn't mean, though, that lacking the language for something doesn't inhibit our thinking: it may just mean that we have a potential way around the limitation, if we try hard enough.

In a sense, I think the Sapir-Whorf discussion is itself artificially limited by language. What is the nature of our thought processes in the absence of language? Don't our thought processes necessarily use a kind of language, even if it's not the usual spoken or written language shared with others? For example, we may recognize some concept and be able to say "ah, that concept", even if we aren't consciously aware of a specific name for it.

The first lesson that our study of artificial languages teaches us is that abstraction via naming is one of the most basic requirements of useful language. But "names" don't have to be ordinary words in a language - they can be De Bruijn indices, for example. Our internal languages may involve abstractions which don't have names that we're consciously aware of, even if we can perform e.g. equality comparison operations on those abstractions. But again, our experience with artificial languages tells us that this kind of thing is not very scalable! We need names for things, and we need to be able to use those names in different contexts to denote those things, otherwise we won't get very far, even only inside our own minds. But once communicable names are assigned to concepts, you have language. Thus lack of communicable names for a concept certainly seems to indicate a lack of understanding of the concept in question.

The question is what that says about our ability to improve our understanding when it is lacking. I think it's quite reasonable to think that existing features of language structure, and the degree to which existing vocabulary overlaps the concepts being sought, can inhibit improved understanding of those concepts. That's essentially Sapir-Whorf, although how strong or weak an interpretation it is depends very much on the specifics of how it's applied.

### Still backwards implication

But once communicable names are assigned to concepts, you have language. Thus lack of communicable names for a concept certainly seems to indicate a lack of understanding of the concept in question.

Again, let us appeal to naming in PLs to clarify. If I suddenly find there is some snippet of code that recurs in my program, I identify it (which assumes that I understand what it does, since I recognize that it is functionally the same as the other occurrences) and give it a name.

Does this make it easier to work with? Yes, but that is not the same thing as saying I was impeded or prevented from understanding before.

If we reverse our thinking about this, and say that a culture whose language DOES have a word for a concept is one that has discovered a concept and values it enough to name it, we are stating the obvious. This is sensible implication: the existence of concepts in a culture tends to lead to there being a word for them.

The reverse implication that the existence or lack of a word creates or impedes a concept is putting the cart before the horse. You can identify a concept with a name, but having a name doesn't give you a concept.

### Assuming the conclusion

Again, let us appeal to naming in PLs to clarify. If I suddenly find there is some snippet of code that recurs in my program, I identify it (which assumes that I understand what it does, since I recognize that it is functionally the same as the other occurrences) and give it a name.

You're assuming your conclusion here. You're assuming that this unnamed concept is easily spotted by someone who hasn't previously been aware of it, or had a name for it. But we know that's not always true: just look at continuations, for example -- a concept that's present in every language with control flow, but which is notoriously difficult for people to grasp if they've previously only been exposed to languages with certain kinds of control flow, like call/return. Such people tend to learn continuations more by learning new language to deal with the concept, than by restating it in their existing languages (and such restatements tend to be misguided).

The process of identifying a previously unidentified concept is not necessarily as simple as you're making it out to be, and at least one of the factors that can complicate it is the set of assumptions that the language being used makes with respect to e.g. related concepts.

Does [naming] make it easier to work with? Yes, but that is not the same thing as saying I was impeded or prevented from understanding before.

That's not what I'm saying. I'm saying that there may be features of a language, including assumptions implicit within a language, or vocabulary which is misleading, which inhibit both the expression and discovery of certain concepts, when the exploration is performed within that language. For people who don't use the technique of inventing new languages for exploring new subjects, the existing language(s) that they depend on may indeed inhibit their thoughts. (To tie this to another thread, that's the biggest reason why knowing Python makes you a better Java programmer.)

[edit: Even people who do create new languages to help them think may not recognize the need for a new language if existing languages misleadingly seem to be good enough - which is what I've been claiming is the current state of type theory ;-]

The reverse implication that the existence or lack of a word creates or impedes a concept is putting the cart before the horse.

Again, this seems to be an assumption you're making. What are you basing it on?

In any case, bringing it down to individual words for individual concepts runs the risk of oversimplifying. There are interactions between features of a language and meanings in a vocabulary.

You can identify a concept with a name, but having a name doesn't give you a concept.

That's not what I'm claiming either. But having e.g. misleading names may blind you to distinctions that you might otherwise notice. And again, it's not just about individual name/concept mappings.

### You are not talking about Sapir-Whorf

You're assuming that this unnamed concept is easily spotted by someone who hasn't previously been aware of it, or had a name for it.

The generally understood claim made by Sapir-Whorf (leaving aside whether this is what those gentlemen actually said or not) is that a language, by its structure or vocabulary, can limit the range of ideas its speakers can think.

To show that this is false merely requires that I show that it is possible to understand an idea without any specialized help from one's language.

"Relative ease" is a human factor. As we've proven in the past, no productive arguments can be had on LtU about this until and unless someone comes up with a widely accepted theory for these. ;-)

If you are not arguing for Sapir-Whorf as presented above, then we are not talking about the same thing.

### Am so :)

The generally understood claim made by Sapir-Whorf (leaving aside whether this is what those gentlemen actually said or not) is that a language, by its structure or vocabulary, can limit the range of ideas its speakers can think.

Right. In my original post above, I pointed out that for this strong form of the Sapir-Whorf hypothesis to make any sense whatsoever, it has to take into account the possibility of language evolution, since that's an observable phenomenon in all languages, which would otherwise trivially falsify the hypothesis. If you factor this in, then the theory really needs to be stated with respect to the language that is prevalent at a given time - that one's current language(s) limit the range of ideas its users can think.

That hypothesis seems to me quite uncontroversial, if not obvious. It inherently recognizes the idea that language evolution is possible, so that it's possible to extend the limits of one's language - however, all of human history has shown that this is done mostly incrementally. We can think around the edges of what we already know and have language for, but it's difficult, and a slow process, which is why invention and theorizing is not something that everyone does well, and why it's taken us 10,000 years to go from accounting for sheep and grain, to using lambda calculi for computation. Now, not all the difficulty in invention or theorizing has to do with language, but I've given examples of the kinds of ways language can be an inhibiting factor.

As for what is "generally understood" about the Sapir-Whorf hypothesis, I think one has to be quite careful of that, since the hypothesis has potentially wide-ranging political consequences, and many people would prefer that it be easily dismissable for that reason alone. Somebody else said "Would the strong version of the Sapir-Whorf hypothesis ever die!", but I think the reason it won't die is because there are clearly deep connections between cognition and language, and if you credit Sapir and Whorf with a certain amount of common sense, and read some of what they actually said, you might find that the so-called strong versions of their hypothesis seem stronger than anything they actually said. Here's an example, a quote from Edward Sapir:

Human beings do not live in the objective world alone, nor alone in the world of social activity as ordinarily understood, but are very much at the mercy of the particular language which has become the medium of expression for their society. It is quite an illusion to imagine that one adjusts to reality essentially without the use of language and that language is merely an incidental means of solving specific problems of communication and reflection. The fact of the matter is that the "real world" is to a large extent unconsciously built up on the language habits of the group.

This, along with similar statements, is traditionally interpreted as linguistic determinism in its strong form. But I have trouble making the connections between the quoted text, and the way in which the strong form hypothesis is stated by others (perhaps I live in a world linguistically closer to Sapir-Whorf's world than those others do ;)

Basically, the so-called strong form of Sapir-Whorf seems to me to be an oversimplified hypothesis, distilled by others from the actual source writings -- perhaps primarily to allow for its convenient dismissal, whether for political reasons or simply an unwillingness to tackle a very challenging subject.

To show that this is false merely requires that I show that it is possible to understand an idea without any specialized help from one's language.

This is an example of the situation I've described. Sapir and Whorf don't seem to have made any claims which can be refuted that easily. As with my point about language evolution, I think that intellectual honesty demands that the hypothesis be given some benefit of the doubt in interpretation, given that it was necessarily communicated in natural language, and not even all that precisely.

One interesting question is how much it is possible to understand without help from one's language, without actually modifying the language.

"Relative ease" is a human factor. As we've proven in the past, no productive arguments can be had on LtU about this until and unless someone comes up with a widely accepted theory for these. ;-)
Unfortunately, the only interesting discussion that can be had on this topic, has to take place in a much less black and white place than the so-called strong version allows for.

### You might be interested in this:

http://xyzzy.bravehost.com/NULL.html

### Me, an idiot? Never!

Sorry. I posted the wrong URL. That was from a potential member of a webring I manage.

Here's the correct one: http://itre.cis.upenn.edu/~myl/languagelog/archives/001387.html

And you'd might as well look at this too: http://itre.cis.upenn.edu/~myl/languagelog/archives/001389.html

Frank - I can distinguish four physical things from five: 4X != 5X, where X is a rock or a sound. I don't even need the vocabulary for X in order to count X. Show me 4 whatsits and 5 whatsits on a table, and I can count them.

That experiment isn't like yours, which tests differentiation between one abstract mental construct and another: a vocabulary test, not a physical counting test.

I don't defend any conclusions proposed by the experimenters, but only focus on what they did, not alternative experiments, valid and interesting as they may be.

I can distinguish four physical things from five: 4X != 5X, where X is a rock or a sound. I don't even need the vocabulary for X in order to count X.

Sounds like parametric polymorphism.

I think there are still requirements for X: you need to distinguish one X from another in order to count them. Imagine an average (non-musical) person listening to a symphony. He will probably be able to count drum beats, but what about measures, phrases, sentences? The more complex the X, the less chance a person will be able to distinguish it unless trained (and training assumes cultural compatibility with the person's memes). I expect a neolithic hunter to hear only howls and growls in the music, despite all the instruction we give him.
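The "parametric polymorphism" quip above can be made concrete. A minimal sketch (names and setup are illustrative, not from any post in this thread): a counter that never inspects *what* it counts, only that the items arrive as distinguishable events - which is exactly the sense in which 4X != 5X holds for any X.

```python
def count(items):
    """Count items without knowing anything about their type."""
    n = 0
    for _ in items:   # the element itself is never examined
        n += 1
    return n

rocks = ["rock"] * 4
taps = [object() for _ in range(5)]   # five anonymous, typeless events

print(count(rocks))                 # 4
print(count(taps))                  # 5
print(count(rocks) != count(taps))  # True: 4X != 5X, whatever X is
```

The point of the sketch is that the comparison requires no vocabulary for X itself - but it does require, as noted above, that the events be individuated in the first place.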

### Counting is a technology

It's best to focus on the data. If these people cannot distinguish four things from five, what alternative explanations exist?

The tasks described are ones in which counting is necessary. Because we all learn to count in kindergarten, we assume it is a universal skill that requires no effort to learn.

However, counting is a learned skill. The Piraha clearly do not value it or learn it, hence they don't do it well. (Or really understand it that well.)

As a result of this, they have no need for words representing numbers, and have never bothered to invent any, much as they might not have a word for "cell phone".

The Eskimo-experiment equivalent result would be an inability to distinguish different types of physical snow.

The "Eskimo words for snow" myth has been definitively debunked by Geoffrey Pullum, as per the reference someone gave earlier in the thread.

I wonder whether the fabled historical development of the "zero" concept bears any relation.

I think this is analogous. Zero is a technological innovation. It is not obvious that you can add nothing to something or multiply something by nothing.
Once you accept that you can multiply by nothing, it is not obvious that you shouldn't be able to divide by nothing too.

We all had to learn these concepts at some point, and someone had to invent or discover them before that, so that we could learn them, or so that our teachers could decide it was important that we should.
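The asymmetry described above can be sketched in code (a minimal illustration; the function names are made up for this example): multiplication by zero is total and well-defined, while division by zero remains undefined, which is why the second "obvious" step after inventing zero turns out to be wrong.

```python
def times_zero(x):
    # Always well-defined: anything times nothing is nothing.
    return x * 0

def divide(x, y):
    # Division by nothing has no consistent answer, so it stays undefined.
    if y == 0:
        raise ZeroDivisionError("division by zero is undefined")
    return x / y

print(times_zero(7))   # 0
try:
    divide(7, 0)
except ZeroDivisionError as e:
    print("undefined:", e)
```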

### Re: Counting is a technology

The Piraha do count; they have already learned the technology. Surely we are not talking about a mental leap equivalent to the invention of counting itself, or zero, or negative numbers? Maybe we are.

### Do they count?

The Piraha do count; they have already learned the technology.

This seems to be contrary to the claims made by Everett and Gordon.

### But how?

The Pirahã were supposed to respond by laying out the same number of objects from their own pile.

If numbers are so unimportant, I doubt the language would have a way to express "the same number". How would the researchers be able to properly instruct the Pirahã? Maybe they were trying to match the pile by value or by volume.

The Pirahã also failed to remember whether a box they had been shown seconds ago had four or five fish drawn on the top. When Gordon's colleagues tapped on the floor three times, the Pirahã were able to imitate this precisely, but failed to mimic strings of four or five taps.

This sounds more convincing. Especially the tapping seems hard to do without being able to count the taps (while listening and while tapping). When I try, I find it impossible not to count.

### Would the strong version of the Sapir-Whorf hypothesis ever die!

What a load of utter nonsense. Take a peek at Daniel Everett's papers on the Piraha, in particular "Cultural Constraints on Grammar and Cognition in the Piraha" for a more sensible analysis of the language and the culture of the tribe.

A more probable explanation for the language being the way it is, though not the only possible one, is that, due to inbreeding, most of the Piraha suffer from something like SLI (Specific Language Impairment).

### HG Wells Land of the blind

I can't remember the actual name of the story, but the characters chose to be blind because their world was one in which everyone was blind.

### Wittgenstein

"The limits of my language mean the limits of my world". He means language as the medium of propositions, propositions as "pictures of reality". Without the language to make a proposition, the corresponding picture of reality cannot be formed. But this is not about the correspondance between individual words and concepts - a word is not in itself a picture, and has no necessary relationship with any picture (Saussure: the relationship between the signifier and the signified is arbitrary). Absence of a word for "rain" does not preclude the making of phrases like "the wet stuff that falls from the sky".

Number is possibly a special case. You can count all the way through the natural numbers with only 0 and S, although it gets very unwieldy very quickly. If you can't count up to five using only the words for "one" and "two", it's not because you don't have a word for "five" but because you haven't elaborated a counting technology that can count to five, for whatever reason. Once you have such a technology, you might start to give names to some of its forms.
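The "0 and S" counting mentioned above can be sketched directly. A minimal illustration (the encoding and helper names are my own, chosen for this example): numerals are built only from a zero and a successor operation, so "five" exists structurally even though it has no name of its own.

```python
# Peano-style numerals: a numeral is either ZERO or S(some numeral).
ZERO = ()

def S(n):
    """Successor: wrap the numeral one more time."""
    return (n,)

def to_int(n):
    """Unwind a Peano numeral into a machine integer, for display only."""
    count = 0
    while n != ():
        n = n[0]
        count += 1
    return count

# Counting to five without a word for "five": S(S(S(S(S(0)))))
five = S(S(S(S(S(ZERO)))))
four = S(S(S(S(ZERO))))

print(to_int(five))   # 5
print(five == four)   # False: the numerals differ structurally
```

As the paragraph above says, this gets very unwieldy very quickly - which is precisely the pressure that leads a counting technology to start naming its common forms.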

Is there a reason why, whenever people discuss Sapir-Whorf, they always talk about the importance of "having a word for" something? Why is "language" so often taken to mean simply "vocabulary"? It's a little like the arguments about genetic determinism: as if it were always a question of "having a gene for", say, being able to count up to five.

### The Undecideability of the Sapir-Whorf Hypothesis

"Any linguistic relativism experiment using language in any form is either inadequately designed or the results are biased."

### Nice

All of a sudden I find myself thinking back to a book by John Lloyd and (yes, that) Douglas Adams, called The Meaning of Liff.

It was a mock-dictionary of definitions for words that happened to be UK place-names. The authors explained that there were concepts for which we had no words, and plenty of spare words "loafing about" on road signs and Ordnance Survey maps doing no ostensibly useful work, and it made sense to try to pair them up and so enrich the language.

In this they unquestionably succeeded. My spouse and I still use the word "Goosnargh" to refer to any leftovers that have been in the fridge for more than 48 hours and thus have a minuscule and rapidly dwindling chance of ever being consumed by a human being.