Relations of Language and Thought: The View from Sign Language and Deaf Children

Relations of Language and Thought: The View from Sign Language and Deaf Children provides an interesting angle on the Sapir-Whorf hypothesis that we periodically discuss on LtU. A small sample from Google Books is available.

...Hypotheses concerning language and thought...:

  • Language equals thought. Perhaps the simplest view of language and thought is that they are essentially the same thing. This position is most frequently ascribed to American behaviorists, and especially to John Watson, who argued that thought is just subvocal speech.
  • Language and thought are independent. This view, most often attributed to theorists like Noam Chomsky and Jerry Fodor, suggests that the development of language and the development of cognition are distinct, depending on different underlying processes and experiences.
  • Language determines thought. In the form usually identified with the linguistic determinism and linguistic relativity theories of Sapir and Whorf, this perspective directly entails a correlation between language skill and cognitive skill. One implication of this view is that individuals who have "inferior" (or superlative) language are expected to have "inferior" (or superlative) thought. Implicitly or explicitly, such a perspective has been used as a rationale for an emphasis on spoken language for deaf children by those who have seen sign language as little more than a set of pragmatic gestures.
...The more interesting question... is whether growing up with exposure to a signed language affects cognition in a way different from growing up with a spoken language. Indeed, that is one of the fundamental questions of this volume. While we fully agree... that any strong form of the Sapir-Whorf position appears untenable, it also seems clear that language can affect and guide cognition in a variety of ways. Much of what a child knows about the world, from religion to the habitats of penguins, is acquired through language.

Sign language is an obvious candidate for linguistic study, since the mode is visual as opposed to oral/aural. A summary by one of the authors is telling:

The conclusion that American Sign Language (ASL) is an independent, noncontrived, fully grammatical human language comparable to any spoken language has been supported by over 30 years of research. Recent research has shown that ASL displays principles of organization remarkably like those for spoken languages, at discourse, semantic, syntactic, morphological, and even phonological levels. Furthermore, it is acquired, processed, and even breaks down in ways analogous to those found for spoken languages. The similarities between signed and spoken languages are strong enough to make the differences worth investigating. In the third section of this chapter, I will argue that although there are differences in detail, the similarities are strong enough to conclude that essentially the same language mechanism underlies languages in either modality.

On a programming language level, I can't help but think that sign language offers valuable clues into the nature of visual PLs (though I haven't quite nailed down any specifics). ASL on Wikipedia informs us that signs can be broken down into three categories:

  • Transparent: Non-signers can usually correctly guess the meaning
  • Translucent: Meaning makes sense to non-signers once it is explained
  • Opaque: Meaning cannot be guessed by non-signers
The majority of signs are opaque. As much as those who design visual languages would like them to be intuitive - falling into the transparent and translucent categories - I figure you still end up having to use many signs that are only meaningful internally to the language at hand.

On a personal level, I have recently been attempting to delve into ASL. I've almost got the alphabet and numbers down, and have a vocabulary of about 100 additional signs - which probably means that I'm at a proficiency level somewhere between ankle biter and Sesame Street. I do find it to be a fascinating language. I noticed when I was looking at the course offerings for college (my son started university this year) that ASL is now offered for foreign language credit (I wish it had been offered when I was a student all those years ago).


Why language is NOT thought

Why language is NOT equal to thought: a picture is worth a thousand words (if not millions).

I like Wikipedia's definition: "A language is a system for encoding and decoding information." A language is used as an information representation protocol. Thought represents information in its most direct form possible: sensory, or else cognitive, etc. Each person's brain learns a mapping from language to thought, utilizes it, and conforms to it.

What!?

a picture is worth a thousand words (if not millions).

So you could replace a million-word program? Using a corpus of C programs in front of me and "wc", that's about 344,000 lines of C, or, assuming about 680 lines per C file, roughly 500 C modules... You're claiming you could replace those 500 C modules with one, say, UML diagram!?

And then EXECUTE IT!?

Nope! You way way way overestimate the usable information content of a picture.
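
As a rough sanity check on those numbers, here is a minimal back-of-the-envelope sketch in Python. The words-per-line and lines-per-file figures are just the ones quoted above, not measurements of any particular corpus:

    # Back-of-the-envelope check on the figures quoted above.
    # The 344,000 lines and 680 lines/file are the commenter's numbers,
    # not measurements of a real corpus.

    WORDS = 1_000_000        # "a million word program"
    LINES_OF_C = 344_000     # wc-style estimate from the comment
    LINES_PER_FILE = 680     # assumed average length of a C file

    words_per_line = WORDS / LINES_OF_C      # roughly 2.9 words per line
    modules = LINES_OF_C / LINES_PER_FILE    # roughly 506 files

    print(f"~{words_per_line:.1f} words per line of C")
    print(f"~{modules:.0f} C modules that one diagram would have to replace")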

Content Density

Perhaps we could take a snapshot picture of that C program and then recover it by writing a character recognition program?

UML is just another language with its own built-in symbols, syntax and grammar - and not a very dense one at that. Its aim is not so much information density as it is a communication shortcut within a specific domain.

The real problem with the phrase "a picture is worth a thousand words" is that it's usually used to judge the merits of visual images relative to language. I'd retort that "a single word is worth a thousand images". And then I'd go even further and say that "a single word is worth a thousand other words". Indeed, one cannot understand an image or words without reference to a history of countless other sensory experiences.

Possibly Try A Restatement, But Be Clear

What you say has no meaning without a clear operational definition of "thought". Thought you'd like to know that. (Sorry, couldn't resist).
And it is by no means apparent that the brain stores sensory input. And what is "cognitive, etc."?

Neither language nor thought is a "picture". If you believe otherwise, here's a thought experiment:

Think of a cat. What room is it in? Is it standing, sitting or running? What color is the cat and what is the color of its eyes? Is it purring or meowing? How tall is it? What is it standing, sitting or running upon? etc.

Why!?

Why is my statement misinterpreted to refer to UML!? Oh brother, give me a break. A PICTURE takes up kilobytes or megabytes of storage -- information. If you were to describe the picture completely with words (or, for instance, logic expressions), in the same amount of detail -- a lossless compression -- you would surely use thousands and thousands of statements of fact about the picture and the entities in it! (I'm talking photographs of, say, plants, animals, etc.)
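
To put rough numbers on that claim, here is a minimal sketch; the image size, bytes-per-word figure, and sentence length are illustrative assumptions, and raw byte count is only a crude proxy for usable information:

    # Rough comparison of a photo's raw size to the text needed to match it.
    # All figures below are illustrative assumptions, not measurements.

    IMAGE_BYTES = 2_000_000       # assume a ~2 MB photograph of a plant or animal
    BYTES_PER_WORD = 6            # assume ~5 letters plus a space per word
    WORDS_PER_STATEMENT = 10      # assume a short factual sentence

    words_needed = IMAGE_BYTES / BYTES_PER_WORD
    statements_needed = words_needed / WORDS_PER_STATEMENT

    print(f"~{words_needed:,.0f} words, or ~{statements_needed:,.0f} statements,")
    print("to carry as many bytes as one modest photograph")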

And it is by no means apparent that the brain stores sensory input.

In spite of the fact that you can picture your house or apartment and describe it in great detail -- hundreds or thousands of facts, depending on how vividly you can picture it.

Perhaps there is a difference in the way the brain stores sensory representations of prototypical nouns, and the way it stores crisp memories of concrete nouns.

VPL vs. ASL

I don't think ASL is a good comparison for visual programming languages. ASL is built on a human body gesture space. VPLs are built on a display terminal. A better source of analogy for the linguistic potential of VPLs is probably CAD.

My own vague hunch about VPLs is that we haven't found the right calculus of composition. We're stuck in a kind of ad-hoc "widgets with output and input signals" world, for the most part.

-t

The fourth dimension

ASL seems to be built in four dimensions, X-Y-Z and time. It seems to be more typical to establish the context (such as I, you, they) first and then indicate a message about the actor identified in that context. As long as we continue to use two dimensional displays, VPLs will be unable to use the depth and time dimensions - though we can trick the eye into thinking that it is seeing depth (via shading and proportions) and time can be represented via direction (left to right, top to bottom, or with arrows).

Anyhow, I'm not sure that CAD offers any more help than the typical IO with lines in terms of establishing context and relationships. With a VPL you still end up with a language that has to have a set of rules governing actors and their relationships. And although you might start with transparent symbols, eventually they will morph and become less intuitive - becoming opaque in the process. And if the Sapir-Whorf hypothesis is not correct, a VPL will be no more powerful than a text-based one (though this is probably just a twist on all PLs being Turing complete).

5+ dimensions

Non-contrived sign languages have surprising analogues to the grammar of film. Film uses the four dimensions of space-time (even if 3D space is simulated in 2D), in addition to cinematic techniques like mise en scène and camera movement. In sign, while you can't simultaneously show complex sets as film can, you can progressively add "props" and actors in the 3D space, and even refer back to them long after you set them up.

"Camera movement" is also common to sign language and film. You might represent two people, one chasing the other, with each hand, (a wide-shot, if you will), and then "cut" to medium shot of the pursuer-- the other person is "out of frame"-- and you use your full body to sign what they are doing or saying. You can then cut back to a wide, or through re-orientation of your body, cut to the other person.

There's a great short film in ASL called Vital Signs (http://www.youtube.com/watch?v=NzybkJx0Lo8) that perfectly illustrates these techniques.

I haven't found any academic research that looks into the relation between sign languages and film grammar, but given that there's some evidence that signing pre-dated spoken language, I suspect that we comprehend films so effortlessly due to visual communication being one of the things our brains have been doing for a very long time.

As for VPLs, I agree that sign languages offer clues as to how they might work well, but we're still stuck in Church-Turing Thesis land, so we can only hope to improve the ease of programming, not add power.

Understanding vs. Speaking...

I grew up with English as a first language. Later in life TV arrived with a second language occupying half the air time.

I made an interesting discovery....

I eventually had no trouble in understanding people speaking in the second language... but since I didn't practise the skill, I still had almost as much difficulty thinking or speaking in that language!

Takes a lifetime to understand any language

I know when I type, I don't think of individual characters one at a time. For that matter, I already have in my mind the entire idea I'm expressing - the words just flow out - limited in volume by my inability to type fast using two fingers.

In trying to learn how to sign words using the alphabet, my current lack of proficiency means that I have to consciously think of each letter and how it's represented with my hand. I am assuming that with constant use and drilling, eventually one can think of words and ideas and they will flow out of the hands as if I were typing them on the keyboard.

So I'd agree that unless it becomes second nature - that is, until you no longer have to think about the mechanics of translating from one language to another - you really won't be able to absorb input in a language and use that same language for output. [Edit Note: from a PL standpoint, it's similar to first thinking how you would do something in Java when programming in Haskell.]

Interesting subject. I speak

Interesting subject.
I speak three languages fluently, but still find myself struggling to even begin learning something like ASL, which has very different mechanisms. While it is likely that the thesis of the authors of the book referred to is correct, I am not at all sure that the differences between a gesture-based, multi-dimensional language like ASL and spoken languages are so small. The multi-dimensional aspects (type of movement involving hands, arms, fingers; direction, speed, frequency, ...) make me think the expressiveness (measured as meaning per unit of time spent expressing) is higher, and the approach makes it possible for multiple things to be expressed at once. It's not how much you can express, it's how much you can express per unit of cost.
I absolutely think knowing more about ASL can help us better understand how to structure efficient gesture/visual-based languages.

A fascinating little book by Oliver Sacks on the subject - Seeing Voices (http://www.amazon.com/Seeing-Voices-Oliver-Sacks/dp/0375704078/ref=sr_1_8?ie=UTF8&s=books&qid=1251993050&sr=8-8).

ASL

I'm not sure you'll find what you are looking for with ASL. I learned a bit of Signed English back at the AI Lab in the 1970s, and had some exposure to ASL, which shares vocabulary with SE. To start with, both languages use words. After you gain some facility in SE or ASL, you find yourself translating those words into muscle movement sequences, just as you do when you speak or touch type. When you first watch SE or ASL, you find yourself having trouble picking out the component gestures, just as one finds it hard to pick out the sequence of phonemes and syllables in a spoken language. After a while, you recognize the words and can ignore the co-articulation. (SE and ASL both use some pointing, largely for pronouns, and there is a fair bit of word play using spatial puns in which words or phrases with similar spatial representations are cross played.)

That Wikipedia taxonomy sounds pretty bogus to me. You could make a similar taxonomy for French with its cognates as transparent, faux amis (e.g. deception) as translucent and the rest of the vocabulary as opaque, but what you'll have will be of limited use.

Amusingly, I was working on visual programming languages at the time. (I developed EOM, a fractal oriented graphical language.) My impression then and now is that you'll do better exploring doodles, art history and computer games if you seek inspiration for visual programming languages. Really.