Another view

Wallace Chafe chafe at HUMANITAS.UCSB.EDU
Fri Jan 10 18:38:04 UTC 1997


Try looking at it this way.

A language is fundamentally a way of associating meanings with
sounds (and/or some other symbolic medium).  Meanings (let's not get
hung up on the term) are mixtures of cognitive, emotive, interactive,
and intratextual information, covering all facets of human experience.
A language imposes on experience a huge, complicated set of meaning
elements and ways of combining them, just as it imposes sound
elements and ways of combining them.  Much of this is language-
particular, but some is universal for various reasons, only one of
which may be innateness.

One might be able to imagine a language in which meanings were
associated with sounds in a direct, unmediated way, but no real
language is like that, and the reason is that languages change.  In all
languages, grammatic(al)ization, lexicalization, and the analogic
extension of patterns have produced situations in which functionally
active meanings are often symbolized in the first instance by partially
or wholly fossilized formerly functional elements and combinations,
whose own associations with sounds may nevertheless remain intact.
(I say partially fossilized because sometimes there is leakage back into
at least semiactive consciousness, as with awareness of the literal
meanings of idioms and metaphors.)

One linguist may look at this situation and say, "Aha, there's a lot here
that is arbitrary and nonfunctional.  Hurrah for autonomous syntax!"
Another linguist may look at the same situation and say, "There's a lot
here that is motivated, and when it seems not to be, and when we're
lucky enough to know something about how it got to be this way, we
can see that there once was a motivation that has now been obscured by
grammaticalization etc."  My own opinion is that we ought to be
looking for functional motivations wherever we can find them, and
that an autonomous syntax based on elements that have never had
anything but a formal, otherwise unmotivated status provides nothing
more than a way of feeling happy about a failure to probe toward a
deeper understanding, including in many cases a historical one.

There seem to be three major ways in which spinners of theories
connect with reality.  One is through observing how people actually
talk, one is through doing experiments, and one is through inventing
isolated sentences and judging their grammaticality.  Each has its
advantages and disadvantages, and improved understanding ought to
come from a judicious mixture of the three (as just emphasized by Lise
Menn), though unfortunately we are all biased by training and
experience to do mainly one, and sometimes even sneer at the others.
It may have relevance to this debate that there has indeed been a
correlation between the autonomous syntax approach and the use of
grammaticality judgments.  It looks, too, as if those who observe how
people actually talk tend on the whole to be the least enchanted by the
autonomous approach.  Dan Everett's remarks on this score are
particularly welcome, however.

As for parsing, where this discussion began, it's useful in illuminating
some of the patterns that exist in the intermediate area between
meanings and sounds.  But whether those patterns form an intact
skeleton that can be studied apart from the meat attached to it has
always seemed to me, at least, quite dubious.  In any case, if a machine
were ever truly to understand something that was said to it, its
understanding would have to be in cognitive, affective, and social
terms--in terms of all facets of human experience--which lie quite
beyond anything presently available in the computer world.

Wally Chafe


