Analytic languages and their function. (4)

A. Katz amnfn at
Sun May 28 06:53:09 UTC 2006

Steve Long wrote:

>  But for the purposes of this discussion re typology -- in
>theory-- pidgins represent the closest thing we have to a language
>with the bare minimum of grammatical features.  ("...nouns, verbs,
>adjectives, and so on tend to have only one invariant phonological
>form... The verbal root itself does not express any particular tense or
>aspect. When used in a clause, it is up to the listener to interpret this
>aspect of meaning in accord with the context.")  That is close enough to
>the form I'd suspect early languages would have taken.  The point is that
>form is more in the direction of analytic languages.

The fact that you "suspect" early languages would have taken a form with
"a bare minimum of grammatical features" is very much influenced by the
typology of the language you speak. Babies learning English start out in a
very isolating way: monolingual English speakers, when they first start
talking, do sound like Tarzan. But this is not universal. According to Dan
Slobin, Turkish-speaking children use inflectional morphology from the
one-word stage. As soon as they start speaking, it's inflected all the way.

In the present, all humans are capable of speaking isolating languages if
they are exposed to them, but isolating languages are not necessarily
easier to understand or to come up with from scratch. Evidence from home
sign shows that systematicity tends to get built into early systems of
communication, because it's easier to remember a sign if it patterns with
another contrasting one.

Isolating languages are more context dependent. For sophisticated
urbanites, that's not a problem. But do you assume early man already had
a theory of mind as developed as that of the average modern-day Chinese or
English speaker? Grammar allows language to become autonomous from theory
of mind. While no language is context free, grammar allows us to spend
less of our resources on mind-reading.

>Bee communication is extremely sophisticated -- especially because it is
>truly representational -- especially in terms of mapping distance and
>Wild chimps exhibit nothing as sophisticated in terms of intricate
>Bird calls are far more intricate than any.
>The underlying mechanisms are of course quite different.

I'm not sure what your point is in bringing up the communicative
abilities of bees and birds. If it's meant to dispel the notion that only
our close relatives have sophisticated systems of communication, I
take no issue with that.

Bees -- if their dance language is to be credited (and there apparently is
some controversy about that among the bee experts) -- are communicating
accurately without a theory of mind, in a highly structured way. If bees
can do this, do you think early man couldn't? Is that your point?

><<Meaning abides in contrast. The idea that we could have amassed a
>language out of individual monomorphemic words, one word at a time, is
>Come on.  Day and night is a contrast.  You don't need affixes or even
>compounds to discriminate in plain words between two things.

Now imagine a language with only two words. How likely is it that the
contrast chosen would be "night" and "day"? Of what use would this two
word language be to anyone?

Given a single binary contrast to begin with, one might suppose that "yes"
and "no" would be good candidates, but in reality, they're not. Try
getting a chimp to use "yes" and "no." It's very hard, because the
concepts are extremely abstract. (I speak from experience.) It's easier to
teach the three part contrast of "banana", "grape" and "apple", than the
single binary contrast of "yes" and "no." But you have to be very careful
when you introduce choices such as "grape" or "apple." You might think
you are teaching nouns. But the chimp will interpret it as "give me a
grape" or "give me an apple", more often than not. You can't teach
separate nouns as nouns until you have a much bigger lexicon. Until that
time, each word stands for a full proposition.

This is not due to mental limitations of the subject. It's due to the way
information theory works. The meaning of words is determined by the
contrasts available. It would come out the same way if the research
subject were an unenculturated human.
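The information-theoretic point can be sketched with a toy calculation (the function name and the equal-likelihood assumption are mine, not anything from the post): the information a single signal can carry grows with the number of contrasting alternatives available, so a system with only one binary contrast conveys very little per use.

```python
import math

def bits_per_signal(n_choices):
    """Information carried by one signal drawn from n equally likely,
    mutually contrasting alternatives (Shannon's measure)."""
    return math.log2(n_choices)

# A lone binary contrast ("yes"/"no") carries only 1 bit per use:
print(bits_per_signal(2))   # 1.0
# A three-way contrast ("banana"/"grape"/"apple") already carries more:
print(bits_per_signal(3))   # ~1.58 bits
```

On this crude measure, each added contrast enriches every signal in the system, which is one way of putting the claim that meaning abides in contrast rather than in isolated items.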

My point is that a single contrast isn't enough to make a language out of.
A bigger inventory is required. But you'll never get to a bigger
inventory if you try adding one word, or even one contrast, at a time.
Speakers will refuse to use them, because they won't seem meaningful.
"Yes" and "no" make sense only in the context of a very rich system of
already available propositions. You have to start with propositions
first, then work your way back to names for single items. Once you have a
meaningful array of choices relevant to the speakers,
then it makes sense to build a grid, a paradigm, based on the types of
signaling devices available to you. It's easier to remember contrasts this way.

When I say that early language was probably highly structured, I'm not
suggesting it was anything like Latin. Tense may not have been one of the
things coded for. But it's unlikely there were any monomorphemic
phonological words. That would have been too inefficient to function,
given a small lexicon.



Dr. Aya Katz, Inverted-A, Inc, P.O. Box 267, Licking, MO
65542 USA
(417) 457-6652 (573) 247-0055
