complexity measures

David L Fertig fertig at acsu.buffalo.edu
Mon Jan 19 14:06:45 UTC 1998


 
I can imagine someone arguing along the following lines that under
conditions of normal transmission the overall complexity of every natural
language will tend to remain near the maximum that the human brain is
capable of handling.
 
1. Some linguistic changes are "random" from the point of view of
complexity, i.e. complexity/simplicity plays no role in their motivation.
(The side effects of changes that ARE motivated by considerations of
complexity/simplicity would be included in this category.)
 
2. Although individual changes of this type can of course either increase
or decrease complexity, on balance they are bound to increase complexity.
(This is the "entropy" that Robert Whiting discusses in his post. It's
easy to see that this has to be true, since in any system there will always
be more logically possible changes that increase complexity than ones that
decrease it.)
 
3. Changes that are motivated by considerations of complexity/simplicity,
on the other hand, are only activated to keep languages from exceeding the
complexity limit, i.e.  from becoming unlearnable or unusable. Their
effects still leave a language quite near maximum complexity. Together,
random and "natural" changes will thus tend to keep all normally
transmitted languages very close to the human limit for overall complexity
(and therefore roughly "equally complex"). And even after episodes of
non-normal transmission (such as pidginization), changes of the first type
will gradually restore a language to maximum complexity.
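 
To make the intuition behind points 1-3 concrete, here is a toy simulation
(purely illustrative; the numbers, the single "complexity" score, and the
upward bias are arbitrary assumptions of mine, not a model of any real
language): random changes nudge a complexity score up or down, with upward
changes slightly more likely, and a simplifying change is triggered only
when the score reaches a fixed ceiling.
 
import random
 
# All parameters below are arbitrary stand-ins, chosen only to
# illustrate the argument; MAX_COMPLEXITY represents the hypothesized
# learnability/usability limit.
MAX_COMPLEXITY = 100
 
def simulate(start, steps=10_000, upward_bias=0.55, seed=0):
    """Random changes drift complexity upward on balance (point 2);
    a simplifying change fires only at the ceiling (point 3)."""
    rng = random.Random(seed)
    c = start
    for _ in range(steps):
        # Points 1-2: a change made without regard to complexity,
        # but with more ways to add complexity than to remove it.
        c += 1 if rng.random() < upward_bias else -1
        c = max(c, 0)
        # Point 3: simplification is triggered only at the limit.
        if c > MAX_COMPLEXITY:
            c = MAX_COMPLEXITY - rng.randint(1, 5)
    return c
 
# Whether the "language" starts simple (as after pidginization) or
# already near the ceiling, it ends up hovering just below the limit.
print(simulate(start=10), simulate(start=95))
 
Obviously this proves nothing about real languages; it only shows that the
three assumptions, taken together, are enough to produce the result that
every normally transmitted language ends up close to the ceiling.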
 
I'm not yet sure if I really buy this argument myself, but I'd be
interested in reactions, comments, references.  Obviously, if there's
anything to it, all the details remain to be addressed, in particular the
issue of global vs. local complexity and the significance of the kinds of
sociolinguistic factors that Miguel Carrasquer Vidal discusses in his
contributions to this discussion.
 
While I'm at it, let me throw out one more line of reasoning that arrives
at the same conclusion:
 
Languages (or speakers or learners) do not tolerate purposeless
complexity. It seems to me that this is just a generalized version of the
constraint on acquisition that Clark calls the "Principle of Contrast",
a.k.a. "no exact synonymy". When faced with any complexity, learners will
either figure out a function for it or eliminate it. This means that all
linguistic complexity serves some kind of purpose.
 
There is no logical limit to the amount of complexity that can potentially
be put to good (cognitive and/or communicative) use, and consequently
humans will tend to let their languages become as complex as their brains
can handle. In other words, up to a certain point speakers/learners will
tend to deal with complexity by putting it to use; only when it goes
beyond that point will they deal with it by eliminating it.
 
 
Thanks in advance for any comments.
 
David Fertig


