? "Vocabulary Density"
ECOLING at aol.com
Sun Aug 8 02:38:38 UTC 1999
[ Moderator's note:
In general, I do not allow cross-posting between these two lists, because
those who subscribe to both receive two copies, which in turn makes it
difficult for those who read both to keep straight where a reply will be
seen. However, since in my opinion this is a very important query, I will
cross-post to both groups any responses that come from either group, and ask
that those who read both bear with this departure. If there is very much
discussion--more than a half-dozen responses or so--I will find a different
way to deal with it that avoids needless duplication.
--rma ]
A query to those of you interested in language evolution
and deep language relations...
Has anyone analyzed, in a careful technical way, the question
of whether modern languages have IN NORMAL USE
a greater density of vocabulary across a meaning space
of the same size than reconstructed languages do?
Is there any conceivable way to separate that question
from the influence of what is reconstructible vs. what is lost?
The answer is not at all a foregone conclusion,
as much vocabulary having to do with practical traditional
activities, such as shipbuilding, fishing, agriculture,
and animal husbandry, is going out of use among those raised
in cities who have no experience with such activities,
no reason to discuss them, and no occasion even to hear them discussed.
Nor is it a foregone conclusion in the other direction, because one might wish
to argue that the semantic space of modern life is itself larger
than the semantic space of hundreds or thousands of years ago.
More concepts, more words to refer to them. The density could
thus remain constant. But does it in fact?
Would we not have to make matters comparable by studying
the density (saturation?) of vocabulary across traditional
semantic spaces that may remain constant over time?
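For concreteness, here is a minimal sketch of what such a
measurement might look like, assuming a hypothetical lexicon
tagged by semantic domain. Every domain name, slot count, and
lexeme count below is invented purely for illustration: density
is simply the count of distinct lexemes in ordinary use divided
by the number of concept slots in a fixed traditional domain.

    # Hypothetical sketch: vocabulary density over fixed semantic
    # domains. All domains, slot counts, and lexeme counts are
    # invented for illustration only.

    # A "semantic domain" is held constant across language stages
    # as a fixed inventory of concept slots (e.g. kinds of
    # watercraft, parts of a plough).
    DOMAIN_SLOTS = {
        "shipbuilding": 40,  # assumed concept slots in the domain
        "kinship": 25,
        "agriculture": 60,
    }

    # Distinct lexemes attested IN NORMAL USE, per domain, per
    # language stage. For a reconstructed stage the counts are
    # limited to what survives the comparative method, which is
    # exactly the access bias the query worries about.
    ATTESTED = {
        "modern":        {"shipbuilding": 12, "kinship": 18, "agriculture": 30},
        "reconstructed": {"shipbuilding": 22, "kinship": 24, "agriculture": 45},
    }

    def density(lexeme_counts, slots):
        """Lexemes per concept slot, domain by domain."""
        return {d: lexeme_counts[d] / slots[d] for d in slots}

    for stage, counts in ATTESTED.items():
        d = density(counts, DOMAIN_SLOTS)
        print(stage, {dom: round(v, 2) for dom, v in d.items()})

A comparison like this is only meaningful if the slot inventory
is fixed independently of either language's vocabulary, which is
precisely the worry raised next.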
Or is it circular: is the amount of vocabulary available and in
ordinary use itself the DEFINITION of how "large" the semantic
space is, so that distinct terms retain a constant "semantic
distance" from each other? I would think that is a purely
circular way of reasoning, and cannot be valid.
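To spell the circularity out concretely (a toy illustration,
not anyone's proposed measure): if the size of the semantic
space is defined as the number of distinct terms, the density
ratio collapses to a constant and can never distinguish one
language from another.

    # Toy illustration of the circularity: if the "size" of the
    # semantic space is DEFINED as the number of distinct terms
    # in use, then density is 1 by definition for every language.

    def circular_density(vocabulary):
        space_size = len(vocabulary)  # space defined by the vocabulary itself
        return len(vocabulary) / space_size

    print(circular_density({"boat", "ship", "skiff"}))  # 1.0
    print(circular_density({"boat"}))                   # 1.0

Any usable density measure therefore needs its denominator fixed
by something external to the vocabulary, such as the concept-slot
inventories sketched above.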
I'm not sure this question can be addressed in any useful way,
given the biases of how we get access to the facts.
And I will be happy if I am proven ignorant by someone
citing some good references.
The reason it seems important to me is that the density of
vocabulary across semantic space may be an important
causal factor in language change.
If Livermore Labs can use its nuclear engineering skills
to analyze traffic-flow dynamics using concepts from physics
(metaphorically gases, liquids, solids, state transitions, etc.),
why can linguistics not develop some similarly technical
concepts and analyses? Even the field of voting (electoral)
systems is now becoming quantified and structured with
causally (?) related variables.
Best wishes,
Lloyd Anderson