Rules vs. Lists

Rob Freeman lists at chaoticlanguage.com
Sun Jul 6 06:20:44 UTC 2008


Aya,

We have to be careful about what we regard as "examples". As I said to
Jouni, phonemes should be thought of as classes, not examples.
Similarly "roots", "lexemes", "morphemes", etc.

When speculating that there are more rules than examples, the real
question is not how many ways you can combine X lexemes, but how many
lexemes you can abstract from Y utterances.

And before you do that you have to define what you mean by "lexeme".

What we have at root are a number of utterances with a certain amount
of variation between them. You need that variation to carry a signal,
as you say. But the number of lexemes you allocate will depend on
where you slice that variation: which slice of variation you allocate
to lexemes, which to phonemes, etc. To an extent it will be arbitrary.
The distinction between a phoneme and a lexeme is not so clear in, for
instance, tone languages.

That said, if we decide that the slice of variation we allocate to
lexemes corresponds broadly to conventionalized meanings, it seems
reasonable to me that there will be a fairly consistent number across
cultures (perhaps tending a bit higher in highly conservative
cultures). You could certainly get by with only five. Computers use
only two. But I doubt there will ever be a culture sufficiently
innovative that it will want to think of new things to say quite that
often!
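The trade-off behind "you could get by with only five; computers use only
two" can be sketched numerically: with an inventory of k distinct symbols,
distinguishing N conventionalized meanings needs words at least
ceil(log_k(N)) symbols long. A minimal sketch (the vocabulary size of
50,000 is an assumed round figure, not something from this thread):

```python
import math

def min_word_length(inventory_size, num_meanings):
    """Shortest word length that can distinguish num_meanings
    using inventory_size distinct symbols: ceil(log_k(N))."""
    return math.ceil(math.log(num_meanings) / math.log(inventory_size))

VOCABULARY = 50_000  # assumed round figure for distinct meanings

# 2 symbols (computers), 5 (a five-lexeme code), 12 and 40
# (small and large phoneme inventories, roughly)
for k in (2, 5, 12, 40):
    print(k, min_word_length(k, VOCABULARY))
```

A smaller inventory still carries the same number of contrasts; it just
pushes the variation into longer strings, which is the "it all evens out"
point made later in the exchange.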

So the question of how many lexemes is largely one of how we choose to
label the regularities we find.

What I am suggesting is more basic than that. I'm suggesting that
maybe when we break down utterances we have more regularities than we
have thought to look for before, however we choose to label them. I
don't think it is a question of redundancy, though all that extra
information could be used to make the signal more robust.
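The point that extra information "could be used to make the signal more
robust" is the standard error-detection idea: redundant structure lets a
receiver notice when noise has corrupted a message. A toy sketch using an
even-parity bit (a generic textbook device, not anything proposed in the
thread):

```python
def add_parity(bits):
    """Append one redundant bit so the total number of 1s is even."""
    return bits + [sum(bits) % 2]

def check_parity(bits):
    """True if the message passes the even-parity check."""
    return sum(bits) % 2 == 0

msg = add_parity([1, 0, 1, 1])      # -> [1, 0, 1, 1, 1]
assert check_parity(msg)            # clean signal passes

corrupted = msg[:]
corrupted[0] ^= 1                   # one bit flipped by noise
assert not check_parity(corrupted)  # the redundancy exposes the error
```

The parity bit adds nothing to the content of the message; its only job is
to make corruption detectable, which is redundancy in exactly Aya's sense.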

-Rob

On Sat, Jul 5, 2008 at 11:06 PM, A. Katz <amnfn at well.com> wrote:
> I assume that "the system" under consideration would be all-inclusive of
> every item and every level, so this seems fair, although it's Rob who is
> leading this discussion on more rules than examples.
>
> Jouni Maho, you are implying there are roots, so in addition to the
> lexicon there would presumably be a list of roots, and these would add to
> the number of rules.
>
> If there are roots, then presumably each root could appear with each
> suffix (unless there's an additional rule that says they can't), and
> there should be more lexemes than you listed.
>
>
> The question that seems more interesting to me is: could there ever be a
> human language with only five lexemes? If there could, why haven't we
> found one like that?
>
> Language is an information bearing code. The number of contrasts helps
> determine the amount of information transmitted. If there are fewer
> phonemes, then words have to be longer. If there are more phonemes, the
> same information can be transmitted in shorter words. More
> grammatical syntax allows for the same information to be coded in
> shorter sentences, in terms of word count. Less grammatical
> morphology requires more words per sentence. It all evens out, based
> on a very simple calculation. Languages of the world deploy the same
> basic phonological inventory inherent in our physiology in different
> ways in order to transmit about the same amount of information per
> time unit. Every language codes for a certain amount of redundancy in order to
> deal with noise in the signal.
>
> Redundancy could be viewed as adding extra rules that don't directly help
> with transmission of information. Is that what you are getting at, Rob?
>
> Best,
>
>     --Aya Katz



More information about the Funknet mailing list