Rules vs. Lists

A. Katz amnfn at
Mon Jul 7 13:05:42 UTC 2008

Okay. Your point is that the linguistic pie can be sliced many, many
different ways. I don't disagree, but I have another point that I have
been trying to make: there is no difference between one method of
slicing it and another when we are studying how a language works. If it
all adds up correctly, all the different ways are equivalent, and there is
no reason to prefer one method over another, unless we have adopted a
particular constraint, such as economy of rules or mathematical elegance.

Now, a particular speaker may adopt one way, and another speaker may adopt
a second. A third speaker may adopt a third. There may be as many
different ways of parsing a language as there are speakers, although that is
doubtful, and the question is perfectly open to scientific investigation.

It's okay to study the details of how speakers process language. It is
also okay to find ways to describe language apart from speakers. What is
not okay is to confuse what any given speaker does with how the language
works.

On Mon, 7 Jul 2008, Rob Freeman wrote:

> Aya,
> You seem to have taken too seriously my little joke that no society
> would be sufficiently innovative to want only two conventionalized
> forms of speech. I'm sure there are all kinds of cognitive constraints
> which favor shorter sequences of more symbols. I always remember the
> Japanese colleague who said despairingly of English "The letters are
> easy, but there are just so many of them all together" :-)
> It would be a fun conversation to talk about what cognitive constraint
> fixed our common arithmetic base at exactly the most common number of
> fingers. Equally I would like to see how you allocate tone to a vowel
> in Chinese without first knowing the word. But I fear that all such
> argument about one systematization or another might take us away from
> the point I am trying to make here. The point I want to focus on is
> that, whatever your classification of elements, it may be possible to
> find more rules over combinations than there are combinations in the
> first place.
> -Rob
> On Mon, Jul 7, 2008 at 12:20 AM, A. Katz <amnfn at> wrote:
> > Rob Freeman wrote:
> >
> >>What we have at root are a number of utterances with a certain amount
> >>of variation between them. You need that variation to carry a signal,
> >>as you say. But the number of lexemes you allocate will depend on
> >>where you slice that variation. Which slice of variation you allocate
> >>to lexemes, which to phonemes etc. To an extent it will be arbitrary.
> >>The distinction between a phoneme and a lexeme is not so clear in, for
> >>instance, tone languages.
> >
> > Why do you think the distinction between a phoneme and a lexeme is not so
> > clear in tone languages? Isn't tone just one attribute out of many that a
> > vowel can have?
> >
> >
> >>That said, if we decide the slice of variation we allocate to lexemes
> >>corresponds broadly to conventionalized meanings, it seems reasonable
> >>to me that there will be a fairly consistent number across cultures
> >>(perhaps tending a bit higher in highly conservative cultures). You
> >>could certainly get by with only five. Computers use only two. But I
> >>doubt there will ever be a culture sufficiently innovative that it
> >>will want to think of new things to say quite that often!
> >
> > The fact that we can productively encode the information available in any
> > utterance of any language using a binary code as in a computer does not mean
> > that there are any human languages that actually employ a binary code of
> > contrasts.
> >
> > The fact that we favor the decimal system over binary in our numerical
> > calculations has something to do with the limitations of our working
> > memory. For the same reason, there are no languages with only two
> > phonemes (much less just two morphemes or two lexemes or two clauses).
> >
> > Human language doesn't work that way in real time due to processing
> > constraints.
> >
> > Best,
> >
> >     --Aya
> >
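
Rob's central claim above, that there can be more rules over combinations than there are combinations in the first place, follows from simple counting if a "rule" is read as any generalization over some subset of the observed combinations: a set of n items has 2^n subsets. That reading of "rule", and the example phrases, are my assumptions, not Rob's definitions. A minimal sketch in Python:

```python
from itertools import combinations

# Four observed word combinations (hypothetical examples).
observed = ["red ball", "red car", "big ball", "big car"]

# Read a "rule" as any non-empty subset of the observed
# combinations that the rule generalizes over.
rules = [subset
         for k in range(1, len(observed) + 1)
         for subset in combinations(observed, k)]

print(len(observed))  # 4 combinations
print(len(rules))     # 2**4 - 1 = 15 candidate rules: already more
```

Even at n = 4 the candidate rules outnumber the combinations almost four to one, and the gap widens exponentially as n grows.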
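Aya's decimal-versus-binary remark, like Rob's earlier aside about shorter sequences of more symbols, is the familiar trade-off between alphabet size and string length: a larger symbol inventory buys shorter strings for the same information. A minimal sketch in Python (the example number 1000 is my own choice):

```python
n = 1000

decimal = str(n)          # base 10: ten symbols, short strings
binary = format(n, "b")   # base 2: two symbols, longer strings

print(decimal, len(decimal))  # '1000', 4 digits
print(binary, len(binary))    # '1111101000', 10 bits
```

The same quantity needs 4 symbols in base ten but 10 in base two, which is one way to see why a two-phoneme language would strain real-time working memory.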

More information about the Funknet mailing list