The Reality of Sentences

Rob Freeman rjfreeman at EMAIL.COM
Mon Apr 14 23:57:10 UTC 2003


Steve,

On Monday 14 April 2003 5:23 pm, you wrote:
> In a message dated 4/13/03 8:22:39 PM, rjfreeman at email.com writes:
> << Exactly, that is why I say Tom was confused. He was opposing these two
> different things as if they conflicted. Generative grammar rules are rules
> describing competence. Why then does he compare them with a system
> (emergent generalization) specifying performance.  >>
>
> I'll let Prof Givon respond as and if he chooses.  But I think if you take
> a close look, you'll see you are having your cake and eating it too.  If
> one admits the generative model cannot exclude performance, it is hard to
> see why one would turn around and use performance in distinction to a
> generative model.

I think it was a describes/explains thing. As I understand it, the generative
model described performance without ever pretending to explain it. That's how I
understand the distinction, anyway.

There needs to be a performance for it to be described, but the description
is not the same as the thing itself.

An analogy is a sketch, which describes an object, but nobody should mistake
it for the real thing.

Another analogy I like is an animation, where you can see movement, but the
movement doesn't exist.

> The comparison TG made is valid precisely because the distinction
> between competence and performance is artificial and non-operational.  And
> there's nothing in the generative model that prevents it being a model for
> performance.

Then it would be invalid precisely because there is a distinction. That's a
core tenet of generativism, isn't it?

In later years people tried to build generatively motivated models of
language performance, but such rule-based performance models have proved
unsuccessful.

As I said, far from 90%, I would be surprised if Tom can produce a _single_
sentence which is comprehensively described by grammar rules. 90% is more like
the proportion which can be _partially_ described by any single consistent set
of rules.

Papers like the classic by Pawley and Syder give a good discussion of this
failure of rules to completely describe (let alone explain) performance.

> Generative grammar rules MUST describe performance, no matter what the
> theoretical positioning.  There is no such thing as "rules describing
> competence" that exclude performance.  Because "competence" refers to
> nothing else than the capabilities and constraints on performance.  It has
> no meaning otherwise.  The only contact with reality generative grammar has
> is in its relation to performance.  If it has nothing to say about
> performance, it has nothing to say.

Yes, but description need not be complete, and it need not explain.

> It is preposterous to say a model that states a human is "competent" to
> jump 20 feet has nothing to do with a model "specifying" that a human can
> "perform" a 20 foot jump.  And so there is no reason to distinguish models
> on that basis.

But your own point in your last message was that what a system can do is
largely independent of what a system is. "Where legs are not available,
wheels might do a fine job." You presented that as an example of the
distinction between function and form, but I think it illustrates the
distinction between competency and system equally well.

Once again, the animation analogy is a favourite of mine. The system moves,
but the movement of the system is lateral to the perceived movement.

For an example of how description is different from system, one which I feel
has particular relevance for language, consider a system which consists of a
group of people, say 10 sportsmen and 5 academics. Now let's say that this
group also contains 7 men and 8 women. What is the true nature of this system,
man/woman or sportsman/academic? Both perspectives describe a competency of the
system (to be split into two), but in general both competencies will not apply
at the same time. Organize the system according to profession and you will
disorganize it with respect to gender, and vice versa.

The classes are competencies of the system (split into man/woman or
sportsman/academic), but the nature of the system is a bunch of individuals,
and it is irreducible: it can be completely modelled only as such.

If you try to model such a system based on one competency or another (let
there be two classes of people, X and Y), you will always be chasing your tail.
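
To make the toy example concrete, here is a minimal sketch in Python (the
particular pairing of gender with profession is my own invention, chosen only
so the totals come out to 10/5 and 7/8):

    from collections import Counter
    from itertools import groupby

    # The 15 individuals of the example above.
    people = (
        [("sportsman", "man")] * 6 + [("sportsman", "woman")] * 4
        + [("academic", "man")] * 1 + [("academic", "woman")] * 4
    )

    # The same underlying system supports two different classifications...
    print(Counter(p[0] for p in people))   # sportsman: 10, academic: 5
    print(Counter(p[1] for p in people))   # man: 7, woman: 8

    # ...but organize it by profession and each group comes out mixed
    # (disorganized) with respect to gender, and vice versa.
    by_profession = sorted(people, key=lambda p: p[0])
    for profession, members in groupby(by_profession, key=lambda p: p[0]):
        print(profession, Counter(m[1] for m in members))

Neither split is wrong, but no single split captures the individuals
themselves.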

Description is different from system.

> A completely different problem is confusing "emergence" with "performance".
> They are not congruent.  Once again, "emergence" classically is the
> situation where a new combination is greater than the sum of its parts.  No
> matter how much performance may involve new, emergent elements, it does not
> need to.

It does not need to, but it can, and (under communicative and social pressure
I'm sure) it does. And it can specify those new elements down to the level of
performance. Therefore it can be seen as a performance model.

Though I am not sure if it is seen that way classically. I am mostly interested
in emergent paradigmatic categories: not emergent syntagmatic elements, and not
even emergent paradigmatic elements, but categories, like those traditionally
called noun and verb. Such emergent categories allow you to specify syntax very
exactly, and flexibly.
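
To give a flavour of what I mean by emergent paradigmatic categories, here is a
rough sketch (the toy corpus and the crude overlap measure are my own
inventions, not a serious model): words which can substitute into the same
contexts fall together into classes that look like the traditional noun and
verb.

    from collections import defaultdict

    corpus = [
        "the dog runs", "the cat runs", "the dog sleeps", "the cat sleeps",
        "a dog barks", "a cat purrs",
    ]

    # Record, for each word, the (left neighbour, right neighbour) contexts
    # in which it occurs.
    contexts = defaultdict(set)
    for sentence in corpus:
        words = ["<s>"] + sentence.split() + ["</s>"]
        for i in range(1, len(words) - 1):
            contexts[words[i]].add((words[i - 1], words[i + 1]))

    def shared(w1, w2):
        """Number of substitution contexts the two words have in common."""
        return len(contexts[w1] & contexts[w2])

    # 'dog' and 'cat' share contexts such as ('the', 'runs'): a noun-like class.
    # 'runs' and 'sleeps' share contexts such as ('dog', '</s>'): a verb-like class.
    print(shared("dog", "cat"), shared("runs", "sleeps"))   # both > 0
    print(shared("dog", "runs"))                            # 0: different categories

The categories are not listed anywhere in advance; they emerge from the
distributions, which is what lets them be both exact and flexible.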

> In the inherently ruled phenomenon of language, there is
> obviously a ubiquitous element that cannot be called emergent.  In fact,
> generative grammar can be seen as a model describing the non-emergent
> element of language.

Yes, I agree with that. I don't think the actual rules of generative grammars,
the particular forms they describe, tell us much, though; only the parameters
they have in common, the types of forms, their connectedness or "topology", if
you like.

I think it is probably just the fact that they specify generalities in terms
of context-free substitution that is important.
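
As a throwaway illustration of what I mean (the toy grammar below is my own,
just to show the mechanism, not a claim about English): in a context-free
system a category is rewritten the same way wherever it occurs, regardless of
its neighbours.

    import random

    grammar = {
        "S":  [["NP", "VP"]],
        "NP": [["the", "N"], ["a", "N"]],
        "VP": [["V"], ["V", "NP"]],
        "N":  [["dog"], ["cat"]],
        "V":  [["chased"], ["saw"]],
    }

    def expand(symbol):
        if symbol not in grammar:           # terminal word
            return [symbol]
        # Context-free: the expansion chosen for a category ignores whatever
        # symbols happen to surround it.
        production = random.choice(grammar[symbol])
        return [word for part in production for word in expand(part)]

    print(" ".join(expand("S")))   # e.g. "the dog chased a cat"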

Identifying that "connectedness" is to my mind a concrete contribution of
generativism. It indicates the "principles and parameters" which need to
channel our model of language, without saying anything about the engine, the
driving force of that model, which I think is the urge to generalize.

> Once again, the glaring problem I think you've run into here is the big
> blind spot caused by structural analysis.  The difference between the
> "generative" and "emergent" elements of language appear to reflect two
> different functions of language structure.  Unless one separates these
> functions, the validity of one element always seems to lose out in theory
> to the other.  Both elements are functionally valid and language structure
> may be seen as at best a compromise between the two.

I don't think this has anything to do with the functional/structural
distinction. Functionalism is a shoo-in for this theory, though, because it
gives contrast, the systemic network, priority over category. So the underlying
theory of Functionalism says the same thing as I am saying: that category is a
description, but is not fundamental.

I have a hunch that the contrasts of a systemic network are just the flip side
of the generalization which I see as the engine of emergent structure. It was
this parallel which first got me interested in Functionalism.

> And, on second thought, it is interesting to contrast "competence" versus
> "performance" in an operational, synchronic way.  We have the hypothetical
> language "competence" of non-human primates with its apparent constraints
> on language structure.  Then we have, a little later on the evolutionary
> tree, human language performance that "out-performs" that earlier
> competence.  Does this mean the generative model is wrong from an
> evolutionary perspective?  Of course not.  What it tells us is that human
> linguistic competence -- generative grammar -- is not a static event, but
> an evolving one.  And that suggests understanding how that evolution
> occurred is key to understanding human language.

I don't think generative grammar rules themselves are a product of biological
evolution, though perhaps their "context-free connectedness" is. I'm sure the
actual forms of language are the natural result, and a reflection, of the human
tendency to find order in the world, just as Functionalism says language is an
expression of our tendency to find meaning in contrast. The search for order in
the case of language is perhaps parameterized according to "substitution
independently of context" (CFG). There is a bunch of vocal-specific stuff, and
there is greater complexity in the order found by humans, but other than that I
think the parameters on the generalizations made (or alternatively the
contrasts) are the only inherited component.

That's just a hypothesis, of course.

-Rob


