Grammar with a G

Rob Freeman r.j.freeman at usa.net
Fri Apr 2 09:48:08 UTC 1999


david_tuggy at SIL.ORG wrote:

>      Rob Freeman wrote:
>
>      ****
> I was hoping some more debate might come up on the (abstract) merits of analogy,
> ... But as none seems forthcoming just a final comment on 'reductionism'. I don't
> see basing examples on syntactic abstractions (the usual idea of G-grammar) as
> inherently less reductionistic than basing syntactic abstractions on examples
> (which is analogy). Even where 'reductionism' might be thought of as bad, which is
> by no means always, grounding in examples is simply not more reductionist, if
> anything it is less.
>
>     ****
>
> As usual, Langacker's position makes a lot of sense to me (e.g. Foundations of
> Cognitive Grammar, vol. I (1987), 445 ff.) Analogy, if examined, turns out to
> necessarily involve perception of a similarity

Absolutely - I think of analogy and similarity interchangeably. And similarity seems
just the inverse of contrast, which is why I link analogy and SFG.

> ...and that similarity, to the
> extent that it is established in the language through usage, is a rule (a schema,
> in Langacker's terminology) that can be used by speakers to produce new forms. You
> have BOTH examples produced by rule, and the rule based on examples. When the
> schema (rule) is not yet established, using analogy necessarily involves
> activating the schema, precisely the kind of usage that will establish it. It is
> not a reductionist account, in that both mechanisms are expected and allowed for,
> and it shows rule-based and analogy-based accounts to differ only in degree, not
> in kind. "The distinction comes down to whether the schema has previously been
> extracted, and whether this has occurred sufficiently often to make it a unit
> [=established cognitive structure]."

This looks to me like the approach traditionally followed in applying e.g. Neural
Networks to language. The key assumption is that the structure you need to find is
finite (an assumption which probably came from generative linguistics, but which suits
the profile of problems appropriate for back-propagation networks and so is adopted
naturally for them - and which is, I think, the big problem with NNs). My view is that
this assumption of a finite number of key patterns to be extracted from the data as
rules is a mistake. I think there are many generalizations that can be made about the
data, and we need to be able to reach any of them, at any level.

Neural Networks (back-propagation nets, at any rate) work in this way by learning to
segment data into a fixed number of classes. These classes would be your rules. But I
see the classes, the possible groupings, as directly synonymous with meaning (an
'organization of experience'), not as just some finitely characterizable structural
step on the way to meaning. Where our classification of experience is finite (as in
our division of time into tenses), finite classifications work fine - hence the
success of the finite classifications NNs make in modeling the English past tense: you
can count out the classes you need and train your network to recognize them. But
general syntax seeks to code general meaning, so the possible groupings (or
classifications) must remain open (to represent the new meaning always being created).
We need to keep the examples themselves at a high level of detail so that we can
shuffle their groupings (classifications) around to represent subtle shifts of
meaning.
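
To make that concrete, here is a minimal sketch of the fixed-class setup I mean
(Python with scikit-learn assumed; the verbs, the class labels and the character
n-gram features are toy illustrations, not anyone's actual model). The past tense is
cast as classification into a closed inventory fixed before training, and a
back-propagation network is fitted to it:

    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.neural_network import MLPClassifier

    # Toy data: verb stems labelled with a *fixed* inventory of past-tense
    # formation classes, counted out before training ever starts.
    verbs  = ["walk", "talk", "jump", "sing", "ring", "swim", "hit", "cut", "put"]
    labels = ["+ed",  "+ed",  "+ed",  "i>a",  "i>a",  "i>a",  "same", "same", "same"]

    # Character n-grams as a crude stand-in for phonological form.
    vec = CountVectorizer(analyzer="char_wb", ngram_range=(2, 3))
    X = vec.fit_transform(verbs)

    # A small back-propagation network whose output layer has exactly one unit
    # per pre-counted class - the finite classification in question.
    net = MLPClassifier(hidden_layer_sizes=(16,), max_iter=5000, random_state=0)
    net.fit(X, labels)

    # Novel verbs can only ever be sorted into the classes enumerated above.
    print(net.predict(vec.transform(["stalk", "spit"])))

Whatever new verb you give it, the answer is one of those pre-counted labels; the
network cannot propose a grouping it was not built to output.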

If you apply 'rule-finding' protocols (usually networks) to the data, you will find
any number of regularities, and these correspond to the multiplicity of possible
'grammars'. But any one of them will only ever be one slice of the subtlety of
structure, and thus of meaning, of which the language is capable through collections
of examples.

So, in conclusion, you have put your finger on what I think has been the big problem
with analogical models of language hitherto: they have been looking for finite
classifications of the data they seek to model, essentially still trapped by the
generative-grammar style of thinking about the language system. We need to start to
see syntactic structure as vectors of examples pointing to meaning, not as algebras of
finite abstractions; then we will know what to do with our networks.
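
By way of contrast, here is a minimal sketch of the exemplar-style alternative I have
in mind (again Python with scikit-learn assumed, and again toy data and a toy
similarity measure, not a worked-out proposal). Nothing is reduced to a class
inventory in advance; the examples themselves are indexed, and a novel form is
interpreted by retrieving its nearest analogues on the fly:

    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.neighbors import NearestNeighbors

    # The stored examples themselves - no class labels fixed in advance.
    exemplars = ["walk", "talk", "jump", "sing", "ring", "swim", "hit", "cut", "put"]

    vec = CountVectorizer(analyzer="char_wb", ngram_range=(2, 3))
    X = vec.fit_transform(exemplars)

    # Index the raw examples - the "vectors of examples" pointing to meaning.
    index = NearestNeighbors(n_neighbors=3, metric="cosine").fit(X)

    # A novel verb is not forced into a pre-counted class; we recover the
    # particular examples it is most analogous to, and whatever grouping they
    # suggest is assembled on the spot, for this one case only.
    dist, idx = index.kneighbors(vec.transform(["spring"]))
    print([exemplars[i] for i in idx[0]], dist[0])

The same store of examples can be regrouped differently for every new form, which is
the openness I am arguing general syntax requires.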

Rob


