theory and formalism in LFG: too much inertia? (long msg)
Joan Bresnan
bresnan at csli.Stanford.EDU
Fri Sep 10 18:15:11 UTC 1999
I hesitate to start another discussion of the philosophy of science,
which linguists, in their naive scientism, are generally so boring
about. Nevertheless:
>>>"Yehuda N. Falk" said:
[...]
>
> For my second act as list maintainer, I would like to issue a plea
> concerning the list. This is supposed to be a *discussion* list, a way for
> us to discuss, debate, and argue with each other, in a very open and public
> fashion. However, over the past year or so the list has served as little
> more than a forum for conference announcements.
Researchers in LFG recognize, I think, that there is a difference
between a cognitive theory and a formal architecture for modelling
cognitive processes. The formal architecture will be useful if you
can use it to model a very wide range of hypotheses; it should have
great expressive power. But this makes it less explanatory as a
theory, because, as Frege said, as the extension of a concept expands,
its content diminishes; when the extension becomes all-embracing, the
content vanishes altogether. In this sense, a powerful formalism is a
weak theory.
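To make the contrast concrete, here is a toy sketch (mine, in Python;
the attribute names are invented, and this is not part of any actual
LFG implementation) of attribute-value unification, the sort of
operation at the heart of a unification-based formalism. Notice that
the mechanism merges any consistent structures whatsoever, "unnatural"
ones included; by itself it rules nothing out:

    # Toy sketch of attribute-value unification; the structures and
    # attribute names below are invented for illustration.
    def unify(f, g):
        # Merge two feature structures; None signals a clash.
        if isinstance(f, dict) and isinstance(g, dict):
            result = dict(f)
            for attr, value in g.items():
                if attr in result:
                    merged = unify(result[attr], value)
                    if merged is None:
                        return None   # feature clash: unification fails
                    result[attr] = merged
                else:
                    result[attr] = value
            return result
        return f if f == g else None  # atoms must match exactly

    # A "natural" structure unifies...
    print(unify({"PRED": "pro", "NUM": "sg"}, {"PERS": "3"}))
    # ...but so does an absurd one; the formalism does not care.
    print(unify({"COLOR-OF-TUESDAY": "green"}, {"NUM": "17"}))

That expressive freedom is exactly what makes the formalism useful for
modelling, and exactly what keeps it from being an explanatory theory.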
A powerful explanatory theory would be one which explains why things
are the way they are ***and not otherwise***. Linguistic theories
have usually aimed for explanatory adequacy in this sense. They
deliberately try to limit the range of possibilities to just those
which are "natural" and empirically justified. However, many
linguists misunderstand the relation between theory and formal
architecture for modelling/expressing theories. They try to have a
very restrictive theory, and build the restrictions into their formal
architecture. Then when they need to express new ideas which change
their original assumptions, the whole formal architecture has to
change as well. From an engineering standpoint, as well as good old
common sense, this is very inefficient. On the other hand, a flexible
formal architecture which allows you to describe everything and
anything is not sufficient as an explanatory theory.
There is of course a relation between formal architecture and
substantive explanation: for example, some architectures may be
incapable of allowing you to express important empirically motivated
generalizations.
LFG has separated its very powerful formalism (which lets us describe
many phenomena, natural and unnatural) from the various substantive
theories of argument realization, phrase structure typology, etc.,
which attempt to explain the limited typological ranges found in
natural languages. The latter have generally been treated as
metatheories, and have seldom been incorporated into the formal
architecture. This means that formalization has lagged behind the
development of substantive theories. From the engineering standpoint,
the inertia of the formalism has been valued. But from the standpoint of
cognitive theory, I wonder: is this a good thing?
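The division of labor can be pictured schematically (again my own
hypothetical sketch, with invented principle names, not an implemented
LFG system): the formal architecture checks only structural coherence,
while the substantive principles sit outside it as separate filters.

    # Hypothetical sketch of the division of labor described above:
    # the formalism admits structures freely; substantive principles
    # live outside it as separate filters ("metatheory").
    def formally_wellformed(fstruct):
        # The formal architecture: only checks structural coherence,
        # here just that every value is an atom or another structure.
        return all(isinstance(v, (str, dict)) for v in fstruct.values())

    # Substantive principles, stated over structures but not built
    # into the formalism itself. These names are illustrative.
    def has_subject(fstruct):
        return "SUBJ" in fstruct

    def subject_is_nominative(fstruct):
        return fstruct.get("SUBJ", {}).get("CASE") == "nom"

    SUBSTANTIVE_THEORY = [has_subject, subject_is_nominative]

    def admitted(fstruct):
        # A structure must pass the formalism AND the metatheory.
        return formally_wellformed(fstruct) and all(
            principle(fstruct) for principle in SUBSTANTIVE_THEORY)

    clause = {"PRED": "yawn", "SUBJ": {"PRED": "pro", "CASE": "nom"}}
    print(admitted(clause))  # True: formally and substantively licensed
    bare = {"PRED": "rain"}
    print(admitted(bare))    # False: formally fine, substantively excluded

The point of the picture is that the substantive principles can be
revised or replaced without touching the formal machinery, which is
the engineering virtue of inertia just noted.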
The preceding rationale for splitting theory and formalism assumes
that the inventory of relevant concepts crystallized in the formalism is
static. The linguistic metatheory is essentially viewed as a
restriction of the formalism. But we are living in dynamic times,
cognitively speaking: the old MIT-style generative epistemology which
held sway when LFG was invented is being overtaken by alien concepts
such as optimization, probability, comparative grammaticality,
markedness, and the like. These are turning out to have interesting
consequences for explaining typologies. This suggests that we
need to put more effort into enriching and expanding our formal
architectures as a top priority, no?
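For instance, the core of Optimality-Theoretic evaluation is easy to
state: rank the constraints, score each candidate's violations, and
take the candidate whose violation profile is lexicographically best
under the ranking. A toy sketch in Python (the constraint definitions
and the input /apto/ are invented for illustration):

    # Toy sketch of OT evaluation: the winner is the candidate whose
    # violation vector is lexicographically smallest under the ranking.
    def evaluate(candidates, ranking):
        def profile(cand):
            return tuple(constraint(cand) for constraint in ranking)
        return min(candidates, key=profile)

    # Invented constraints; each returns a violation count.
    def onset(cand):    # penalize vowel-initial forms
        return 1 if cand[0] in "aeiou" else 0

    def max_io(cand):   # penalize deletion relative to input /apto/
        return max(0, len("apto") - len(cand))

    def dep_io(cand):   # penalize epenthesis relative to input /apto/
        return max(0, len(cand) - len("apto"))

    ranking = [onset, max_io, dep_io]   # one hypothetical ranking
    print(evaluate(["apto", "pto", "tapto"], ranking))  # -> "tapto"

With the ranking onset >> max_io >> dep_io, the epenthetic candidate
wins, since it violates only the lowest-ranked constraint; the
typological interest lies in what changes as the ranking is permuted.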
JB