
Bruce Mayo Bruce.Mayo at uni-konstanz.de
Fri May 31 14:13:48 UTC 1996


Ron Kaplan writes (30.5.96)
>Avery has two main concerns about the linear logic proposals for LFG
>semantics:
>
>1.  Power:
>2.  Spirit of LFG:

        Not being a theoretician, I'm sure there are aspects of this
discussion I don't appreciate, but for what it's worth, let me give a
programmer's thoughts:
        1: Arguments against powerful formalisms on the basis of their
complexity or poor efficiency are misleading. Whatever formal
representation you use as input to a model-building implementation
(whether a computer or a brain simulation), it has to be passed through
a compiler that, using lots of optimization techniques, can reduce the
problem to something like its inherent complexity. E.g., a good compiler
will turn a deterministic finite automaton described in Prolog (a
formalism whose general evaluation is exponential-time) into a program
that is merely O(n). (There has to be a compiler - I don't think anyone
wants to say that the brain is actually a linear logic engine, do
they?) The formalism is there to help people understand the problem,
not to do the actual computation.
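        To make the compiler point concrete, here is a minimal sketch
(in Python, purely as illustration - the language, the names, and the
toy automaton are mine, not anything from the proposals under
discussion). The automaton is stated declaratively as data, the kind of
description a naive theorem prover might explore at exponential cost,
yet once "compiled" into a table lookup it runs in one step per input
symbol, i.e. O(n):

    # A deterministic finite automaton given purely declaratively:
    # a transition table, a start state, and a set of accepting states.
    # (Toy example: accepts strings with an even number of 'a's.)
    TRANSITIONS = {
        ("even", "a"): "odd",
        ("even", "b"): "even",
        ("odd",  "a"): "even",
        ("odd",  "b"): "odd",
    }
    START, ACCEPTING = "even", {"even"}

    def accepts(word):
        """Run the automaton in O(n): one table lookup per symbol."""
        state = START
        for symbol in word:
            state = TRANSITIONS[(state, symbol)]
        return state in ACCEPTING

    assert accepts("abba") and not accepts("ab")

The declarative description and the efficient procedure are two views
of the same object; nothing forces the running system to pay the cost
of the description language.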
        2: Arguments against powerful formalisms based on their
representational vagueness seem to me, however, quite legitimate (I think
this was Ron Kaplan's main point). The more powerful the formalism, the
less it tells you about the thing formalized. Decades of experience with
programming languages have shown that very compact, powerful formalisms,
despite their aesthetic appeal, are generally not good for building large
and complex models. Software engineering preaches the virtue of having
programming formalisms that look as much as possible like the objects
your program describes - so if you're dealing with lists, you use a
list formalism, other formalisms for structured data objects, control
structures, logic structures, etc., and you make as many of the
constraints on these explicit as you can (e.g. with type, range, and
precision declarations).
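        To illustrate what "making the constraints explicit" can look
like, here is a small sketch (Python again; the class and its fields
are hypothetical, invented only for this example). The declaration
admits exactly the objects it is meant to describe, and the checks
reject everything else at construction time:

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class Measurement:
        label: str    # type constraint, declared up front
        value: float  # range constraint: must lie in [0.0, 1.0]

        def __post_init__(self):
            if not isinstance(self.label, str):
                raise TypeError("label must be a string")
            if not 0.0 <= self.value <= 1.0:
                raise ValueError("value outside declared range [0, 1]")

    Measurement("vowel-length", 0.5)   # well-formed
    # Measurement("vowel-length", 2.0) # rejected: cannot be said at all

A deliberately weak formalism of this kind tells you more about its
domain than a powerful one, precisely because it refuses to express
what the domain excludes.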
         From the standpoint of building a working model of a natural
language processor, I think the overall formal structure has to present
a good match to the structure of the available linguistic
data-gathering procedures. In a natural language system you're going to
have vast
quantities of often arbitrary data - it's not for nothing that we spend
years and years learning languages - and you've got to be able to check
empirically available facts quickly and reliably against their formal
representations. So if you can get good and reliable data about surface
constituency in a language, context-free production rules give you a good
shorthand for those data. If you have clear functional valency data,
LFG-style argument structures can express them clearly. Keeping the formal
power of each of these domain formalisms low helps the data gatherers avoid
mistakes because they can't say things that wouldn't be meaningful in the
domain.
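        As a sketch of the first case, here is a toy fragment of
surface constituency written as context-free productions, with a naive
recognizer over them (Python once more; the grammar and lexicon are
invented for illustration and make no linguistic claim):

    # Productions: category -> alternative sequences of categories.
    GRAMMAR = {
        "S":  [["NP", "VP"]],
        "NP": [["Det", "N"]],
        "VP": [["V", "NP"], ["V"]],
    }
    LEXICON = {"Det": {"the"}, "N": {"dog", "cat"}, "V": {"saw", "slept"}}

    def parses(cat, words):
        """Does the word sequence rewrite from the category `cat`?"""
        if cat in LEXICON:
            return len(words) == 1 and words[0] in LEXICON[cat]
        return any(covers(rhs, words) for rhs in GRAMMAR.get(cat, []))

    def covers(rhs, words):
        """Can `rhs` be split over `words`, category by category?"""
        if not rhs:
            return not words
        return any(parses(rhs[0], words[:i]) and covers(rhs[1:], words[i:])
                   for i in range(len(words) + 1))

    assert parses("S", "the dog saw the cat".split())
    assert not parses("S", "dog the slept".split())

The rules are a compact, checkable shorthand for constituency
judgements: a field worker can compare each production directly against
the data, which is exactly the property argued for above.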
        As a help for the compiler writer, it would be good to know about
the formal complexity of each of the descriptive domains, but it would be
very troubling indeed if the complexity of the whole system - sentence
surface structure to conceptual structure - turned out to be anything less
than fully Turing-equivalent, would it not?


Bruce Mayo, FG Sprachwissenschaft
Universitaet Konstanz
Postfach 5560 D 177
D-78434 Konstanz
eMail:  bruce.mayo at popserver.uni-konstanz.de
Tel.:   (+49) 7531-88-3576





