16.1454, Disc: Re: A Challenge to the Minimalist Community
LINGUIST List
linguist at linguistlist.org
Mon May 9 05:41:25 UTC 2005
LINGUIST List: Vol-16-1454. Mon May 09 2005. ISSN: 1068 - 4875.
Subject: 16.1454, Disc: Re: A Challenge to the Minimalist Community
Moderators: Anthony Aristar, Wayne State U <aristar at linguistlist.org>
Helen Aristar-Dry, Eastern Michigan U <hdry at linguistlist.org>
Reviews (reviews at linguistlist.org)
Sheila Dooley, U of Arizona
Terry Langendoen, U of Arizona
Homepage: http://linguistlist.org/
The LINGUIST List is funded by Eastern Michigan University, Wayne
State University, and donations from subscribers and publishers.
Editor for this issue: Michael Appleby <michael at linguistlist.org>
================================================================
To post to LINGUIST, use our convenient web form at
http://linguistlist.org/LL/posttolinguist.html.
===========================Directory==============================
1)
Date: 06-May-2005
From: Emily Bender < ebender at u.washington.edu >
Subject: Re: A Challenge to the Minimalist Community
2)
Date: 06-May-2005
From: Martha McGinnis < mcginnis at ucalgary.ca >
Subject: Re: A Challenge to the Minimalist Community
3)
Date: 06-May-2005
From: Paul Kiparsky < kiparsky at csli.stanford.edu >
Subject: Re: A Challenge to the Minimalist Community
4)
Date: 07-May-2005
From: Seth Cable < scable at mit.edu >
Subject: Re: A Challenge to the Minimalist Community
-------------------------Message 1 ----------------------------------
Date: Mon, 09 May 2005 01:38:38
From: Emily Bender < ebender at u.washington.edu >
Subject: Re: A Challenge to the Minimalist Community
I would like to respond to Carson Schütze's motor vehicle analogy,
from LL 16.1439:
>Consider the following analogy. You and I both are given the task of
>designing a motor vehicle that will get someone from point A to point
>B. You come back with a Corvette, I come back with an SUV. Now
>you say, ''Let's go to a racetrack, I'll bet I can drive a circuit faster
>than you, which means I have the better design.'' I will of course
>object: speed was not specified as the desideratum of the vehicle.
>Both vehicles can get a person from A to B. Moreover, the SUV can
>do lots of things the 'vette can't: carry more than 2 people, hold lots
>of luggage, play DVDs for the back seat passengers, transport
>moderate-sized pieces of furniture, host a small business meeting,
>etc. My motivation in designing it was to make it a multi-purpose
>family vehicle. If I were now to go back to the drafting table and
>modify my SUV design so that it keeps all its current features but can
>also go as fast as a Corvette, surely I will have achieved a much
>more difficult task than the person who just designed the Corvette.
If I've understood the point of this analogy, it is that building a
system which takes UG and some natural language input and produces a
grammar that can be used to assign structures to (at least the
grammatical) strings in some corpus is somehow outside the original
point of what P&P was trying to do.
I agree with Asudeh here: Even setting aside for a moment the problem
of learning (i.e., the process of getting from UG to the grammar of a
specific language), the ability to take strings and assign them
structure constitutes at least part of getting from A to B. Most P&P
work (especially that within the Minimalist Program) operates at a
level of abstraction that seems to preclude working on the details of
assigning structures to actual strings. Doing so requires handling
not only the phenomenon of interest, but also its interaction with
everything else required to assign structure (and meaning). Deducing
that wheels and a transmission are both required for travel from A to
B is only part of the solution.
Work in the theoretical frameworks that do engage with computational
linguistics (e.g., LFG, HPSG, CCG) has repeatedly shown the benefits
of getting computers to keep track of all of the parts of a grammar,
so that the linguist can ask questions like: If I switch to this
analysis of case, what other changes does that require in my
grammatical system? Or, at the level of requirements on the formalism
(and from the perspective of HPSG): is the simple operation of
unification enough, or does an adequate account of the facts of
natural language require the ability to state relational constraints?
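To make the formal question concrete, here is a minimal sketch in
Python (a toy illustration only, not the DELPH-IN machinery or any
published HPSG implementation) of what the simple operation of
unification amounts to when feature structures are modelled as nested
dictionaries; relational constraints, such as requiring one list to be
the concatenation of two others, are precisely what this operation
cannot state on its own:

# Toy feature-structure unification over nested dictionaries.
# (Real HPSG feature structures also need typing and
# reentrancy/structure sharing, both omitted here.)
def unify(fs1, fs2):
    """Return the unification of two feature structures, or None on clash."""
    if isinstance(fs1, dict) and isinstance(fs2, dict):
        result = dict(fs1)
        for feat, val in fs2.items():
            if feat in result:
                merged = unify(result[feat], val)
                if merged is None:    # values under this feature clash
                    return None
                result[feat] = merged
            else:
                result[feat] = val
        return result
    return fs1 if fs1 == fs2 else None    # atomic values must match exactly

# Subject-verb agreement as unification: 'sg' clashes with 'pl'.
verb = {"SUBJ": {"AGR": {"PER": "3", "NUM": "sg"}}}
subj = {"SUBJ": {"AGR": {"NUM": "pl"}}}
print(unify(verb, subj))    # None: the combination is ruled out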
Grammatical models, when considered in all their detailed glory, are
complex enough that it is not possible to reliably follow all of the
implications of any proposed change in one's head or with pen and
paper. The initial development of infrastructure to interpret (and
parse with) grammars in any particular formalism requires an up-front
investment of time. There is also time-consuming work involved in
implementing theoretical ideas in order to test them. However, the
benefits of both of these investments are immense. They allow us to
test our ideas both for consistency with the rest of the grammatical
system and against a wider range of data than is possible without
computer assistance: The current fastest HPSG parser, `cheap'
(developed within the PET platform of Callmeier 2000), can process a
testsuite of 1000 sentences in a matter of minutes. Using the
regression testing facilities of [incr tsdb()] (Oepen 2001), it is
possible to compare the behavior of the current grammar against
earlier test runs and look for sentences whose predicted
grammaticality, number of parses, or parse structures have changed.
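As a schematic illustration of the kind of comparison such regression
testing automates (the data layout below is invented for exposition;
it is not the actual [incr tsdb()] profile format or interface), one
can diff two test runs sentence by sentence:

# Each run maps a test sentence to the number of parses the grammar
# assigned it (0 = predicted unparsable). Hypothetical data.
def compare_runs(old_run, new_run):
    """Print every sentence whose parse count changed between runs."""
    for sentence in sorted(set(old_run) | set(new_run)):
        old, new = old_run.get(sentence), new_run.get(sentence)
        if old != new:
            print(f"{sentence!r}: {old} -> {new} parse(s)")

old_run = {"Kim sleeps.": 1, "Sleeps Kim.": 0,
           "Kim saw the dog with the telescope.": 2}
new_run = {"Kim sleeps.": 1, "Sleeps Kim.": 1,
           "Kim saw the dog with the telescope.": 2}
compare_runs(old_run, new_run)
# 'Sleeps Kim.': 0 -> 1 parse(s)   (new overgeneration to investigate)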
Furthermore, this kind of work is not restricted to monolingual
investigation. As shown by the LFG ParGram (Butt et al. 2002, King et
al. in press) and HPSG Grammar Matrix (Bender et al. 2002) projects, it
is possible to explore issues of universals and variation across
languages in such a way that the proposed ideas can be tested by
using the grammars to parse testsuites (or corpora) of the languages
studied.
I do not believe that all syntactic research should take place in the
context of computational implementation. The implemented systems
discussed above have benefited greatly from theoretical work, as well
as contributing to it. At the same time, theoretical inquiry should
not eschew the potential benefits of computational work.
References:
Many of the resources mentioned above are available online
at: http://www.delph-in.net
Bender, Emily M., Dan Flickinger and Stephan Oepen. 2002. The
Grammar Matrix: An Open-Source Starter-Kit for the Rapid
Development of Cross-Linguistically Consistent Broad-Coverage
Precision Grammars. In Carroll, John and Oostdijk, Nelleke and
Sutcliffe, Richard (eds), Proceedings of the Workshop on Grammar
Engineering and Evaluation at the 19th International Conference on
Computational Linguistics. Taipei, Taiwan. pp. 8-14.
Butt, Miriam, Helge Dyvik, Tracy Holloway King, Hiroshi Masuichi, and
Christian Rohrer. 2002. The Parallel Grammar Project. In Carroll,
John and Oostdijk, Nelleke and Sutcliffe, Richard (eds), Proceedings
of the Workshop on Grammar Engineering and Evaluation at the 19th
International Conference on Computational Linguistics. Taipei,
Taiwan. pp. 1-7.
Callmeier, Ulrich. 2000. PET --- A Platform for Experimentation with
Efficient HPSG Processing Techniques. Natural Language Engineering 6
(1), Special Issue on Efficient Processing with HPSG. pp. 99-108.
King, Tracy Holloway, Martin Forst, Jonas Kuhn, and Miriam Butt. In
press. The Feature Space in Parallel Grammar Writing. Journal of
Research on Language and Computation, Special Issue on Shared
Representations in Multilingual Grammar Engineering.
Oepen, Stephan. 2001. [incr tsdb()] -- Competence and Performance
Laboratory. User Manual. Technical Report. Computational
Linguistics, Saarland University, Saarbruecken, Germany.
Emily M. Bender
Department of Linguistics
University of Washington
Linguistic Field(s): Computational Linguistics
Discipline of Linguistics
Syntax
-------------------------Message 2 ----------------------------------
Date: Mon, 09 May 2005 01:38:42
From: Martha McGinnis < mcginnis at ucalgary.ca >
Subject: Re: A Challenge to the Minimalist Community
Let me elaborate slightly on my response to the Sproat/Lappin
challenge.
Though a ''smart car'' may be a better analogue to Minimalism,
Carson Schütze's Corvette/SUV analogy makes a crucial point: P&P is
designed to meet certain goals, so to challenge it to meet additional
goals is to ask, not for an achievement equivalent to that of statistical
parsing, but for something much better. This is one issue underlying
the suggestion to approach the challenge by means of constructive
collaboration. Minimalist syntacticians can't reasonably be expected
to leap at the chance to pour resources into a computational
challenge that even computational linguists have been unable to meet.
A second issue is what determines the course of intellectual inquiry.
Each of us rightly prefers to pursue the problems that seem both (a)
interesting to us, and (b) within our capacity to solve. For each of us,
there are many solvable questions that appear uninteresting, and
many interesting problems that appear beyond our capacity to solve.
Given that Sproat and Lappin have formulated their challenge as a
goal for others to meet, it would be easy to infer that they do not find it
sufficiently interesting or solvable to undertake themselves. If so, no
one else can reasonably be criticized for feeling the same way.
My sense is that the problem they pose *is* potentially interesting and
almost certainly solvable. Barring a lucky breakthrough, however, the
problem seems most likely to be solved via extensive collaboration
among computational linguists, psycholinguists, and syntacticians. If
Sproat and Lappin are indeed convinced that the problem is both
solvable and worth solving, I invite them to lead a collaborative project
to solve it.
Cheers,
Martha
Linguistic Field(s): Computational Linguistics
Discipline of Linguistics
Syntax
-------------------------Message 3 ----------------------------------
Date: Mon, 09 May 2005 01:38:45
From: Paul Kiparsky < kiparsky at csli.stanford.edu >
Subject: Re: A Challenge to the Minimalist Community
In response to Asudeh, Goldsmith remarks that parsers might not need
to be able to distinguish grammatical from ungrammatical sentences.
> That's not quite right. There is not universal agreement to the position
> that the ability to distinguish grammatical from ungrammatical
> sentences is an important function to be able to model directly,
> whether we are looking at humans or at software. There are certainly
> various serious parsing systems whose goal is to be able to parse, as
> best they can, any linguistic material that is given to them -- and
> arguably, that is what we speakers do too. I think of Microsoft
> Research's NLPWin parser as an example of such a system.
But isn't detection of ungrammaticality necessary for correct
disambiguation? For example, if a parser can't recognize the
ungrammaticality of Chomsky's (2), how can it recognize that
(3) has just one reading, the one corresponding to (1)?
(1) Which violins are these sonatas easy to play on?
(2) *Which sonatas are these violins easy to play on?
(3) What are they easy to play on?
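To make the point concrete, here is a toy sketch (purely illustrative,
with no relation to any actual parser): a system that enumerates the
two candidate gap assignments for (3) but lacks whatever constraint
rules out (2) will report two readings where speakers get exactly one.

# The configuration in (2): wh-extraction of the object-of-play gap
# while the tough-subject binds the "on ___" gap. The check below is
# a stand-in for whatever the grammar uses to rule that out.
def licensed(wh_gap, subject_gap):
    return not (wh_gap == "object-of-play" and subject_gap == "on-object")

candidates = [
    {"wh_gap": "on-object", "subject_gap": "object-of-play"},  # as in (1)
    {"wh_gap": "object-of-play", "subject_gap": "on-object"},  # as in (2)
]

readings = [c for c in candidates if licensed(**c)]
print(len(readings))    # 1; without the check the parser would report 2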
Linguistic Field(s): Computational Linguistics
Discipline of Linguistics
Syntax
-------------------------Message 4 ----------------------------------
Date: Mon, 09 May 2005 01:38:49
From: Seth Cable < scable at mit.edu >
Subject: Re: A Challenge to the Minimalist Community
Although they have worked to clarify their position on the matter, it
still puzzles me what Drs. Sproat and Lappin intend to learn from "the
community's" success or failure at their proposed challenge. Passages
such as the following strongly suggest that they see the challenge as
some sort of 'crucial experiment' bearing on the whole edifice of
related P&P proposals.
> It seems to us that if the claims on behalf of P&P approaches are to
> be taken seriously, it is an obvious requirement that someone
> provide a computational learner that incorporates P&P mechanisms,
> and uses it to demonstrate learning of the grammar of a natural
> language.
>
> Surely it is long past time to ask for some substantive evidence in
> the way of a robust grammar induction system that this view of
> grammar induction is computationally viable. Most other major
> theoretical models of grammar have succeeded in yielding robust
> parsers in far less time, despite the fact that they do not, as far we
> know, make analogous claims about the nature of language learning.
Suppose that we learn (somehow) that in its present form P&P is
thoroughly, inescapably computationally 'unviable', and that a learner
of the sort Drs. Sproat and Lappin imagine cannot be built. This
would certainly stand as a consideration against the theory in its
present form. But it would simply be one consideration within an
expansive, turbulent sea of supporting and conflicting data. There
exist, after all, a great many phenomena which stymie the P&P model,
tough-movement being one well-known case. On the other hand,
there are other domains of fact that P&P handles rather superbly, in
ways superior to 'competing' proposals. The Root-Infinitive Stage in
language development, for example, has been most productively
studied and analyzed by researchers assuming some version of P&P.
At the risk of sounding banal, there are advantages and
disadvantages to P&P as there are to *any* model at this very early
stage of our understanding (and it *is* early yet to seek out theories
which approach anything near a widely-encompassing model of the
language faculty; no one even knows how relative clauses work). We
do ourselves a great disservice by setting up 'litmus tests' for proposals.
One is not being 'unscientific' in pursuing HPSG, LFG, or statistical
models despite their being unable to illuminate the Root-Infinitive
Stage, nor is one being 'dogmatic' in adopting the TAG system of
Frank (2002) despite its being unable to derive sentences in which
wh-movement interleaves with raising (Frank 2002, chapter 6). The most
reasonable goal for researchers interested in questions regarding the
language faculty is to assess the advantages and disadvantages of
each proposal, always with an eye towards incorporating and
combining the insights of each. In this regard, posing ''challenges'' to
one another is unproductive and a little absurd. If one wishes to
examine how ideas developed to cover one domain of fact could apply
to another, the standard operating procedure is to take up the mantle
oneself.
In this light, consider the following statement by Drs. Sproat and
Lappin.
> Finally since our challenge has actually stimulated relatively little
> discussion from the P&P community, we suspect the following may
> also be one response:
>
> 8. Ignore the challenge because it's irrelevant to the theory and
> therefore not interesting.
>
> RESPONSE: This is the ''answer'' we had most anticipated. It does
> not bode well for a field when serious scientific issues are dismissed
> and dealt with through silence.
I am one of many individuals who did not earlier contribute to this
discussion. Was it because I found this challenge "irrelevant to the
theory"? Of course not; success or failure at the task could stand as
consideration for or against certain ideas in the P&P literature.
Indeed, it might be that success at the task Drs. Sproat and Lappin
imagine would be an interesting feat for a P&P learning algorithm
(though, surely, not any more interesting than the discovery of some
subtle prediction regarding language acquisition). I maintain,
however, that this challenge is not the Great Race Around the World
they seem to imagine it to be, but only one part of a very long, slow,
and plodding discussion that we should not seek to bring prematurely
to its conclusion.
Arguing in Defense of Disinterest,
Seth Cable.
Linguistic Field(s): Computational Linguistics
Discipline of Linguistics
Syntax
-----------------------------------------------------------
LINGUIST List: Vol-16-1454