dlevere at ilstu.edu
Wed Dec 23 21:04:08 UTC 2009
Hauser, Chomsky, and Fitch make a big deal of long-distance dependencies:
""At the lowest level of the hierarchy are rule systems that are limited to local dependencies, a subcategory of so-called “finite-state grammars.” Despite their attractive simplicity, such rule systems are inadequate to capture any human language. Natural languages go beyond purely local structure by including a capacity for recursive embedding of phrases within phrases, which can lead to statistical regularities that are separated by an arbitrary number of words or phrases long-distance, hierarchical relationships are found in all natural languages for which, at a minimum, a “phrase-structure grammar” is necessary. It is a foundational observation of modern generative linguistics that, to capture a natural language, a grammar must include such capabilities." HC&F (2002:1577)
I reply to this in the paper just put up on my webpage, mentioned earlier today, as follows:
"Whether humans choose a finite vs. phrase structure grammar is precisely the empirical point that Pirahã raises. The 'infinity' of the Pirahã language, for example, might lie outside the grammar in the Chomskyan sense - in discourse - via the ability to fashion stories out of sentences rather than sentences out of phrases. There could, in other words, be a longest sentence in Pirahã, yet not a longest story. If that were the case, then NP&R would be wrong, since Merge applies only to form sentences and phrases from lexical items. And HC&F would be misguided by failing to relate the general property of recursion to stories in lieu of or in addition to recursion in sentences. Theories that do not have anything to say about facts external to sentences (e.g. all versions of Chomskyan theory) cannot appeal to discourse, thought, etc. for support for their theory of grammar, e.g. the role that recursion plays in the FLN. To beat this horse another way, recursion could be responsible for the infinitude of natural languages in a way unanticipated by Chomskyan theory, by allowing infinity to be a property of discourses, rather than sentences."
The kinds of examples that are standardly adduced for long-distance dependencies include:
(1) a. 'Who do you think John believes __ (that Bill saw__)?'
b. 'Ann, I think he told me he tried to like ___'
Pirahã does not have structures like this. However, Pirahã does have gaps. There are both places where pronouns are 'understood' (what generative theory calls, or called, 'empty categories') and some displacement of constituents (all described by me in various places). But the gaps are not like those in (1). There are structures like:
(2) 'Dan did not use to speak Pirahã. His children did not know either. Nevertheless __ knows well now.', where the gap refers back to 'Dan'.
The latter example is a long-distance dependency involving a gap, but it is not a fact about sentential structure, but discourse structure. Generative theory itself recognizes the difference, e.g. work by Huang in the mid 80s on the difference between pro-drop and missing objects in languages like Brazilian Portuguese:
(3) __ Coloca __ aí. '(You) put (it) there.'
In (3) the subject gap behaves like a regular pronoun but the object gap like a discourse variable, according to some analyses.
Whether or not long-distance dependencies are different kinds of things within and without the sentence depends on the functional strategies for questions, topics, etc. exercised by a particular language. But there aren't studies comparing and contrasting the different notions that I know of (which probably only shows my ignorance).
For functionalists, the Pirahã data might be rare, but it shouldn't be a shock, because - if I am right - these data simply suggest that the 'infinitude' produced by the computational system of Pirahã is found in discourses, not in sentences.
On 23 Dec 2009, at 15:27, Brian MacWhinney wrote:
> Dear FunkNetters,
> I am puzzled that no one has questioned the merger of long-distance relations into a single type or a single process, as advocated by Givon and Everett. I tend to think of pronominal anaphora as the prototypical case of a long-distance discourse process, although devices such as class inclusion ("the bus" referring anaphorically or even cataphorically to "the vehicle") are relevant too. For sentence-internal long-distance processes, I assume that we are thinking about things like the placement of wh-words at the beginning of English sentences. If we look at just these two cases, do we really want to say that the two domains/processes are cognitively equivalent? In the discourse case, the reference is established in mental space and processing involves re-invoking that referent. In the within-sentence case for English wh placement, processing is radically different. The referent is unknown. In fact there is no problem or issue with referent identification. Instead, we are typically trying to fill verb argument structures that have been morphed around, possibly to serve the interests of focal marking (as opposed to methods such as wh in situ or sentence final question markers etc.). I see good reasons to link up the discourse anaphoric processes to relativization and complementation, but not to things like wh placement. If we focus on relativization, is the non-repetition of the head what makes it long-distance? But the distance in that case is not the same as for wh, right? Perhaps we don't want to talk about long-distance, but just about gaps. But then does the existence of resumptive pronouns in the relative clause mean that the language is no longer "long distance" for that construction?
> Basically, I am not sure that the notion of "long distance" in itself provides any leverage and I am curious whether the real issue with Pirahãn type languages is the non-presence of gaps (whatever those are).
> -- Puzzled in Pittsburgh (Brian MacWhinney)