grammaticality judgements

Spike Gildea spikeg at OWLNET.RICE.EDU
Wed Mar 8 16:00:21 UTC 2000


Two quick points:

(1) I have received a number of requests for the reference to Labov (1975).
It is essentially an expansion of several ideas in Chapter 8 in his 1972
textbook on sociolinguistics:

Labov, William.  1975.  Empirical foundations of linguistic theory.  The
scope of American linguistics, ed. by Robert Austerlitz: 77-133. Lisse: The
Peter de Ridder Press.

(2) I agree with Debra Ziegeler's comments on the usefulness of *reliable*
grammaticality judgements.  The very process of testing grammaticality
judgements on a representative sample of speakers increases the probability
that reliable judgements will be separated from spurious ones.  My
original concern was that many fieldworkers -- who do not themselves speak
the language they are trying to describe or theorize about -- might rely
excessively on a single speaker for all their judgements, and might not
therefore discover which judgements represent real speech patterns in some
community and which represent artifacts of a forced judgement task in the
elicitation context.  As sentences being tested become increasingly
abstract and unlikely ever to be uttered, my concern is that the latter
becomes more common.  Of course, even artifacts of forced judgement tasks
can be replicable, or, if there are only two outcomes (yes/no), can be taken
to represent dialectal variation.  Even so, increasing the number of
speakers involved is an important improvement on the model given in most
field methods classes, where you work with a single speaker's intuitions (a
model which is explicitly endorsed by folks who just want to describe "my
own dialect/the dialect of my informant", and who thereby sidestep the
issue of whether variation might represent real dialect distinctions or
just spurious responses to the grammaticality judgement task).

Of course, I have never tried to quantify this effect.  I have the
anecdotal evidence based on the experience that everyone in field
linguistics must have had, where a native speaker changes his/her mind
(sometimes multiple times) about whether something could in fact be said,
or what it would mean if it could be said.  I also have the anecdotal
evidence that on more than one occasion, after I developed a hypothesis
based on examples from a single, relatively sophisticated speaker (read:
experienced linguistic informant), I was unable to get consistent agreement
on critical examples in back-translation questionnaires that I ran by other
speakers from the same communities.  This latter step has since become a
core aspect of my field methodology, as I no longer trust examples that
have not been produced or back-translated without hesitation by at least a
half-dozen speakers.

It might be interesting to design actual quantifiable experiments to test
these anecdotal claims: Are there types of sentences that yield less
reliable judgements?  Are there types of elicitation that yield less
reliable databases of utterances?

Participant observer "enriched" elicitation (also suggested in Labov 1975
as a way to add statistically rare examples to a corpus of recorded
'natural' speech) is nice if you are fluent enough to avoid foreigner talk
data, where speakers adjust their language to accommodate your perceived
weakness as an interlocutor.  I never have been.  I have succeeded a couple
of times in getting such examples by working with a monolingual speaker and
a bilingual speaker at the same time, feeding the bilingual speaker general
questions that ought to lead towards certain types of constructions and
then recording their entire interactions.  I have also recorded the
interactions when presenting sentences to a group for grammaticality
judgements, or for fine-grained semantic distinctions.  While the resulting
answers are interesting, the recorded texts are still more interesting as a
genre of speech that is very rich in the language of conflict and
resolution. But such texts take a hell of a long time to transcribe and
gloss...

Spike


