21.2803, Review: Psycholinguistics; Syntax: Blevins & Blevins (2009)

linguist at LINGUISTLIST.ORG
Sun Jul 4 12:38:07 UTC 2010


LINGUIST List: Vol-21-2803. Sun Jul 04 2010. ISSN: 1068 - 4875.

Subject: 21.2803, Review: Psycholinguistics; Syntax: Blevins & Blevins (2009)

Moderators: Anthony Aristar, Eastern Michigan U <aristar at linguistlist.org>
            Helen Aristar-Dry, Eastern Michigan U <hdry at linguistlist.org>
 
Reviews: Monica Macaulay, U of Wisconsin-Madison  
Eric Raimy, U of Wisconsin-Madison  
Joseph Salmons, U of Wisconsin-Madison  
Anja Wanner, U of Wisconsin-Madison  
       <reviews at linguistlist.org> 

Homepage: http://linguistlist.org/

The LINGUIST List is funded by Eastern Michigan University, 
and donations from subscribers and publishers.

Editor for this issue: Joseph Salmons <jsalmons at linguistlist.org>
================================================================  

This LINGUIST List issue is a review of a book published by one of our
supporting publishers, commissioned by our book review editorial staff. We
welcome discussion of this book review on the list, and particularly invite
the author(s) or editor(s) of this book to join in. If you are interested in 
reviewing a book for LINGUIST, look for the most recent posting with the subject 
"Reviews: AVAILABLE FOR REVIEW", and follow the instructions at the top of the 
message. You can also contact the book review staff directly.

===========================Directory==============================  

1)
Date: 04-Jul-2010
From: Michael Maxwell < maxwell at umiacs.umd.edu >
Subject: Analogy in Grammar
 

	
-------------------------Message 1 ---------------------------------- 
Date: Sun, 04 Jul 2010 08:29:37
From: Michael Maxwell [maxwell at umiacs.umd.edu]
Subject: Analogy in Grammar

 
Discuss this message: 
http://linguistlist.org/pubs/reviews/get-review.cfm?subid=2640134

 

Announced at http://linguistlist.org/issues/20/20-2666.html 

EDITORS: James P. Blevins and Juliette Blevins
TITLE: Analogy in Grammar
SUBTITLE: Form and Acquisition
PUBLISHER: Oxford University Press
YEAR: 2009

Mike Maxwell, Center for Advanced Study of Language, University of Maryland

DESCRIPTION

The chapters in this book, which originated as presentations at a 2006
workshop, display a wide range of approaches to the place of analogy in
linguistics. At one extreme, the papers in Part I (''Typology and Complexity'')
barely touch on analogy. (One paper, by Finkel and Stump, does not mention it,
despite an index entry for ''analogy, brief history of''.) Most papers in Parts
II (''Learning'') and III (''Modeling Analogy'') are deeply committed to some form
of analogy as being fundamental to human languages and to the humans who learn
and speak those languages -- although what they mean by ''analogy'', and how it
figures into language (as a replacement for a rule-based analysis, or as a
means of finding a rule-based analysis), varies considerably, as I will discuss
in my evaluation. The papers in Part III apply computational techniques to
model speaker intuitions. Goldsmith's paper (in Part II) is also a
computational model, although not explicitly of speaker intuitions. One feature
that unites all the papers is a focus on morphology and phonology; syntax is
mentioned in the introduction (in a way which will not make generative
linguists happy), but comes up only in passing elsewhere. In fact, most of the
papers touch in some way on the issue of inflection (declension and paradigm)
classes.

The editors introduce the volume by discussing the general notion of analogy,
and briefly summarizing its history in linguistics. I now turn to the individual
papers.

In ''Principal parts and degrees of paradigmatic transparency'', Finkel and Stump
(henceforth F&S) argue that the ''No-Blur Principle'' (Cameron-Faulkner and
Carstairs-McCarthy 2000; see also Carstairs-McCarthy 1994) is disproved by data
from the Comaltepec Chinantec and Fur languages. 

The No-Blur Principle states (roughly) that in any cell of a paradigm, at most
one of the possible affixes can fail to uniquely identify the declension or
conjugation class. Put differently, the No-Blur Principle holds that paradigm
membership tends to be transparent: most inflectional affixes either make it
possible to tell which paradigm class a word inflected with that affix belongs
to, or represent the default affix for a particular cell. Under their analysis,
F&S show that this principle holds for only a few of the conjugation classes of
Comaltepec Chinantec; indeed, the paradigms are almost maximally non-transparent.
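
The principle lends itself to a mechanical check. The following sketch (in
Python; the toy paradigm data are invented for illustration and are not the
Chinantec or Fur facts) flags any cell in which more than one affix fails to
uniquely identify its inflection class:

    # Toy checker for the No-Blur Principle as paraphrased above: in each
    # paradigm cell, at most one affix may fail to uniquely identify its
    # inflection class (that one affix being the cell's ''default'').
    from collections import defaultdict

    def no_blur_violations(paradigms):
        """paradigms: {class_name: {cell: affix}} -> offending cells."""
        violations = []
        cells = {cell for forms in paradigms.values() for cell in forms}
        for cell in sorted(cells):
            classes_per_affix = defaultdict(set)
            for cls, forms in paradigms.items():
                classes_per_affix[forms[cell]].add(cls)
            # Affixes shared by two or more classes identify no class.
            blurred = [a for a, cs in classes_per_affix.items() if len(cs) > 1]
            if len(blurred) > 1:      # more than one ''default'' candidate
                violations.append((cell, sorted(blurred)))
        return violations

    toy = {
        "class1": {"sg": "-a", "pl": "-i"},
        "class2": {"sg": "-a", "pl": "-u"},  # ''-a'' could be the default sg
        "class3": {"sg": "-o", "pl": "-u"},  # ''-u'' blurs pl, but it is alone
        "class4": {"sg": "-o", "pl": "-e"},  # ''-o'' also blurs sg: violation
    }
    print(no_blur_violations(toy))   # [('sg', ['-a', '-o'])]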

Unfortunately, the Chinantec data is mangled (for the original data, see Pace
1990). In particular, the description of the paradigm tables in the text says
that a prime (similar to an apostrophe) is used to represent ballistic stress (a
phonetic characteristic nearly unique to Chinantec languages), and the IPA
glottal symbol is used to represent syllable-final glottal stop (in accordance
with Pace's usage). However, in several conjugation classes -- but not all -- the
glottal symbol in fact marks ballistic stress. Since this is not consistent (for
the most part, a sequence of glottal symbol + prime actually represents a
glottal plus ballistic stress, while a glottal symbol by itself represents
ballistic stress), the tables are almost impossible to interpret. 

The analysis is based on these erroneous forms, and correcting the errors in
table 2.7 (Pace's Class A conjugations) reveals an important pattern: the real
glottal stop (as opposed to the ballistic stress written as a glottal) is found
throughout conjugation classes P3, P4, and P13, but nowhere else. The glottal is
part of the root, not the suffix, as Pace makes clear: conjugations P1, P2, P12,
and P16 are for ''roots which are not checked by glottal'', whereas conjugations
P3, P4, and P13 are for ''roots which are checked'' (1990: 44-5, 49). Similar facts
hold for Class C (table 2.9); but the Class B verbs of table 2.8 are all
non-glottal, hence this error affects them only in that all instances of a
glottal stop symbol should be replaced by the prime. That the glottal belongs to
the root means that for purposes of evaluating the No-Blur Principle, the
glottal and non-glottal conjugations can be collapsed based on their
phonologically-based complementary distribution, which would greatly reduce the
number of paradigm classes, reducing violations of the No-Blur Principle.

The case against the No-Blur Principle is thus not as strong as it might at
first seem. It is beyond the scope of this review to re-analyze the paradigms of
Comaltepec Chinantec; rather, I will suggest alternatives to certain points
which might further reduce the number of paradigm classes, making the overall
analysis more transparent.

By far the most variable portion of the paradigms is the second person
completive forms. For Class A verbs, for example, Pace postulates four basic
paradigm classes (which reduce to two given the glottal vs. non-glottal root
distinction); but these expand into 27 sub-classes when the second person
completive is taken into account. (Actually, a few more classes and sub-classes
arise for other reasons.)  One might therefore explore the possibility that the
second person completive is in fact the underlying form of the root, and that
the other person/number/aspect affixes override the tone and stress of the
root. This is essentially what Pace does by making this the (primary) citation
form. Such an analysis would of course be more compelling if something resulted
other than saving the No-Blur Principle.

F&S make further class distinctions based on conjugation classes for which
certain forms cause tone sandhi of a following syllable. It is not clear how
this should count for the No-Blur Principle: are the conjugations with forms
that cause tone sandhi distinct from those conjugations whose corresponding
forms do not cause sandhi? An answer of ''no'' would eliminate several sub-classes
(in particular, some sub-classes which would not be eliminated by treating the
second person completive as the underlying form of the root).

The authors also present data from Fur, a Nilo-Saharan language of Sudan,
which under their analysis would also constitute a counterexample to the
No-Blur Principle. A few facts suggest that at least some of the variability
may be
phonological (for their insights on this I am indebted to Constance Kutsch
Lojenga and Christine Waag, as well as to Andrew Carstairs-McCarthy).

The verb paradigm of Fur breaks into two parts, based on the subject agreement:
what I will call Part I, consisting of first and second person subjects, and
third person plural [+human] subjects; and Part II, consisting of third person
singular [+human] and third person [-human] subjects.

It is generally agreed that Fur has two phonemic tones, High and Low (although
some posit a phonemically distinct Mid tone). A short vowel can bear a sequence
of two tones, while a long vowel can carry three. The tone sequences which
appear in F&S's table 2.30 consist mostly of two tones, combinations of H and L;
but a few cells contain the sequence written HF or LF. These sequences ending in
'F' actually represent a sequence of three tones, with the F being an HL
sequence. In every case where the second of the two tones is written F, the
tense/aspect suffix is null, and conversely every instance of a null
tense/aspect suffix corresponds to a second tone of F. So most likely the F
represents a stem-final H tone, followed by a tense/aspect suffix whose only
marking is an L tone. The suffixal L tone docks to the left on a mora already
bearing an H tone, resulting in a phonetically falling tone.

If this suffixal L tone is set aside, the stem-final tone for all verbs in Part
I is H. For Part II, the verbs of F&S's conjugation classes I,2 and II,2 have
a stem-final tone of L; all other verbs have a stem-final tone of H in Part II.
It seems plausible that this stem-final tone exhibited with Part II subjects is
the stem's lexical tone. 

For stem-initial tone, things are less clear, but a first cut on the problem
requires referring to the verb prefixes, not shown in F&S's tables. In Part I of
the paradigm, these have the form Ca- before consonant-initial verbs, and C-
before vowel-initial stems. For Part II, the prefixes appear to be null. Now for
all verbs except classes I,1 and II,1, the stem-initial tone is H for Part I and
L for Part II; this is reversed for classes I,1 and II,1. The stem-initial tone
can therefore be accounted for if Part I prefixes bear an H tone for conjugation
classes I,2, II,2, III and IV, but an L tone for classes I,1 and II,1; the Part
II prefixes bear the opposite tones, and therefore consist of tone only, without
segmental material. The patterning of these opposite tones suggests the notion
of polar tone, i.e. a prefixal tone which is simply the opposite of the
following tone -- a concept which has been postulated for other Nilo-Saharan
languages (cf. Yip 2002). If such a polar tone analysis can be made to work, it
would approximately halve the number of conjugation classes.

It appears that F&S's analysis is not the only possibility. Given that the
analysis of Fur is still ongoing, and that there are debated points (such as the
underlying tones on verbs), a theoretical claim dependent on Fur stands on shaky
ground.

To summarize, F&S claim that in some languages, the number of forms of a given
lexeme which must be known in order to assign that lexeme to an inflection class
is more than one, and that the forms needed may vary from one inflection class
to another; in other words, the ''No-Blur Principle'' is falsified, based on data
from Comaltepec Chinantec and Fur. But in one case the data is badly mangled,
and in the other the analysis is still unclear (and the language is the subject
of ongoing documentation). F&S may be right, but their claims cannot be
established on the data they provide. Fortunately, while their article provides
a backdrop for the rest of the book, the other articles do not crucially rely on
F&S's claims.

Ackerman, (James) Blevins, and Malouf (ABM) investigate a related problem, the
''Paradigm Cell Filling Problem'' (PCFP): how does the speaker create (or
understand) inflected forms of words he has never heard before, in particular in
a language with multiple inflection classes or other forms of allomorphy in its
paradigms? This is related to F&S's question.  However, Carstairs-McCarthy's
No-Blur principle is explicitly about affixal morphology, not about
stem-internal changes which may also happen in a paradigm. Often such
stem-internal changes are caused in some sense by the affixes. In Spanish, for
example, a well-known process diphthongizes verb stem vowels when affixation
places the stress on that vowel. The process is lexically governed; some verbs
undergo it, others do not. In other languages, a stem-internal
process may be the sole exponent of some morphosyntactic features. In Pashto,
for example, in a subset of masculine nouns the only indication of the oblique
case singular is a change to the final stem vowel. 

Carstairs-McCarthy considers all such stem-internal changes to be outside the
scope of his No-Blur principle. ABM, in contrast, are concerned with all changes
in a paradigm, whether affixal or stem-internal. ABM use the term ''declension
class'' in this sense (at least for nouns), and this is different from
Carstairs-McCarthy's usage of this term; here I will refer to a class of words
that undergo the same affixal and/or stem-internal changes (abstracting away
from phonologically predictable changes) as a morphological class.

ABM begin from the assumption that a language has a ''set of exemplary
paradigms.'' (Whether this means that each morphological class has as its
exemplary paradigm the completely filled-in paradigm of a single lexeme, or a
set of paradigms of multiple lexemes, is not clear.) Languages also have ''sets
of diagnostic principal parts''; to decide how to inflect a new word, the learner
matches one or more of the new word's principal parts against those of the
exemplary paradigms, and produces the new form by analogy to the chosen
exemplary paradigm. The question ABM address is how a (first) language learner
who has seen a subset of the forms of a given lexeme can assign the lexeme to
one of the morphological classes (i.e. to an exemplary paradigm); and how, given
that assignment, the learner infers the remaining forms. 

Their approach builds on the notion of conditional entropy for a cell of a
paradigm, which is inversely related to the amount of information about the
complete paradigm of a lexeme that is contributed by knowing the value of that
cell for some lexeme.
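
For concreteness, here is the standard definition of the measure (in my
notation; not a facsimile of ABM's formula). The conditional entropy of a
paradigm cell C given a known cell K is

    H(C | K) = - \sum_{k} \sum_{c} p(k, c) \log_2 p(c | k)

where k and c range over the attested realizations of the two cells; the lower
H(C | K) is, the more knowing a lexeme's form for K tells the learner about its
form for C.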

They illustrate their approach with data from Northern Saami, Finnish, and
Tundra Nenets, and conclude that certain subsets of the cells of a paradigm
constitute sub-paradigms, or ''alliances'', within which one form is predictable
from another, whereas the forms of one sub-paradigm may not be predictable
from forms in other sub-paradigms. Such alliances allow learners to predict
rarer forms of a paradigm from more common forms. ABM conclude that paradigms
and their sub-structure (including alliances) are more than the epiphenomena
that they are sometimes assumed to be.

Andrew Wedel's paper explores from an evolutionary standpoint how morphological
patterns, as reflected in inflectional classes, might change over time in the
often opposing directions of leveling and extension; his approach is validated
through computational simulation. One can of course question whether the
necessary simplifications in such simulations invalidate the results, but
simulations seem to me invaluable in showing which factors are essential and
which are less relevant. Absent from Wedel's simulation -- in fact, unneeded --
is any teleological force, such as grammar simplification; the results follow
instead from competition between patterns, with more patterns leading to more
potential sources of error, and thereby change. (The analogy to biological
evolution, where teleology also has no place, should be obvious.)

Gerken, Wilson, Gomez and Nurmsoo (GWGN) approach the problem from the other
side, namely psychology and studies of human analogy making. They point out that
humans are often unable to complete morphological paradigms of unfamiliar
languages unless they are supplied with some additional information on category
membership. They suggest from this that at least for language learning, humans
require more than phonological similarity when deriving analogies, and that the
likelihood of drawing an analogy in paradigms between a new word and a known
word is increased if the known word belongs to a large set of words belonging to
the same paradigm. That is, a new word is assigned to a paradigm in a compromise
between two kinds of analogy: one in which the paradigm choice is based on a
single known word that the new word most closely resembles (which they call the
''proximity effect''), and the other in which the choice is based on a set of
words, at least one of which the new word closely resembles (the ''gang effect'').
This important distinction is too often overlooked in work on analogy. 
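
The contrast is easy to make concrete. In the toy sketch below (the similarity
measure -- difflib's string ratio standing in for phonological similarity --
and the example lexicon are my own inventions, not GWGN's materials), the
proximity criterion consults only the single nearest known word, while the gang
criterion lets a whole class vote:

    # Contrast the ''proximity effect'' with the ''gang effect'': a purely
    # illustrative sketch, not GWGN's experimental design or model.
    from collections import defaultdict
    from difflib import SequenceMatcher

    def similarity(a, b):
        """String similarity in [0, 1]; a crude phonological stand-in."""
        return SequenceMatcher(None, a, b).ratio()

    def assign_class(new_word, lexicon):
        """lexicon: list of (word, inflection_class) pairs."""
        # Proximity: the class of the single most similar known word.
        nearest = max(lexicon, key=lambda pair: similarity(new_word, pair[0]))
        # Gang: the class whose members are collectively most similar.
        gang = defaultdict(float)
        for word, cls in lexicon:
            gang[cls] += similarity(new_word, word)
        return nearest[1], max(gang, key=gang.get)

    lexicon = [("sing", "A"), ("ring", "A"), ("cling", "A"),
               ("blink", "B"), ("wink", "B")]
    print(assign_class("sting", lexicon))   # compare the two criteria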

GWGN also underline, as a topic for future research, the potential importance of
syntagmatic structure (''distributional cues'') in deciding among competing
paradigm classes -- a point which both syntacticians and corpus linguists will
appreciate. However, they claim that ''The categories formed by such an approach
cannot be easily linked to labels such as 'noun', 'verb', etc. Rather, they are
simply groups of words that share similar properties.'' They conclude that this
approach to determining category membership is likely to conflict with the
notion that knowledge of such linguistic categories is innate. There are really
two questions here: (1) are the syntactic categories that linguists postulate
useful in determining the inflection class membership by analogy? and (2) do
unsupervised learning techniques applied to corpora as input arrive at the same
set of categories that linguists postulate? If the answer to (2) is yes, it
might render the innate knowledge superfluous (although the linguists'
categories themselves might still be appropriate, i.e. the answer to (1) might
still be yes). 

Krott contributes a study of interfixes in Dutch and German compounds.
Interfixes are apparently meaningless affixes which appear between the stems of
compounds in some languages. Dutch and German have a variety of interfixes, and
which one is used for any given compound is difficult to predict. In
computational simulations of human behavior, analogy fares better at prediction
than rule-based accounts. The best choice of interfix seems to be based on an
interaction among the compound's head, its modifier, and other factors; which
factors predominate seems to be a language-specific question (and surprisingly,
differs by age group tested). 

One might question whether it is right to lump multiple human decisions on
interfix choice together for comparison with the simulation, on the assumption
that the right thing to model is the grammar of an individual speaker, and that
modeling group behavior might drown any signal. It is also difficult to evaluate
the claim that rule-based approaches don't work without knowing what constitutes
a rule, and how conflicting rules are treated under a given analysis. Krott
gives an example of a rule which predicts the wrong interfix in Dutch, but it is
possible that the rule-based system as a whole might still give the right answer
if other rules bleed the application of this rule in certain cases. And of
course an Optimality Theory approach might also give the right answers without
being either rule-based or analogy-based.

Most work on interfixes has been on Indo-European languages, primarily Dutch,
German, Greek and Polish, leaving obvious gaps. (The term ''interfix'' has been
used in Bantu linguistics, but it appears to have a different meaning there,
more like an infix; cf. Hyman 2002; I thank Michael Marlo for pointing this
out). Krott brings up the Japanese process ''rendaku,'' which behaves in
interesting ways like interfixes. While rendaku has usually been analyzed as
phonological, one might instead analyze it as a process morpheme (similar to the
Mixe verbal affix marking the third person subject by palatalizing the
verb-initial consonant, see Dieterman 2008). If true, this would certainly be an
interesting development.

Goldsmith's approach is quite different from most of the others in this
collection. First, he is far more explicit about how analogy and rules fit in
than most of the others; he is driven to such explicitness by the fact that he
implements his approach computationally (as a model of a corpus plus grammar,
rather than as a model of speaker intuitions). Second, his concept of analogy
owes more to the notion of rules than do most other contributions; as he puts it
(p. 150), ''Analogy ... is an excellent and important source of hypotheses, but it
is not more than that.'' Indeed, one could question whether the outcome of
running his implementation on a data set constitutes analogy at all. (The
outcome is a finite state automaton; not a transducer, because there is, at
least as presented here, no notion of underlying forms.) And third, the fact
that his approach is implemented means that he must deal with cases where the
implementation fails (and there are such cases, particularly with agglutinating
languages).

Despite differences between Goldsmith's approach and what might be considered
''real'' analogy, analogies are at the center of his algorithm; specifically, how
does the system find candidate analogies, and how should it evaluate whether a
given analogy is good or bad? The ''how to find'' question is touched on lightly,
while the evaluation method is explained in more detail. Conceptually,
evaluation is a matter of deciding whether adding a particular analogy (or
perhaps better, a rule) makes the overall analysis simpler. For example, adding
a rule that a large class of words in English can take the -ed suffix simplifies
the overall description of the language, because it eliminates the need to list
all those words in both their suffixed and unsuffixed forms -- at the relatively
small cost of adding the -ed suffix to a lexicon, creating a rule to concatenate
that suffix, and creating a subset of words to which this rule applies (this
subset of course gets re-used with other suffixes). Adding a rule that would
analyze words like 'charge' and 'change' into pseudo-morphemes 'cha', 'n', 'r',
and 'ge', on the other hand, would probably be rejected on the grounds that the
additional rules and morphemes would cost more than would be saved. Importantly,
these costs can be quantified. In short, the approach uses analogy as a
heuristic and cost metric to evaluate analogies; but what comes out in the end
is -- arguably -- rules.
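
The flavor of this quantification can be conveyed with a deliberately crude
sketch. The bit-cost model below (letters at a flat cost plus a per-entry
overhead) is my own simplification for illustration, not Goldsmith's actual
metric:

    # Does positing an ''-ed'' suffix shorten the overall description?
    # The cost constants are invented; only the comparison matters.
    import math

    BITS_PER_CHAR = math.log2(26)   # flat cost per letter
    ENTRY_OVERHEAD = 5.0            # rough cost of listing any entry

    def listing_cost(forms):
        """Cost of simply listing every inflected form."""
        return sum(len(f) * BITS_PER_CHAR + ENTRY_OVERHEAD for f in forms)

    def rule_cost(stems, suffix):
        """Cost of listing each stem once, plus the suffix and one rule."""
        cost = sum(len(s) * BITS_PER_CHAR + ENTRY_OVERHEAD for s in stems)
        cost += len(suffix) * BITS_PER_CHAR + ENTRY_OVERHEAD  # the suffix
        cost += 2 * ENTRY_OVERHEAD   # the concatenation rule and word class
        return cost

    words = ["walk", "walked", "jump", "jumped", "play", "played"]
    stems = sorted({w[:-2] if w.endswith("ed") else w for w in words})
    print(listing_cost(words) > rule_cost(stems, "ed"))   # True: rule wins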

Skousen has also implemented his Analogical Modeling (AM) approach, explicitly
modeling the ''gang effect'' (see GWGN above); that is, analogy is between an
unknown case and a set of known cases, and the larger that set, the more likely
it is to serve as the basis of an analogy. However, other details are less
explicit; one is too frequently referred to his other papers.

It does come out that AM is (or at least can be) computationally intensive; in
fact, Skousen says, it might require a quantum computer. Since it is unlikely
that the human brain constitutes a quantum computer, one expects some discussion
of heuristics (methods that get the right answer most of the time), or of
pathologically bad cases (which might not arise in the real world, or which
might lead to language change when they do arise). Instead, Skousen takes the
approach of limiting the number of variables to be considered (since both memory
and time are said to be exponential in the number of variables). He concludes
(p. 182) that ''we need a principled method of constructing variables so that the
empirically determined relative strength between classificatory types is
naturally achieved.'' This is reasonable; it is also what linguists have been
doing for decades or longer -- for example, by defining limited ways in which
different linguistic modules can interact.

Another issue of computational tractability is said to arise in the context of
defining a phonological context. Given a number of actual words (strings, in the
computational sense) which are observed to act as contexts, the question is how
a general context can be constructed. Such generalization is necessary if new
forms are to be compared to the entire set of words, rather than to individual
words in the set. Skousen suggests that the generalization takes the form of a
limited kind of finite state automaton, but says that constructing the automaton
will require memory and/or time (which of the two, or both, is not clear)
exponential in the number of characters. It is not clear to me where this
requirement comes from; certainly a less limited kind of finite state automaton
can be constructed (and minimized, i.e. generalized) in far less than
exponential memory or time, cf. Daciuk et al. 2000.
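
As an existence proof, the sketch below builds a trie over a word list and then
minimizes it by merging equivalent states bottom-up, in time roughly linear in
the total number of characters. This is the textbook batch construction
(Daciuk et al. 2000 give an incremental version); it is not a reconstruction of
Skousen's proposal:

    # Build a trie, then merge equivalent sub-automata to get the minimal
    # acyclic automaton for the word list.
    def build_trie(words):
        # Each state: {'final': bool, 'edges': {char: state}}
        root = {'final': False, 'edges': {}}
        for w in words:
            state = root
            for ch in w:
                state = state['edges'].setdefault(
                    ch, {'final': False, 'edges': {}})
            state['final'] = True
        return root

    def minimize(state, registry):
        """Share equivalent sub-automata via a bottom-up registry."""
        for ch, child in list(state['edges'].items()):
            state['edges'][ch] = minimize(child, registry)
        # States are equivalent iff same finality and same outgoing edges.
        key = (state['final'],
               tuple(sorted((ch, id(t)) for ch, t in state['edges'].items())))
        return registry.setdefault(key, state)

    def count_states(state, seen=None):
        seen = set() if seen is None else seen
        if id(state) in seen:
            return 0
        seen.add(id(state))
        return 1 + sum(count_states(t, seen) for t in state['edges'].values())

    words = ["walking", "talking", "stalking", "walked", "talked", "stalked"]
    trie = build_trie(words)
    print(count_states(trie))                # states before suffix sharing
    print(count_states(minimize(trie, {})))  # fewer states afterwards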

I was hoping that this paper would provide an explicit account of analogy, and
in particular that it would clarify how analogy-based and rule-based analyses
differ. But Skousen says ''One simplified way to look at AM is in terms of
traditional rules'', and in the end I did not come away with the understanding
that I had hoped for; nor is it clear to me how his system is implemented.

Albright is another computational modeler; like Goldsmith, Albright provides
enough information about his model that one might be able to replicate his
work, although in both cases many details are doubtless not covered in these
short papers.

Albright begins with the important point (see Skousen above) that ''restrictions
on possible analogies should follow from intrinsic properties of the
architecture of the model, and not be stipulated post hoc'' (p. 185). The
specific restriction investigated concerns a distinction Albright calls
''structured similarity'' versus ''variegated similarity.'' Roughly, the issue
concerns the comparison sets from which the behavior of a previously unseen
word is derived: Are the comparison sets limited to words which are similar to
the new word (under some measure of distance -- Albright uses phonological
similarity)? Or do they include all the previously seen words (or at least all
the previously seen words of a given grammatical category)?

The question of structured vs. variegated similarity is answered by comparing
the probabilistic results of two computational models with the statistical
results from human subjects. The answer turns out not to be as clear-cut as one
might hope, but Albright shows that the data favors the ''structured similarity''
model.

I have mentioned several times in this review that the demarcation -- if there
is one -- between analogy and rules is not clear, and different authors seem to
draw the line in different places. Albright is more explicit about this than
most, but his answer seems to straddle the line: he refers to his approach as ''a
rule-based model of analogy'' (p. 211), and ''attribut[es] analogy to a grammar of
rules'' (p. 212). Indeed his system uses what are clearly rules, albeit rules at
a lesser degree of generality than was typical in generative rule-based phonology.
The rules abstract what is common to the comparison sets on which they are
based, namely the commonality in the phonological shape of the words in the
comparison sets. If this is analogical behavior, then analogy looks very much
like rules.

The final paper, by Milin, Kuperman, Kostic and Baayen (MKKB), is perhaps the
most math-heavy paper in the collection. It is similar to the study by ABM,
using information theoretic measures of paradigm complexity (hence the math);
but MKKB evaluate the numbers against experimental data on reaction times, word
naming, etc., in previous studies. This paper thus falls squarely into the
domain of cognitive science, but breaks new ground in that it is directed
towards paradigmatic complexity, whereas most previous work on morphological
processing and lexical retrieval has focused on syntagmatic aspects (the
sequence of morphemes in a particular word). MKKB's model has 26 variables, but
they see this as ''only a first step towards quantifying the complexities of
inflectional processing.''

One conclusion is that ''Lexemes and their inflected variants are organized
hierarchically,'' specifically with a ''layer of lexemes grouped into
morphological families, and a lower level of inflected variants'' grouped by
lexeme. While this is no doubt true (and will not surprise most theoretical
linguists), the unasked question is whether this is the only form of
organization, or whether it might be only one of many cross-cutting ways in
which the lexicon is organized. For example, it might be the case that inflected
forms of adjectives are organized by gender, that is, with adjectives of the
same gender linked across lexemes. What the term ''morphological families'' means
should also be investigated. For many languages (such as Latin), one
traditionally thinks of declension classes as being the organizing principle
behind morphological families, and the fact that most (but often not all) nouns
of one declension class are of the same gender is viewed as a secondary
property. While this makes linguistic sense, it would be interesting to see
whether humans can actually be shown to do that under experimental conditions,
or whether gender (which is certainly more relevant in the syntax) might be a
dominant factor in the organization of nouns -- or whether the mind uses both to
organize its lexicon.

EVALUATION

Analogy has been a notoriously slippery notion in linguistics. Chomsky (e.g.
1966) famously argued, against then-prevailing views, that analogy had no place
in linguistics, while a great many since have argued that Chomsky was wrong. An
unfortunate aspect of this book is that while nearly all the authors seem
committed to the idea that analogy is important -- in some cases to the
exclusion of rule-based analyses -- almost none is explicit about where on that
slippery slope they would draw the line separating analogy from rules. One is
left with the impression that for at least some authors, analogy is more a
slogan than a theoretical approach. Several side-step the question of the
difference between analogy and rules by saying that rules are a kind of analogy.
The editors make this point in their introduction when they suggest that ''a rule
can be understood as a highly general analogy'' (p. 10), a theme which deserves
to be developed further. 

One problem in distinguishing analogy from rules (assuming they are distinct) is
a lack of agreement on exactly what analogy means in linguistics. Consider verb
inflection. At one end of a spectrum from analogy to rules, pure analogy might
mean that we understand the conjugation of a particular verb A by matching it up
with one other verb B that we are more familiar with, based on the fact that A
and B are similar in some respects (such as phonological shape). Toward the
other end of the spectrum, we might understand the conjugation of a particular
verb A by analogy with a whole class of other verbs {B, C, D ... }; the size of such
classes, or rather how the number of classes is decided, is an important
question -- one which comes up frequently in the literature on unsupervised
machine learning, specifically in clustering. Only two authors, Adam Albright
and John Goldsmith, address this question with any clarity; perhaps not
surprisingly, both come from a generative background.

If we were to go further toward this other end of the spectrum, we might
abstract some essential features of the class of verbs {B, C, D ...}; perhaps they
all begin with a consonant, or they all end in a strident consonant. But if we
do that, then we have, I believe, stepped out of the realm of analogy and into
the realm of rule-based linguistics. 

One might also entertain a hybrid theory, in which a learner starts out at one
point along a continuum ranging from pure analogy to rules, and ends up at
another point. When we do field linguistics, most of us do exactly that, with
many points in between.  For example, one might notice at an intermediate stage
that the class of verbs {B, C, D ...} ends in one of the sounds /p t k b d g m n ŋ/,
only later generalizing this extensionally defined class of sounds to the
intensionally defined class of ''consonant.'' (When this stratagem fails, we
resort to arbitrary inflection classes.) A variant might be that learners
construct rule-based morphological analyses, but assign newly learned words to
this or that inflection class based on analogy of form. The papers by Albright
and Goldsmith fall into the hybrid camp: for them, analogy is a stepping-stone,
or heuristic, on the way to a rule-based analysis.

In sum, there are many unanswered questions, but one important task will be
to define in a more formal way what is meant by ''analogy'' and by ''rule.'' If
there is a boundary between these, there is room for debate as to the roles
analogies and rules play; both may be relevant, but in different domains. If
there is no boundary -- in which case the debate is really about regions of a
spectrum -- one might still explore the relative roles of different regions of
that spectrum, for different tasks.  In either case, then, the question of roles
can be asked, and this book is an important contribution to answering it.

I want to close with a comment on typos, some of which are significant. I have
already mentioned the erroneous tables in the chapter by F&S, apparently caused
by a typo at an early stage of analysis. In addition, the listed form for the
verb meaning ''sell'' in their table 2.7 appears to be missing an /n/, but this is
simply a typo and does not affect the analysis. Another important typo mars the
first instance of the formula for entropy in ABM's article on p. 63: the
variable x in the probabilities is in upper case rather than lower, which would
give the wrong result, since the upper case X stands for something else. The
correct version of the formula is given at the bottom of the page. A more
serious problem concerns their table 3.11 summarizing results; this is
referenced in the text but apparently omitted. Goldsmith's explanation of
Bayes' rule ((3) on p. 143) is not comprehensible as stated; it should read ''the
probability of a grammar, given our corpus, is closely related to the
probability of the corpus, given the grammar.'' 
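
For the record, the standard forms at issue (my rendering, not a facsimile of
the book's typesetting) are as follows. The entropy of a random variable X is

    H(X) = - \sum_{x} p(x) \log_2 p(x)

with lower-case x ranging over the outcomes of X; writing upper-case X inside
the probabilities, as in the typo, leaves the sum ill-defined. And the Bayes
relation Goldsmith appeals to is

    p(g | c) = p(c | g) p(g) / p(c), i.e. p(g | c) \propto p(c | g) p(g)

for a grammar g and a corpus c.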



REFERENCES

Cameron-Faulkner, Thea, and Andrew Carstairs-McCarthy. 2000. Stem Alternants as
Morphological Signata: Evidence from Blur Avoidance in Polish Nouns. Natural
Language and Linguistic Theory 18, no. 4: 813-835.

Carstairs-McCarthy, Andrew. 1994. Inflection Classes, Gender, and the Principle
of Contrast. Language 70, no. 4: 737-788.

Chomsky, Noam. 1966. Cartesian Linguistics: A Chapter in the History of
Rationalist Thought. Studies in Language. New York: Harper & Row.

Daciuk, Jan, Stoyan Mihov, Bruce W. Watson, and Richard E. Watson. 2000.
Incremental Construction of Minimal Acyclic Finite-State Automata. Computational
Linguistics 26: 3-16.

Dieterman, Julia I. 2008. Secondary palatalization in Isthmus Mixe: a phonetic
and phonological account. SIL e-Books. Dallas: SIL International.
http://www.sil.org/silepubs/Pubs/50951/50951_DietermanJ_Mixe_Palatalization.pdf.

Hyman, Larry M. 2002. Cyclicity and Base Non-Identity. In Sounds and Systems.
Studies in Structure and Change. A Festschrift for Theo Vennemann, ed. David
Restle and Dietmar Zaefferer, 223-239. Trends in Linguistics. Studies and
Monographs. Berlin: Mouton de Gruyter.

Pace, Wanda. 1990. Comaltepec Chinantec verb inflection. In Syllables, tone, and
verb paradigms, ed. William R. Merrifield and Calvin R. Rensch, 21-62. Studies
in Chinantec languages 4. Summer Institute of Linguistics and the University of
Texas at Arlington Publications in Linguistics.
http://www.sil.org/acpub/repository/24343.pdf.

Yip, Moira. 2002. Tone. Cambridge University Press.

ABOUT THE REVIEWER 

Dr. Maxwell is a researcher in computational morphology and other
computational resources for low density languages, at the Center for
Advanced Study of Language at the University of Maryland.  He has also
worked on endangered languages of Ecuador and Colombia, with the Summer
Institute of Linguistics.





-----------------------------------------------------------
LINGUIST List: Vol-21-2803