rote vs rules

Brian MacWhinney macw at CMU.EDU
Wed Oct 14 23:34:12 UTC 1998


A few further comments on the current discussion of automaticity:

1.   Suzanne Kemmer's question was never answered.  She asked why rules
don't get applied to frequent forms, if they are so computationally
efficient.  The answer, I would suggest, is that computational efficiency
is defined over the whole system, not just the individual form.  You don't
save in terms of time to produce "jumped".  However, you don't have to
store all those pesky regular forms and, since the rules are running all
the time anyway, you "get jumped for free".  Of course, the real problem
here is that evidence for a cycle of rules of the SPE type is nonexistent.
So language-as-rules people like Pinker decided to give up the battle for
generating forms from minor rules and staked their claim on a defense of
what I call "kinder gentler rules" such as "add -ed".

2.  The analysis that Liz and others have proposed is basically what
MacWhinney 1978 and then Menn and MacWhinney 1983 offered as the
three-factor account based on rote, analogy, and combination.
Connectionism came along in the 1980s and showed how analogy works.  Rote
is obviously alive and well.  Combination has taken a few hits, but is
probably not down for the count.  It will get resurrected when
connectionist models become more neuronally realistic.  I don't think that
we will ever really need rules.  In fact, I doubt that Larry Barsalou
thinks we need rules of the SPE/cycle variety.
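
For readers who have not seen an analogy mechanism spelled out, here is a
deliberately crude nearest-neighbor sketch (again my own illustration; a real
connectionist model distributes the effect over weighted connections rather
than picking a single best analog).  A novel verb simply inherits the change
made by the stored verb it most resembles:

    def shared_suffix(a, b):
        """Length of the final substring the two forms share."""
        n = 0
        while n < min(len(a), len(b)) and a[-(n + 1)] == b[-(n + 1)]:
            n += 1
        return n

    def past_by_analogy(verb, exemplars):
        """Pick the most similar stored verb and transplant its change."""
        analog = max(exemplars, key=lambda p: shared_suffix(p, verb))
        n = shared_suffix(analog, verb)
        return verb[:len(verb) - n] + exemplars[analog][len(analog) - n:]

    exemplars = {"cling": "clung", "sing": "sang", "hug": "hugged"}
    print(past_by_analogy("spling", exemplars))  # -> "splung", on "cling"

The point of the sketch is only that a "splung"-type response falls out of
similarity to stored forms, with no rule anywhere in sight.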

3.  I agree with Joyce that language is a skill.  However, the devil is in
the details.  If we fail to recognize the fundamental difference between
word learning and syntactic automatization, I am worried that we could go
down some false paths.  The routinization of the word is supported by a
tightly predictive association between audition and articulation.  When we
hear a new auditory form, it appears that we use the phonological loop on
some level to store it.  As we then attempt to match to this form using our
own articulations, we convert a resonant auditory form to an entrenched
articulatory form.  Work by Baddeley, Gupta, Cowan and others has taught us
a great deal about the details of this process.  Yes, you can use ACT-R to
model this, but you will be using a restricted subset of ACT-R and the
process of deciding which restricted subset applies is the whole of
the scientific process of understanding the neuronal mechanics of word
learning.

Trying to use a model of word learning as the basis for understanding the
automatization of syntactic patterns strikes me as quite problematic.  The
central problem is that predicates have open slots for arguments.  Words,
as Wally notes, are largely slot-free (of course there are exceptions, such
as infixes etc.).  I tend to think of this level of skill automaticity in
terms of Michael Jordan faking out Karl Malone in the last points of the
final game of the NBA finals.  Jordan clearly has a flexible set of plans
for dunking the ball into the basket against the opposition of a defender.
What is automatic in his actions is the move from one state to the next.
The skill is in the transitions.  It strikes me that sentence production is
like this and that word level articulation is basically not.
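
To make the slot contrast concrete, a toy sketch (my own notation, purely
illustrative): a word is a fixed form you retrieve and articulate as a whole,
whereas a predicate frame carries open argument slots that have to be bound
on the fly before anything can be uttered:

    word = "jumped"    # slot-free: a fixed form, retrieved whole

    # A predicate frame with open argument slots (toy notation, mine):
    give_frame = ("{agent} gave {recipient} {theme}",
                  ("agent", "recipient", "theme"))

    def produce(frame, **fillers):
        template, slots = frame
        missing = set(slots) - set(fillers)
        if missing:                # every slot must be bound before speaking
            raise ValueError(f"unfilled slots: {missing}")
        return template.format(**fillers)

    print(produce(give_frame, agent="Jordan",
                  recipient="Malone", theme="a head fake"))
    # -> "Jordan gave Malone a head fake"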

Saying that we have stored syntactic frames tends to obscure this contrast.
The claim is typically grounded on results from a nice set of studies from
Bock and her colleagues.  But I would suggest that these studies do not
demonstrate syntactic persistence, but rather that lexical persistence produces
priming of closely competing syntactic options.  Barbara Luka presented a
nice paper on syntactic persistence at CSLD-4 and mentioned work by Joyce
demonstrating similar effects.  However, I don't think this work has yet
yielded a clear view of what syntactic persistence really might be.  Is it
a genre effect?  Does it involve a passive tape recorder that influences
acceptability, but has no direct effect on production?  Is it really
lexically driven?  Many questions remain.

I would say that the delineation of the contrast between lexical and
syntactic automaticity and productivity should be a top-level research
agenda item for functionalists and psycholinguists alike.  The great thing
about all of this is that the issues are easily open to experimentation and
modeling.  And, as Joan Bybee, Tom Givon, and others have been showing,
they make clear predictions regarding typology and change.

--Brian MacWhinney


