Competence vs. Performance: Summary

Ken Wexler wexler at MIT.EDU
Thu Oct 18 16:06:27 UTC 2007


Dear Friends,

Sigh. Every bone in my body says not to reply to this discussion, 
generated in emotion, and so often (certainly not uniformly) lacking 
factual basis and reasoned discourse, especially whenever people 
start discussing Noam Chomsky. The question of the current influence 
and legacy of Noam Chomsky, the man whose influence is felt in every 
corner of the scientific study of language today (and much of 
cognitive science more generally), is just silly, and that issue 
doesn't need me to comment on it. But the attack on the field of 
language acquisition, while equally without basis, equally 
non-factual and non-reasoned, might confuse students. I know that 
those around in the 1970's, who reacted strongly to the approach of 
generative (i.e. scientific) grammar, who are still making the same 
false or irrelevant claims today, will not be convinced by empirical 
reality and logical argumentation. But I worry about the students, or 
the younger researchers and professors. Are they confused by all 
this? Do they really think that it's true that the field of language 
acquisition is so bad?  That it hasn't achieved significant results? 
That its methodology is mostly bad? That it hasn't made inroads and 
helped to influence the course of the study of language disabilities? 
That it ignores pragmatics and processing? It wasn't my impression 
that anybody could possibly believe these false claims. It's true 
that most of the responders (like me) don't seem particularly young. 
Nevertheless, it might be worth a reply for the young people. 
Finally, reading John Limber's rational, common-sensical and calm 
reply tipped the balance for me. (Since then there have been other 
reasonable responses, including Carson Schutze's and Gary Marcus's.) 
It might be useful to say something. So this is for the young people 
who don't know the field. Welcome to the field. I apologize that this 
will have to go on at some length.

I'll only mention a few of the responses, and touch on a few of the 
issues. Mostly I'd like to defend the field of language acquisition 
and to discuss its current status. I'll also take a shot at trying to 
figure out why there is all this emotion and all these false beliefs.

On the competence/performance distinction: as John and some others 
point out, it is inescapable if one is to do serious scientific 
work. A summary of the discussion said there were no defenders of the 
distinction. (Since I wrote most of this, others have come forth.) 
That can only be because the assumption is so widespread and basic, 
so crucial to advancements in the field, that by far most actual 
workers in the field find it inconceivable to question it and simply 
find the question unproductive (I'll discuss exceptions soon).

We should look at questions like this in a wider scientific context. 
The competence/performance distinction isn't just something used in 
the study of language. It is basic to modern cognitive science. 
Possibly the clearest general statement of the foundational view on 
this is in the work of the late, great vision scientist David Marr 
(for whom the Marr Prize for best student paper at the (US) Cognitive 
Science Society is named). I suggest the first chapter of his book 
for background reading on this issue, with his very clear analogy to 
a cash register. Marr argued that to study any cognitive system 
completely, 4 levels of analysis were necessary: the theory of the 
computation, the representational level, the algorithmic level, and 
the implementational (for humans, biological) level. The theory of 
the computation was a necessary part. Marr explicitly pointed out (in 
other papers, too, in some detail) that the theory of the computation 
was like linguistic theory. The representational level is obvious: 
what are the types of representations? A given computational theory 
might allow different representational types. The algorithmic level 
is similar to what we usually call the "processing" theory, and the 
implementational level is the biological, physical instantiation (for 
a computer, silicon, etc.).

Both linguistic theory and the sub-field called language acquisition 
are constantly discussing all of these levels. If more progress has 
been made on the higher levels, so that less (but not nothing) is 
known about biological implementation, this is probably due to the 
difficulty of the field, especially the experimental difficulty given 
that we are working with humans. We can't, for good ethical reasons, 
do the obvious experiments. But it was inconceivable to Marr and to much 
of modern cognitive science that there would be no theory of the 
computation.

One might think of the competence/performance distinction, in these 
somewhat more detailed terms, as Marr's distinction between the 
computational level and the algorithmic level.

So it's true that most of the contemporary field of language 
acquisition works within such a framework, and major advances have 
been made, in my opinion. (I'll return to this).

O.k., so let's consider alternatives. What actually occurring 
approach has denied this distinction, and how has that approach 
fared? Well, behaviorism denies the distinction. For, say, Skinner, 
there was no such distinction. How has Skinner fared in explaining 
acquisition phenomena, in generating empirical research on language 
acquisition, etc.? Surely no answer from me is required.

In a more contemporary vein, an approach that's often called 
"Connectionism" also usually (not always, I think) denies the 
distinction. So, for example, theories of "Distributed Learning" 
attempt to explain phenomena with a model that is supposed to be 
somewhat related to a model of neurons working together (though very 
often with serious deviations from the real biology), "learning" 
different weights; this was supposed to explain phenomena without 
appealing to a competence/performance distinction. This approach was 
quite active in cognitive science for, what, 25 years? It attempted 
to replace the original cognitive science approach that was 
consistent with Marr's analysis. Obviously this isn't the place to 
summarize the achievements of this approach, and I am not expert 
enough to do that anyway. But let me give my impressions.

First, is this the approach that the critics of language acquisition, 
Dan Slobin, Robin Campbell, have in mind? Is this what they're 
pushing? I don't THINK it is, but if it isn't, what approach do they 
actually have in mind? What serious progress has been made using an 
approach that doesn't include the distinction?

Second, and this is only impressionistic and anecdotal, but I do have 
the impression that Connectionism in this strong form is pretty much 
dying away. Because it didn't work well enough. (I'll return to what 
might be replacing it). I have the impression that many major figures 
in Psychology who adopted this approach to learning for a long time, 
are feeling it hasn't succeeded well enough, and are looking for an 
alternative (I think I know what that alternative is, and will 
discuss it soon). Please remember that I was trained (as a grad 
student) in experimental psychology and mathematical psychology, that 
I work in Brain and Cognitive Science at MIT, where there are many 
cognitive scientists, and that I know many of the senior figures, in 
some cases having gone to grad school with them. On the basis of this 
experience, I see 
a less active push for replacing representational approaches 
completely by connectionist modeling. I could be wrong, and it's not 
a necessary point, but that's my impression.

In fact, what is replacing Connectionism as the hope of the learning 
theorists? It seems to be Bayesian inference. Most of the 
learning-theory energy, including "statistical" learning theory, 
seems to be focused around that type of model. But, of course, 
Bayesian inference is completely compatible with Marr's analysis; in 
fact it requires it. There must be a set of "hypotheses" that are 
considered and selected via the learning mechanism (Bayes' Rule, in 
some computational form). Thus the work of my colleague Josh 
Tenenbaum is completely compatible with these notions. I can't 
imagine how such an approach would attempt to get rid of the 
distinction between performance/processing and competence.
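
For concreteness, here is a minimal sketch, entirely mine and not any 
particular published model, of Bayesian hypothesis selection over 
grammars; the two toy grammars and the likelihood numbers are 
hypothetical. The structural point is what matters: the hypothesis 
space is the competence side, and Bayes' Rule is the learning 
mechanism that operates over it.

```python
# Minimal sketch of Bayesian grammar selection (hypothetical grammars and
# likelihoods). The fixed hypothesis space is the "competence" component;
# Bayes' Rule does the learning.

def bayes_update(prior, likelihood, datum):
    """prior: {grammar: P(g)}; likelihood(g, d) = P(datum d | grammar g)."""
    posterior = {g: p * likelihood(g, datum) for g, p in prior.items()}
    z = sum(posterior.values())
    return {g: p / z for g, p in posterior.items()}

def likelihood(g, has_overt_subject):
    # Toy question: does the target grammar require overt subjects?
    if g == "overt-subject-only":
        return 0.95 if has_overt_subject else 0.05  # null subjects = noise
    return 0.6 if has_overt_subject else 0.4        # both options licensed

beliefs = {"overt-subject-only": 0.5, "null-subject": 0.5}
for datum in [True, True, False, True]:  # did each sentence have a subject?
    beliefs = bayes_update(beliefs, likelihood, datum)
print(beliefs)  # posterior over the two grammars after four sentences
```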

Or consider the statistical approach of Charles Yang (PhD in computer 
science at MIT, now at Yale), who uses a kind of statistical learning 
theory to help explain how grammars are selected. Obviously such an 
approach uses the competence/performance distinction.
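
Again only as a sketch (the two-grammar setup and the linear 
reward-penalty update are my simplification in the spirit of such a 
variational learner, not Yang's actual implementation): candidate 
grammars supply the hypothesis space, and a statistical rule adjusts 
their weights from experience.

```python
import random

# Sketch of a two-grammar variational learner (my simplification). The
# learner samples a grammar, tries to parse with it, and rewards or
# punishes that grammar's weight accordingly.

def variational_step(p, sentence, parses, gamma=0.02):
    """p = current probability of grammar G1 (vs. G2); returns updated p."""
    use_g1 = random.random() < p
    g = "G1" if use_g1 else "G2"
    if parses(g, sentence):  # reward the grammar that was used
        return p + gamma * (1 - p) if use_g1 else (1 - gamma) * p
    else:                    # punish it: weight shifts toward the rival
        return (1 - gamma) * p if use_g1 else p + gamma * (1 - p)

# Usage sketch: p = 0.5; for s in corpus: p = variational_step(p, s, parses)
```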

It goes without saying that the classic mathematical 
learning/learnability theory for language, what's been called 
"learnability theory" (a field I played some role in, e.g. my 
"Degree-2 Theory" with Hamburger and Culicover), assumes the 
distinction. It too used some statistical considerations. Its 
culminating Theorem 9 of chapter 4 proved learnability in the limit 
with probability 1. But it didn't deny the crucial distinction 
between competence and performance.
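
Roughly, and only as a schematic of the criterion (not the exact 
formulation of the Degree-2 work): if $h_n$ is the learner's 
hypothesized grammar after the first $n$ inputs, drawn at random from 
data for the target grammar $G$, then

$$\Pr\big[\,\exists N\ \forall n \ge N:\ L(h_n) = L(G)\,\big] = 1,$$

i.e., with probability 1 the learner eventually settles on a grammar 
generating the target language and stays there.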

Or how about computational approaches to parameter-setting? Gibson 
and Wexler (Linguistic Inquiry, 1994). The distinction was there. But 
then how about some of the responses that argued against their 
approach and for other ways of learning; think of Elan Dresher on cue 
theory, or Janet Fodor on her version of cue theory. Ditto: they use 
the competence/performance distinction. 
The point is that serious computational/mathematical approaches to 
language acquisition use this distinction; it's natural for them. And 
it is hard to find much that doesn't: a few attempts at small 
problems in connectionism, yes, but not a serious, sustained attack 
on learnability. My point is not to argue for any particular 
approach, rather to show the great usefulness of the distinction for 
those who have serious interests in learning. Language is the 
cognitive system par excellence in which to study learning, because 
there are so obviously events of learning; there is cross-linguistic 
variation, so there must be these effects of experience. For other 
cognitive systems it is harder to be sure that there is any serious 
learning component; at the least it is a more subtle question. In 
language it is obvious. That is why we have such serious attempts to 
study learning. There are lots of reasons to say that to date the 
attempts are inadequate, but I know of no reason to think that the 
distinction between competence and performance is a problem rather 
than a help.
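
To make the triggering idea concrete, here is a minimal sketch in the 
spirit of Gibson and Wexler's Triggering Learning Algorithm; the 
parser stub and the parameter encoding are my assumptions, not their 
code. The learner's state is a grammar, i.e. competence, while the 
step-by-step updating is a matter of the learning mechanism.

```python
import random

# Sketch of a triggering learner over binary parameter vectors. On each
# input sentence it changes at most one parameter (Single Value Constraint)
# and adopts the change only if the new grammar parses the sentence the old
# one couldn't (Greediness). `parses` is a stand-in for a real parser.

def tla_step(params, sentence, parses):
    if parses(params, sentence):       # current grammar works: no change
        return params
    i = random.randrange(len(params))  # flip one randomly chosen parameter
    candidate = params[:i] + (1 - params[i],) + params[i + 1:]
    if parses(candidate, sentence):    # adopt only if it now parses
        return candidate
    return params                      # otherwise keep the old grammar

# Usage sketch: g = (0, 0, 1); for s in corpus: g = tla_step(g, s, parses)
```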

These are the kinds of learning theories that people actually work on 
and produce results with (not good enough yet, it should go without 
saying). In summary, actual computational theories of learning, using 
statistical and other information, have traditionally assumed the 
competence/performance distinction and seem to be mostly doing so 
today. The attempt to do without such a distinction led to limited 
results that do not seem really useful for the underlying problem. 
(Note that this is not to say that neural models couldn't be useful; 
they might in principle be quite useful for helping to explain one 
level of the puzzle).

I suspect that these arguments about what actually happens in 
computational learning theory might miss the point of what's 
bothering some of the critics. They don't actually work in learning 
theory and probably don't much like computational/formal 
considerations, so I suspect that actually bringing up real models 
under discussion won't convince them. But if anybody is interested in 
a clear analysis of the actual problem of language learning, and 
wants to know how learning theory works, real models should be under 
discussion. I note a resurgence in attempts at computational models 
of learning grammar, for those parts that must be learned. This is 
welcome. The competence/performance distinction is fundamental to 
these approaches.

Third, most approaches that didn't assume the competence/performance 
distinction didn't actually explain any real empirical phenomena in 
development. It is amazing how many papers have been published 
arguing that there was only one level, more or less the 
neural-network level, and that all of language was "learned" and 
provided as their only empirical basis some experiment on adults. The 
crucial empirical data of, say, grammatical development weren't 
discussed. It's not as if there aren't quite a few significant, 
quite reliable and general data (I'll return to this). But these were 
simply roundly ignored and mostly the claims were made on the basis 
of a few studies with college sophomores. Those of you interested in 
language development, are you satisfied with this?

Possibly the largest exception was the well-known debate on what 
explains past tense overregularization by young children: did one 
need a representational theory, or was connectionism enough? Some 
empirical developmental data came under discussion, even by the 
connectionist camp. I won't get into the details of this difficult 
discussion, and one that I'm not an expert on, but only point out 
that it's a terribly small and non-representative part of the 
language acquisition problem. It just doesn't cut it, with the wide 
range of general phenomena that are understood, to discuss a very 
small part of the problem and believe that that provides a general 
answer. Nevertheless, at least in this discussion, some developmental 
data were actually used by the connectionists. But the harder 
problems, the ones that are more obviously difficult to account for 
via "frequency" arguments, weren't touched.

In summary, connectionist approaches didn't take over the field, 
didn't make contact with most of the problems, didn't make contact 
with most of the important empirical results concerning linguistic 
development and possibly are of waning interest today even to the 
practitioners. (All of this independent of the question of whether 
neural models are useful. Why shouldn't they be?) But I must admit, I 
don't THINK this is what people like Dan Slobin and Robin Campbell 
have in mind anyway. But since there was a very active movement 
pursuing this line of thought, denying the competence/performance 
distinction, and since it was the most active body of work on 
learning that I know of pursuing the lack of a distinction, I thought 
it deserved some attention, including where it went.

Joe Stemberger writes "that the exact division between what is 
competence and what is performance, as well as the criteria that 
distinguish them, are largely unknown after more than 40 years". Of 
course, the "exact division" is not known ahead of time, and 
"criteria" aren't something that's completely given. What is 
competence and what is performance is an empirical matter, a matter 
that can and should be studied empirically. That's how it works in 
cognitive science; why shouldn't it work that way in the sub-branch 
called language acquisition? Criteria are "known" as a matter of 
theory and experiment, and we do have a good idea about them, without 
a lot of disagreement. Thus "memory" considerations are called 
"performance" and "knowledge" considerations are called "competence." 
Of course, which phenomena are which is a matter for the field to 
decide empirically. That's the way science works; that's the way it's 
done in the study of vision, or of cognitive development, etc.; it's 
what Marr, of course, would have envisioned.

Let's take a real example of how empirical issues can be used to 
argue for competence versus performance, an example well known in the 
field, even classic.  (John Limber has already given one, here's 
another one). On the basis of empirical studies of child speech, Nina 
Hyams proposed that children mis-set the null-subject parameter, thus 
explaining the frequent lack of subjects in child speech in English. 
Her theory thus was a matter of knowledge; she hypothesized that 
children (some age before 3) had the wrong (i.e. non-adult) knowledge 
about English. This is competence, all agree, I would think. (This 
doesn't imply that the reason for the lack of knowledge isn't 
performance, it might be, but that's another issue). Now, in the 
first language acquisition class I taught at MIT, Paul Bloom heard me 
lecture on this, and thought it was wrong. He thought that it was 
more likely that kids omitted subjects because of some kind of memory 
bottleneck. Paul thought, and all agreed, so far as I know, that if 
the productions with missing subjects were explained as a production 
problem due to a certain kind of memory bottleneck, the 
explanation would be that kids at this age had a performance 
limitation. So far as I know, there is no dispute that memory 
bottlenecks are performance and parameter values are competence. Both 
sides in the controversy over which is correct agreed on this. The 
issue was, what is the true, empirically true, explanation of the 
missing subjects? It wasn't a dispute over whether there is such a 
thing as competence and performance; both sides agreed with this 
rational foundation for the field.

To his great credit, Paul didn't just say oh, it must be performance, 
it must be memory. Rather he developed a model that would explain the 
subject omission as a matter of memory, and made predictions from 
that model, relating VP size to rate of subject omission and use of 
pronominal versus lexical subjects. He argued that his memory 
bottleneck model could predict these data and that the competence 
theory of missing subjects couldn't. Thus he 
made an empirical argument that missing subjects in children were a 
matter of performance.

I wasn't convinced by this argument. And it wasn't because I thought 
all explanations of child behavior must involve competence; after 
all, it was a traditional argument for many non-adult phenomena that 
it was a performance limitation that explained them. That's one of 
the motivations, presumably, for Paul's analysis. He wanted to 
maintain that kids had the correct knowledge, thus it had to be 
performance that explained the facts.

But I didn't simply state that we know it's competence; how could one 
state that? So I kept thinking about it, as did Nina Hyams, and we 
realized that the memory model that Paul came up with actually gave 
the wrong predictions, especially if one expanded the database and 
looked at the pronominal/empty-subject trade-off and how it changed 
as kids grew older. We also made an attempt to explain Paul's 
observations. On the basis of these empirical data, Nina and I wrote 
a paper arguing that in fact the subject omissions were due to a 
grammatical process, that it was indeed competence.

I've left out all the interesting details. The interested reader can 
find both papers in Linguistic Inquiry, as well as another reply from 
Paul. I also believe that further research has pretty clearly 
demonstrated that null-subjects in non-null-subject languages in kids 
are due to a grammatical process, although not Hyams' original 
analysis (something with which she agrees). Thus the much greater 
prevalence of null subjects in other child languages in Optional 
Infinitive utterances, as opposed to finite utterances (though not 
exclusively, a fact that must be explained), argues in favor of the 
null subjects being allowed because infinitivals typically take null 
subjects. This was the hypothesis I came up with on the basis of much 
research on Optional Infinitives and null subjects. Both Hyams' 
original assumption about a mis-set parameter and my proposal that 
most null subjects are due to the infinitival nature of the verb 
(given the fact that infinitivals typically take null subjects) are 
claims that it is competence that predicts the null subjects in kids.



My point here is not to argue for a particular model; there is a huge 
and fascinating array of work on the topic. Although it's pretty 
clear that the consensus among most of those who actually work on the 
topic, based on all this empirical data, is that it is a grammatical 
phenomenon, my point is simply that it's an empirical issue, as all 
who actually work on the topic agree. The empirical point of view 
works. In general, empirical results should rule; I'm a great 
empiricist in this regard, and I believe that all the complaining 
people do isn't based on empirical reasoning. Just look at Robin's 
original complaint or Dan's response. They are simply ignoring the 
empirical results and telling us what they'd like to see.

Think of what several of the critics have said, that work should 
proceed. E.g., Anat Ninio writes, "I agree with Joe that we should 
simply proceed with doing research and collect information on what 
people actually say." These are calls for doing research in a new 
paradigm (what is that paradigm?). The field as it exists, the 
generative-based field, has a large number of results, real analysis 
and real empirical results, and something to learn as one thinks 
about research. Yes, it's work, but doing science is work. Isn't this 
the way to begin research in language acquisition? If, 50 years after 
the modern founding of the field, the anti-competence/performance 
people are calling for rolling up our sleeves and doing some research 
in the new framework (what is it?), shouldn't we conclude that this 
framework, whatever it is, doesn't easily allow work to proceed? 
Let's see some examples first; then we can discuss whether it's 
possible to have a non-behaviorist formulation of a science without 
the relevant distinctions, as Anat Ninio claims is possible without 
telling us what it is. Gary Marcus rightly points out that this is, 
by definition, behaviorism, and Carson Schutze points out that 
sometimes we model the competence/performance distinction by analogy 
to other distinctions that we agree on. We can't do tremendously 
better than that until we have better performance models, but it is 
something the field works on. And very often the distinctions are 
perfectly clear enough as in the null subject example I have 
discussed.

Performance or competence for any particular phenomenon? An empirical 
issue. End of story. The rest is in the details.

Putting aside the competence/performance distinction, what bothered 
me most in this discussion were the statements that language 
acquisition is in bad shape, hasn't progressed far enough, has far 
too small sample sizes (really? always?). Anybody who actually knows, 
say, the generative-grammar based field of language acquisition knows 
that, although sometimes methodological critiques apply, there are 
also excellent studies, from a methodological point of view. In fact, 
I think it straightforward to maintain that the rise of 
generative-grammar-based studies in acquisition greatly increased the 
empirical and quantitative sophistication of acquisition research. 
Pre-generative studies all too often based their empirical findings 
on an observed example, with no idea how general it was.

Consider, for example, Dan Slobin's important series of edited books 
on cross-linguistic acquisition. I think this is a valuable 
contribution, and a work I often turn to when I know nothing about a 
language, and especially when there are no more detailed studies 
available. Many of the contributions base their arguments on 
observations of examples, without serious quantitative study. This 
definitely doesn't apply to every page of the books, or to every 
author, and I don't want to tar all the contributions with the same 
brush. But it's often a frustrating work to read, because one doesn't 
really know what the empirical facts are after reading it. 
(Nevertheless, often valuable.)

Now, let's look at one of the results about early acquisition that 
has come out of the generative-based literature. Consider the 
Optional Infinitive stage, say before 3 or so (it depends on which 
language and which phenomenon; the details are studied). Consider the 
verb-second languages like German or Dutch. The empirical 
generalization is that kids at a young age in these languages often 
produce non-finite main verbs (completely different from what adults 
typically do; adults do this only occasionally, for special semantic 
purposes, exhortatives and so on). But in German or Dutch, when the 
kid produces a finite verb it strongly tends to be in second 
position; when the kid produces a (for the most part ungrammatical in 
the adult language) non-finite root verb, it strongly tends to be in 
final position. Care is taken to make sure that we can tell for sure; 
there must be 3 constituents available for analysis so that we know 
whether it's 2nd or final.
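
Schematically, the qualitative pattern looks like this (illustrative 
labels only, not actual counts; see the papers for the real numbers):

                       finite verb    non-finite verb
    second position       many             ~0
    final position         ~0              many

The "off-diagonal" cells (finite-final, non-finite-second) are the 
ones that stay near zero in study after study.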

When Poeppel and I published our first (German) study on this (in 
Language), there was one kid's data analyzed. Tremendously small, as we of 
course knew then. But the data was close to perfect; it wasn't a 
question of a statistical tendency, rather there were only a small 
number of observations in the "off-diagonals."

So this data is very regular, in all studies, so far as I know. What 
the data looks like, I argued, is more like psychophysical data. We 
got a large amount of data on one child (though much smaller than in 
later studies), analyzed the heck out of it, and showed regular 
results. This is typically how the field of psychophysics works: 1, 
2, or 3 subjects. It's because of the regularity of the data.

Nevertheless, we knew that it wasn't enough, because we had to make 
sure that we didn't have an unusual kid. The field progressed by 
studying large numbers of kids and analyzing the heck out of their 
data. So Jeannette Schaeffer, Gerard Bol and I produced a study of 
TD (typically developing) and SLI (Specific Language Impairment) 
kids in Dutch, at the appropriate ages, published in Syntax a couple 
of years ago, based on data Bol had collected years ago. If I recall, 
we had 40 TD kids in the sample and 20 SLI kids. There were thousands 
of TD utterances overall, and plenty of Optional Infinitives (we 
measured the rate as a function of age; see the paper). The 
finiteness/word-order correlation (finite in 2nd position, non-finite 
in final) was almost perfect, something like 1% off-diagonal 
observations out of a few thousand TD utterances. (Perhaps 2,000? 
Doing this from memory; see the paper.)

Remarkably regular results. For the most part this phenomenon hadn't 
been studied in pre-generative times and surprised everybody. 
Certainly the beauty and regularity of the empirical phenomena 
weren't known. And certainly there were no studies of the 
quantitative detail of this type of study. (For another example, see 
Amy Pierce's book (first an MIT dissertation) on the pas ('not') + 
verb versus verb + pas correlation, depending on whether the verb is 
finite or non-finite. Her work used Patsy Lightbown's data, Patsy 
having been a student of Lois Bloom, and analyzed the heck out of the 
data.) Lois of course claims that MIT people are only interested in 
"theory" and not empirical results, a total falsehood, based on 
ignoring study after study, experiment after experiment, paper after 
paper. This is another of those 1970's falsehoods, repeated as if it 
were still that era; as I say, it was false even then, way before I 
was at MIT, ignoring, for example, the important early founding book 
of modern experimental psycholinguistics, including acquisition and 
processing, by Fodor, Bever and Garrett. Urban myths die hard, 
especially when there are those interested in perpetuating them.

(While I am at it: Dan Slobin, surely you know better. This is 2007. 
You were disappointed in the lack of semantics in generative grammar? 
You have GOT to be kidding. There is a HUGE and important body of 
work on semantics in generative grammar, at MIT and many, many other 
places. In fact, it's clear that the generative approach is the 
dominant approach to semantics. There's hardly anything else, so far 
as I know. How could it be otherwise, since generative mostly means 
"scientific, explicit"? Dan, what you must be saying is that you have 
a hunch there is another approach to semantics; what else could you 
be saying? But semanticists don't think so, almost completely.)

Back to the empirical situation of modern day language acquisition, 
generative-based language acquisition. These results on finiteness 
versus word order are just the beginning; there are many, many other 
results, quite regular and in many cases quite well known. They are 
very important to understand and explain, and they are the basis for 
some fields of language acquisition. The methodology is excellent, the 
transcripts and observations are done (in most cases) with care, the 
amount of data is huge by the standards of many other parts of 
psychology, and the regularity of the results exceeds almost 
everything I know in cognitive science (with the exception of some 
fields of perception), more than anything else in what we call higher 
level cognition. Brian, you want bigger sample sizes? So provide 
them. THESE studies often have larger sample sizes, and of course, 
the bigger the better. But we have regular results.

I suspect that what some people who are complaining don't like is 
that we have been so successful; the empirical data are quantitative 
and regular, the theories are explicit, the number of people wanting 
to study this stuff and do it is large. We are thriving. What's the 
problem? Could it be better? Surely. I'm constantly complaining, 
because I want us to be better. But evaluate us poorly in comparison 
to other parts of psychology? Surely you are jesting. (See my 
"Lenneberg's Dream" where I say that this part, at least, of language 
acquisition has the "smell of science" and that the data feel more 
like chemistry than like psychology). Good psychologists, in 
cognitive development for example, share my belief in the 
wondrousness of our field's data; they only wish they had data like 
that. At least, the ones who know about it do.

There are many, many more things that aren't known than are known. 
There are major puzzles. There are lots of parts of the field where 
the data isn't regular and we're puzzled. But the field keeps 
attempting to increase its empirical knowledge base, doing better and 
better, while always paying attention to theoretical questions. How 
else could science work?

This is true on the more experimental side as well as the 
naturalistic-data side. We have a much better idea of the time course 
of development of, oh I don't know, many things, say verbal passive 
in English in the work of Christopher Hirsch and myself (there are 
many other examples; I'm just thinking locally for speed). Much of 
this work started with observations by people who weren't 
particularly explicitly generative but who used some ideas about 
language that linguists study. To take one example (besides passive), 
the development of the semantics of determiners got a major start in 
the important experiments of Mike Maratsos and Annette 
Karmiloff-Smith. Neither is a friend of generative grammar, but both 
speak in a language that is familiar to those who study the semantics 
of determiners in generative grammar. Are the slow developments, the 
errors, syntactic or semantic or pragmatic or performance-based? 
These are important questions, approached in a very active discussion 
including people who are generative-grammar based. The "egocentric" 
theory has a place, and there are challenges. But this is just 
science. Go to the BU meeting, say, and you'll find active debate and 
new experiments. 

One of the complaints in the postings is that "generative" approaches 
somehow exclude pragmatic considerations. How could this possibly be 
believed by anybody who knows anything of the field? My great 
semantics/pragmatics colleague Irene Heim has co-taught some seminars 
in acquisition with me in recent years. Her famous and classic work 
on reference one might think of as being more pragmatic than semantic 
(obviously it is both). We (the field) are constantly talking about 
pragmatic considerations in development. Look back at the work on 
Principle B acquisition as related to pragmatic deficiency that I 
spent so many years doing, and at its relation to, say, the pragmatic 
difficulties in the development of determiners that Sergey Avrutin 
and I brought forth, 
and to Sergey's major continuation and expansion of lines of research 
relating to discourse and pragmatics. Or the debate between Tanya 
Reinhart and Yossi Grodzinsky on the one hand, versus me (and Rozz 
Thornton sometimes) on the other hand, on the Principle B pronoun 
errors. I think it's a pragmatic problem, they think it's a 
processing problem. Yes, these can be distinguished in principle, 
though it is hard work trying to find distinct empirical predictions. 
It is an important thing to work on, though. It's not so important 
who is right (though clearly we want to know the answer). But what 
is crucial is the scientific attempt to find empirical phenomena to 
distinguish hypotheses. Or consider a paper Jeannette Schaeffer wrote 
attempting to understand whether a certain SLI phenomenon was 
pragmatic or syntactic, i.e. where the deficiency was. The point 
again, was that it was an empirical issue.

There is so much else in pragmatic development that has been done and 
is being done. How about the very nice beginning (at least) of a 
literature on scalar implicature? The contributions from Penn, lots 
of ongoing work, including theoretical considerations from Danny Fox 
about pragmatic versus syntactic contributions. Or how about Stephen 
Crain's pretty current experiments showing that kids at a certain age 
know what is usually taken to be the "semantic" definition of "or" 
(inclusive) and understand the usual exclusive interpretation as a 
scalar implicature, a pragmatic effect? One can go on and on. 
Probably there is less so far in pragmatics than in syntax in 
development, but it just needs people to work on it. There is no 
issue in principle; it's hard to see what one could be.

Somebody claimed the field was missing an opportunity to say 
something about language impairment. Are you kidding? Do you know 
about the field of language impairment and how work has proceeded? Do 
you know about the generative-based papers in the major journal 
(American, at any rate) in the field, the Journal of Speech, 
Language, and Hearing Research? Do you not know about the major role 
of Rice's and my Extended Optional Infinitive phenomenon (based on 
the TD work I've briefly described) in this journal and this field? 
(Rice is a Professor of Speech, an expert on SLI.) Of course, some 
people might want to question that hypothesis, and you find other 
experts on SLI, professors of speech, e.g. Larry Leonard, who argue 
that the OI phenomenon isn't enough, perhaps in some cases wrong, 
though more of what he says agrees with it than disagrees with it. I 
think it's 
still right as a major phenomenon in English, the data mostly 
corroborate it, and Leonard misses the wonderful explanations that 
the field has produced for e.g. why English has more OI's than Dutch 
does (see the paper in Syntax by Schaeffer, Bol and me that I 
referred to previously).  Leonard, too, uses some aspects of 
generative grammar.

Thus the most important move in the study of SLI, in my opinion, is 
the large extent to which generative grammar based developmental 
results are under discussion; this is not the place to argue for a 
particular theory, though I have in many papers in the impairment 
literature. Or consider Alex Perovic's and my paper that just 
appeared in Clinical Linguistics and Phonetics on certain grammatical 
phenomena in Williams syndrome (raising and binding). Generative 
grammar based empirical language development results have had a major 
impact on the study of language impairment, and many people in the 
speech community welcome these results. At the same time, what has 
been discovered in the speech/impairment community has greatly helped 
us in our attempt to study TD language development in the generative 
framework. What has really happened over the last 15 years or so is 
the way we're cooperating, across fields.

One other exciting example. Bishop, Adams and Norbury recently 
published in a genetics journal an exciting behavioral genetic study. 
They looked at various measures of language impairment in a large 
sample of identical and fraternal twins with language impairment, 
identified at age 4 and tested at 6. They used standardized tests: a 
test of phonological working memory (non-word repetition, NWR), 
vocabulary, and a pre-publication version of the standard test of 
finiteness (based on all those years of research I and the field did 
on the optional infinitive stage, plus our own impairment research) 
that Rice and I have now published (TEGI). Results? Amazing. 
Finiteness, measured by TEGI, had, if I recall, the largest 
heritability component; it was mostly inherited. NWR was also very 
much inherited. Vocabulary was very little inherited, which makes 
sense. Furthermore, in a major move, they did a DeFries-Fulker 
analysis, which showed that the genetic source of finiteness was 
mostly independent of the genetic source of phonological working 
memory (NWR). Conclusion: most likely there are separate genetic 
systems for these 2 disabilities. This very much helps to explain why 
some scientists get different results. There are probably at least 2 
types of language impairment (not necessarily a huge number), one due 
to limited phonological working memory, one due to grammatical 
deficiency. This helps to explain a mystery: how could phonological 
working memory deficiency explain the detailed grammatical phenomena 
we know to occur in SLI? E.g. lots of OI's with, for the most part, 
correct agreement and correct setting of parameters (as in SLI in 
Dutch; see e.g. the paper from Syntax above).
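
For readers unfamiliar with the method, here is a minimal sketch of a 
basic (augmented) DeFries-Fulker regression; the variable names are 
mine, the data would be hypothetical, and real analyses involve score 
transformations and proband selection that I omit. The bivariate 
variant, scoring the co-twin on a different measure than the proband, 
is what lets one ask whether two deficits share genetic sources.

```python
import numpy as np

# Sketch of the augmented DeFries-Fulker (DF) regression. For each twin
# pair, predict the co-twin's score C from the proband's score P, the
# genetic relatedness R (1.0 identical, 0.5 fraternal), and P*R. The
# coefficient on P*R estimates heritability (h^2); the coefficient on P
# estimates shared-environment influence (c^2).

def df_regression(proband, cotwin, relatedness):
    """Return (c2_estimate, h2_estimate) from the augmented DF model."""
    P = np.asarray(proband, dtype=float)
    C = np.asarray(cotwin, dtype=float)
    R = np.asarray(relatedness, dtype=float)
    X = np.column_stack([np.ones_like(P), P, R, P * R])  # 1, P, R, P*R
    beta, *_ = np.linalg.lstsq(X, C, rcond=None)
    return beta[1], beta[3]  # c^2 ~ coeff on P, h^2 ~ coeff on P*R
```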

To Bishop's great credit, she published these results and drew, 
albeit reluctantly, the correct empirical conclusions. Reluctantly, 
because she has mostly thought that one general-purpose psychological 
mechanism, like phonological working memory, could explain SLI. These 
results make it look as if some grammatical phenomena independent of 
this could explain parts of SLI. Bishop has not identified herself as 
part of the generative tradition and probably feels skeptical. But 
the empirical results point in that direction. Remarkable results. We 
should always be aware of the possibility that the finiteness results 
depend on something else, but we don't know what, and there are no 
proposals. (More precisely, there are computational level proposals, 
in particular my Unique Checking Constraint). We know that they DON'T 
depend on phonological working memory, both from earlier TD work and 
from this genetic research.

Wow! Doesn't that excite you? We are moving in on a genetics of 
language, some of which is related to linguistic structures. Look at 
the history of how this happened and what it says about how serious 
cooperation between fields and approaches, taking the issues 
seriously, can lead to results. I was lecturing in my first grad 
class at MIT around 1988 or 1989. I had described how some kinds of 
movement relations (A-chains) appear to be late in acquisition 
(Borer's and my hypothesis). Recently some work had appeared in 
syntax by Jean-Yves Pollock on French, building on earlier 
observations of Joe Emonds, concerning verb movement. This was not 
A-movement. I knew of no related acquisition research, and openly 
speculated on the question of whether young kids knew this type of 
movement, verb movement, giving the example of French verb finiteness 
and the order of pas, basing this suggestion on Pollock's papers. I 
guessed we wouldn't easily be able to observe the relevant phenomena, 
because (as John Limber and others have noted) subordinate clauses 
aren't there at the beginning. Amy Pierce, then a grad student, was 
in the class, as was Juergen Weissenborn, who was visiting MIT from 
Germany. They both went out and looked at data on French they had, 
and they both confirmed early knowledge of verb movement, the 
relation with pas. How could this be, I asked? Where were they 
observing infinitives? Well, Amy said, the kids were actually 
producing non-finite verbs in root position. So empirical results in 
acquisition (about passives and so on), developmental theory (Borer's 
and my work) and a new syntactic idea (Pollock's) led to all this new 
work, up to then only in TD.

Then, of course, we expanded out to all sorts of languages in my lab, 
and the OI stage was born. Others joined in, and we found some hints 
in earlier research too. Mabel Rice, hearing me talk, asked me what I 
thought of SLI and how it could exist; she came to spend a semester 
in the lab, and we decided to investigate SLI together. Ultimately we 
developed a standardized test. Dorothy Bishop, students and 
colleagues used this in behavioral genetic research, presumably not 
caring much about syntax, but syntax, empirical developmental work on 
TD, and many other fields went into the background for all this.

This is how science works, by cooperation, by keeping an open mind, a 
rational mind, by not simply expressing emotion about how the field 
MUST be, but by calmly doing experiment and theory, making errors, 
correcting them. We are on the road to a genetics of language and 
hopefully much more, and the competence/performance distinction and 
the scientific (generative) approach are among the crucial elements 
in all this.

If your ideology says no, o.k. say no. If you are more interested, as 
apparently Robin Campbell is, in what we have to say about 
literature, well, perhaps you should study literature.  Robin writes: 
"it's important to take stock, and the right question to ask is 
'Where are the good outcomes?'. Have the sick been healed? Are 
children better educated? Are there benefits to art or literature? "
(Though there ARE generative-based things that have been said about 
literature. Still, if your tendencies are completely humanistic 
rather than scientific, then perhaps you should think about 
literature. Though I would argue we've learned much about the human 
species from the kind of work I've discussed.)

It's true that we haven't healed kids, but I submit that the results 
of the field might be useful in helping to work with impaired 
children. Surely there is reason to believe that a genetics of 
language impairment might help us in healing. That's how 
science works, slowly. Will we get there soon? Don't know, but we are 
making rather clear progress. What has another approach done?

Will we do imaging research? Will we attempt to understand what the 
brain is doing in all this? Yes, of course. Will we succeed? I don't 
know, but what else can you do?

In summary: language acquisition has made major progress; the 
scientific (generative) approach, including the 
competence/performance distinction, has been crucial to this; the 
field is thriving; and we go on and on and on.

So what is bothering all the critics? Clearly it isn't the state of 
the field, which is doing well. (For students out there who never 
heard of any of this, there is a good textbook, for some of it at any 
rate: Teresa Guasti's. Ignore what your professors tell you about the 
field, especially if they try to argue against fundamental 
distinctions like competence and performance. Read this book and the 
papers.)

In my Plenary talk at the BU conference a few years ago, I tried to 
ask how we were doing as a field, basing my comments on Roger 
Brown's fond hope for the field to do well, given his disappointing 
experience in the other fields of psychology he had worked in (see 
the preface or introduction to A First Language). I concluded that the 
field was doing pretty well, thank you, with major contributions both 
from linguistic theory and clear empirical methodology. People from 
various viewpoints, arguing from different sides of a hypothesis, 
were making progress by providing arguments and data.  I argued that 
theory and experiment were obvious and necessary features of the 
field, how could it be otherwise in science? My impression is that 
people felt drawn together, both those who identified themselves as 
generativists and those who didn't. We are all in this together, 
trying to understand. One fundamental assumption is that we want to 
be scientists. Of course, if this isn't shared, all bets are off. So 
how come there is such hostility?

Here's what might be happening, though now we're talking about 
sociological issues, and I feel on less certain grounds. The field of 
generative-based acquisition is thriving. Is it the mainstream 
approach? It's very hard to say when one is so active and involved in 
the field. A field has friends and enemies. But let me refer to what 
others say. Many who are against the generative approach have 
complained because this approach seems dominant, e.g. many complain 
about Chomsky's influence; we see it in the remarks that initiated 
this discussion. Liz Bates, may she rest in peace, made a career out 
of saying the generative-based approach was dominant, and complaining 
about it and trying to do something else. So our enemies think we're 
dominant, we're mainstream. How about our friends? As I say, it's 
very hard to tell. But I'll note that Teun Hoekstra and Bonnie 
Schwartz, in their 1993 edited book with the papers from the first 
acquisition workshop at GLOW (the major biannual generative 
linguistic theory conference in Europe) wrote that perhaps the 
"theoretical" [i.e. generative] approach finally, after all these 
years, is dominant. And that was 1993; the generative approach has 
only grown since then. So PERHAPS (I admit that it's hard to know) 
the generative approach is dominant, is mainstream. That doesn't make 
it correct. God knows, there have been dominant approaches that were 
dead wrong.

But perhaps that accounts for the emotion among those who haven't 
accepted it. PERHAPS (I agree I don't know) they don't like the 
results, the beautiful results, that generative based acquisition 
studies have attained, studies that go against their grain. Perhaps 
they don't like that so many students want to study generative based 
work. Perhaps they don't like the fact that the applications, to 
language impairment, to genetics, to many other fields, have been 
coming thick and fast, though still far too slow. Perhaps they just 
have the feeling that all this is wrong, and don't know what to do 
about it.

I know most of the people who have responded, have interacted with 
them. I have always wondered why they feel this way, so at odds with 
the facts. I don't know why the emotion is so strong. Perhaps I just 
feel lucky and happy that things have worked. Who knew, all those 
years ago, that it would work out? I for one only knew that it was 
science or nothing; we had to try. Perhaps others aren't happy with 
science coming into a cozy field that could be approached 
humanistically or in some other way. Fair enough. I take it for 
granted that everybody is a good person. We have different interests, 
perhaps. You could say, Ken, why science? Is that ALL there is in the 
world? The answer is no. There is much, much else. I have no doubt I 
(and others who think in more or less the same way about the field) 
are lacking in much understanding that can be appreciated in other 
ways. I am happy for them to study things the way they want to. Why 
are they so unhappy that we do what we do?

Young people, again. If you want to be scientists and help to 
increase understanding of language development, please think about 
all this. If it doesn't excite you, o.k. My apologies for this 
lengthy tome; its only saving grace, perhaps, is that it could have 
been 100 times longer.


