16.2233, Review: Cognitive Science/Ling Theories: Jenkins (2004)

LINGUIST List: Vol-16-2233. Fri Jul 22 2005. ISSN: 1068 - 4875.

Subject: 16.2233, Review: Cognitive Science/Ling Theories: Jenkins (2004)

Moderators: Anthony Aristar, Wayne State U <aristar at linguistlist.org>
            Helen Aristar-Dry, Eastern Michigan U <hdry at linguistlist.org>
 
Reviews (reviews at linguistlist.org) 
        Sheila Dooley, U of Arizona  
        Terry Langendoen, U of Arizona  

Homepage: http://linguistlist.org/

The LINGUIST List is funded by Eastern Michigan University, Wayne
State University, and donations from subscribers and publishers.

Editor for this issue: Naomi Ogasawara <naomi at linguistlist.org>
================================================================  

What follows is a review or discussion note contributed to our 
Book Discussion Forum. We expect discussions to be informal and 
interactive; and the author of the book discussed is cordially 
invited to join in. If you are interested in leading a book 
discussion, look for books announced on LINGUIST as "available 
for review." Then contact Sheila Dooley at collberg at linguistlist.org. 

===========================Directory==============================  

1)
Date: 21-Jul-2005
From: Judit Gervain < gervain at sissa.it >
Subject: Variation and Universals in Biolinguistics 

	
-------------------------Message 1 ---------------------------------- 
Date: Fri, 22 Jul 2005 03:38:02
From: Judit Gervain < gervain at sissa.it >
Subject: Variation and Universals in Biolinguistics 
 

EDITOR: Jenkins, Lyle
TITLE: Variation and Universals in Biolinguistics
SERIES: North-Holland Linguistic Series, Linguistic Variations 62
PUBLISHER: Elsevier Ltd.
YEAR: 2004
Announced at http://linguistlist.org/issues/15/15-3577.html


Judit Gervain, Cognitive Neuroscience Sector, 
Scuola Internazionale di Studi Avanzati, Trieste, Italy

1. Introduction
Had Eric Lenneberg written his seminal book "Biological Foundations of 
Language" (1967) today, he most probably would have written something 
quite similar to the present volume. As the frequent references
suggest, the authors of this collective volume were themselves well
aware of the Lennebergian heritage. Although many of Lenneberg's
original proposals have been refuted or reformulated since, the general 
spirit of his ideas still continues to shape the way we think about the
biological aspects of language. His ideas gave rise to what has become 
known today as the biolinguistic enterprise.

The volume offers a very fortunate selection of papers from the 
biolinguistic field, succeeding in blending general and easily accessible 
introductions to more biological or formal research areas that linguists 
and psychologists are less familiar with, such as the correlation between 
genetic and linguistic variation, or the state of the art in the study of 
human evolution, with in-depth analyses and detailed empirical reviews of 
controversial issues, such as the exact nature of the deficit underlying 
Specific Language Impairment (SLI), the mechanisms involved in language 
acquisition or the evolution of language. Many of the papers offer a 
rethinking of some of the received views in the domain. A common 
underlying theme running through all the articles is the attempt to 
achieve a complex understanding of the key issues, with an eye towards an 
ultimate unification with the biological sciences. This genuinely 
multidisciplinary approach gives a unique appeal to the volume.

2. What form should linguistic theory take?
Most of the papers explicitly or implicitly assume, as their theoretical 
background, the Principles and Parameters (P&P) model as most recently 
reformulated in the Minimalist Program (MP; Chomsky 1995). This model 
represents a radical attempt at creating a "minimal" theory of grammar, 
i.e. the most parsimonious possible, based on the underlying assumption 
that syntactic computation, i.e. the core of language, is a nearly
perfect design for mediating between meaning and sound. Most of its
properties are imposed by these two external systems, and syntax itself 
introduces as little of its own material as possible. This intuition, as 
it will become clear, has paved the way for a broad range of novel 
investigations, especially into the evolution of language. Therefore, it 
is crucial to understand the motivations behind this intuition and the
evidence in favor of it.

This is precisely what Noam Chomsky's contribution "Language and Mind: 
Current Thoughts on Ancient Problems" does. First, in a historical 
perspective, he shows how the generative enterprise has proceeded from the 
empirical descriptions of particular constructions in particular languages 
to a more general, abstract and explanatory theory, the MP, ridding itself 
of unnecessary formal machinery. Although the idea of a nearly perfect 
syntax is, as Chomsky himself concedes, surprising, he also claims that 
the theory has not lost in empirical adequacy in the attempt, which 
constitutes a reason to prefer it over the previous, more redundant, less 
parsimonious models. This, however, is not uncontroversial. Pinker and 
Jackendoff (2005), for instance, have recently argued to the contrary,
citing a certain range of phenomena that remained unaccounted for in the 
new framework. Ultimately, it is, of course, an empirical question whether 
the MP will succeed in equaling the former theories in empirical scope.

One relevant issue in this regard is what precise form the correct model 
should take. Building on his previous work, Richard Kayne sets out, in his 
present article "Antisymmetry and Japanese", to provide further empirical 
support for his "antisymmetry thesis". His main proposal (Kayne 1994) has 
been that contrary to apparent cross-linguistic variation, word order is 
underlyingly identical in all languages, conforming to the basic SVO (in 
more technical terms Specifier-Head-Complement) order. The reason why some 
languages exhibit surface orders different from this, e.g. SOV, is because 
several displacement operations apply to the underlying order in their 
syntactic derivations. If true, this surprising and empirically not 
uncontroversial claim provides substantial support to the idea that basic 
syntax is universal and conforms to the needs of the interfaces, in this 
case to the needs of the conceptual-intentional interface by reflecting 
the underlying order of elementary predication. Therefore, in his current 
contribution, Kayne investigates word order phenomena in Japanese, the 
paradigmatic example of a language that shows SOV order on the surface, 
and convincingly argues, by deriving Noun-Postposition and Subordinate 
clause-Complementizer orders through movement, that word order in these 
constructions is epiphenomenal.

3. Language acquisition from a biological perspective
The issue of distinguishing linguistic universals from cross-
linguistically varying properties bears direct relevance to the problem of 
language acquisition. One of the most fundamental commitments, indeed one 
of the original motivations, of the generative enterprise has been the 
claim that language cannot be learnt exclusively from the input (the 
logical problem of induction, also known as the argument from the poverty 
of the stimulus), therefore humans must possess a language learning device 
with a considerable amount of linguistic knowledge already hard-wired in 
it. This approach has been theoretically formulated in the P&P model, 
according to which the principles are universal and need no further 
specification during acquisition, while parameters define (binary) 
choices, the adequate values of which the learner has to set to model the 
target grammar. It has been proposed (e.g. Pinker 1984, Gibson and Wexler 
1994) that the exact mechanism by which parameters are set can be thought 
of as some kind of direct triggering from the input, in which certain cues
direct the learner's choice. As a corollary, it has been argued that 
triggering is fast and effortless, therefore parameters are set very early 
on (Wexler 1998). Under this view, young children's syntactic errors are 
attributable to maturational or "performance" factors (e.g. memory 
limitations in the case of long or complicated sentences etc.).

In the current volume, three articles address the issue of language 
acquisition, two of them, Wexler and Avrutin, proposing accounts that are 
compatible with this classical view, while the third one by Yang 
challenges some of the traditional assumptions from a 
computational/statistical point of view. In the chapter 
entitled "Lenneberg's Dream", Ken Wexler offers a comprehensive overview 
of the theoretical arguments and the empirical evidence for his Optional 
Infinitive (OI) stage theory. The original proposal derives from the 
observation that children's early production contains both finite, 
inflected adult-like verbal morphology and infinitival, non-inflected 
forms that alternate in the same sentential environments. Initially, the 
infinitival forms are dominant, but become less and less frequent with 
time. When inflected morphology appears, it is always grammatical; it is 
never used when an infinitive or a participle would be required, and the 
different inflected forms never replace each other randomly or 
ungrammatically. All these facts taken together imply, argues Wexler, that 
in the developmental phase that he calls the Optional Infinitive stage, 
children do possess the relevant grammatical knowledge, otherwise they 
would use the inflected forms randomly. However, they have difficulties 
exhibiting this knowledge in constructions that require the application of 
certain grammatical operations (movement/feature checking) twice. In 
English, this is precisely what is required to get inflection right. As a 
good control case for the hypothesis, Wexler presents data from languages 
such as Italian and Spanish, in which the same operation is not required 
to apply twice, only once. As the hypothesis predicts, in these languages, 
children never fail to mark the main verb of a sentence for the correct 
person and tense inflection. The constraint that children cannot perform 
certain operations twice, while adults can, can thus be seen as a 
maturational, i.e. performance limitation on an otherwise adult-like 
grammar. In this respect, then, Wexler's OI theory follows the traditional 
path of attributing children's errors to extra-linguistic factors. 
However, the scope of his hypothesis is enlarged by encompassing data from 
behavioral genetics and language pathologies. In accordance with the idea 
that the limitations underlying the OI stage are genetically determined 
maturational constraints, Wexler and colleagues found that the difference 
in the time of onset and the duration of the OI stage is only 3 weeks in 
monozygotic twins, as compared to 13 weeks in dizygotic ones. Moreover, 
investigating the errors English- and Dutch-speaking children with SLI 
make, Wexler observed that their patterns are very similar to those of 
normally developing children in the OI stage, only with poorer overall 
performance. Therefore, Wexler argues that in SLI children, the OI stage 
is maintained for a much longer period of time, sometimes not even fully 
obviated in adulthood. Under this view, then, the SLI grammar is claimed 
to be intact, and what is affected is the performance system.

A similar line of reasoning comparing the linguistic abilities of normally 
developing children and language pathology underlies Sergey Avrutin's 
contribution, entitled "Beyond Narrow Syntax". Starting out from the 
observation that young children interpret the use and distribution of 
pronominal forms differently from adults, Avrutin argues that the behavior 
of these referential elements (pronominals, anaphoric expressions, and
nouns preceded by determiners, which introduce individuals into the
discourse, as well as tense marking, which introduces events into the
discourse) is
governed not by syntax, but by what he calls `discourse', and defines as 
the interface between pure syntactic computation, i.e. narrow syntax and 
the conceptual-intentional (C-I) system. One empirical motivation to 
delegate referential elements to the C-I interface as opposed to syntax 
proper is the existence, even in normal adult language, of special 
registers such as `diary language' where the behavior of referential 
elements and tense marking is different from normal usage, e.g. omissions 
are possible in contexts where they are otherwise ungrammatical etc.
Therefore, even the fully mature language faculty allows for the 
introduction of novel discourse entities through non-syntactic means. 
Interestingly, as Avrutin notes, determiners, pronominals and tense 
marking are precisely those grammatical elements that young children and 
aphasic patients very frequently omit in their linguistic production. 
Importantly, however, when they do employ them, their usage is 
grammatical. Consequently, Avrutin conjectures that the narrow syntax of 
language learners and aphasics is intact, but when the production of a 
certain structure becomes too resource-intensive, their computational 
system breaks down, and the discursive means of expression take over, 
giving rise, as in adult special registers, to characteristic omission 
patterns. Avrutin's model is another example of a theory that takes errors 
to be the symptoms of performance limitations on an otherwise normal 
grammar.

This view is challenged from a formal perspective in the chapter 
entitled "Toward a theory of language growth" by Charles Yang. He argues 
that if the triggering/parameter-setting view of acquisition is correct, 
children's production should change abruptly and categorically as a 
parameter is set one way or the other. Nevertheless, says Yang, this
is not what we find in their productions. Rather, at least in the case of 
certain syntactic constructions, though not all, error rates are initially 
high, and decrease only gradually, without any sign of a dramatic drop 
that would be predicted by the (correct) setting of the relevant 
parameters. The author thus argues that the Very Early Parameter-Setting 
(VEPS) view proposed by Wexler (1998, current volume), according to which 
parameters are set correctly from the earliest observable age (around 18 
months), cannot hold true for all parameters (e.g. the second position of 
the verb in Germanic languages, the obligatory presence of the subject in 
English etc.), although he acknowledges that for some, it does (e.g. word 
order in English, verb raising in French etc.). Instead, he proposes to 
account for the gradual disappearance of errors as a selectional process 
at the beginning of which infants start out with the set of all humanly 
possible grammars, probabilistically choosing one of them for parsing each 
time a sentence comes in from the input. If the chosen grammar is 
compatible with the input sentence, its probability is increased, 
otherwise it is decreased. Over time, grammars with low probabilities are 
eliminated, and learners converge on the target grammar. In this model, 
children simultaneously entertain multiple grammars, and eliminate them 
gradually, which explains the observed error patterns. 
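
Yang's selectional scheme can be sketched in a few lines of code. The
following is a toy illustration only (the grammar predicates and the
learning rate are hypothetical choices, not Yang's actual model): each
candidate grammar carries a probability, one grammar is drawn at random
to parse each incoming sentence, and its probability is rewarded or
punished depending on whether the parse succeeds.

```python
import random

def variational_learner(grammars, corpus, gamma=0.05, seed=1):
    """Toy selectional learner over competing grammars.

    grammars: list of predicates; grammars[i](sentence) is True when
              grammar i can parse the sentence (an idealization).
    corpus:   iterable of input sentences.
    gamma:    learning rate of the linear reward-penalty update.
    Returns the final probability assigned to each grammar.
    """
    rng = random.Random(seed)
    n = len(grammars)
    p = [1.0 / n] * n                            # uniform prior over grammars
    for sentence in corpus:
        i = rng.choices(range(n), weights=p)[0]  # draw a grammar to parse with
        if grammars[i](sentence):                # success: reward grammar i
            p = [pj + gamma * (1 - pj) if j == i else (1 - gamma) * pj
                 for j, pj in enumerate(p)]
        else:                                    # failure: punish grammar i
            p = [(1 - gamma) * pj if j == i
                 else gamma / (n - 1) + (1 - gamma) * pj
                 for j, pj in enumerate(p)]
    return p

# Two toy "grammars" that each parse only one word order; a consistent
# SVO corpus gradually drives nearly all probability onto the SVO grammar.
grammars = [lambda s: s == "SVO", lambda s: s == "SOV"]
final = variational_learner(grammars, ["SVO"] * 200)
```

Because losing grammars are punished only incrementally, the error rate
(the probability of parsing with a wrong grammar) declines gradually
rather than dropping abruptly, which is precisely the developmental
pattern Yang emphasizes.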

As empirical support for the model, Yang discusses how the obligatory 
presence of the subject in English (the non pro-drop property) is 
acquired. The world's languages exhibit three major patterns with respect 
to pro-drop. In languages like English, pro-drop is not allowed. In many 
other languages, such as Italian or Spanish, pronouns can be dropped if 
they are in subject position and agree with the inflected verb. In 
languages like Chinese, subjects can be dropped if they are topicalized.
While it is easy for an English-learning child to exclude the Italian-like 
pro-drop option, since even a few input sentences make it readily
visible that inflectional morphology is poor in English, the Chinese-
like grammar takes much more input to rule out. Therefore, the empirical 
prediction is that English-learning children's initial pro-drop errors, 
before they disappear gradually, will show a Chinese-like pattern. This is 
confirmed by empirical data. Let us note, however, that more evidence is 
needed before Yang's suggestions can be accepted, since a probabilistic 
selectional learning model has no way to explain why all children present 
with highly similar developmental paths (e.g. all make similar kinds of 
errors and produce the same kinds of constructions in the same 
chronological sequence), not only within one language, but also cross-
linguistically. If, at least at the beginning, children were to choose 
randomly from the thousands of possible grammars, wouldn't considerable 
divergence be expected? Also, would the development not heavily depend on 
the nature and the amount of input in such a framework? However, empirical 
evidence (Gleitman and Newport 1995, Wexler 1998, current volume) points 
to the contrary.

4. Language growth: Language change and language evolution
Models of language acquisition determine how language is transmitted from 
one generation to the next. Therefore, they are implicated in the account 
one can offer for how language changes over time, or even how language 
came about during human evolution. Recently, with the advent of the 
minimalist architecture of the language faculty, which has made empirical 
inquiry simpler, and thus more feasible, the issues of language change and
evolution have received increasing attention (e.g. Hauser, Chomsky and 
Fitch 2002).

Since evolutionary and even historical sources are scarce, Partha Niyogi 
in his "Phase Transition in Language Evolution" puts forth a computational 
approach formally modeling language change, with potential implications 
for language evolution. (As it might have already become apparent, 
Niyogi's title is somewhat misleading, as he is not very strict about
distinguishing language change from language evolution.) He proposes to 
treat language in time as a dynamical system. In each generation, 
grammatical variation is determined by a certain distribution of all the 
possible grammars, e.g. in a homogeneous population all speakers have the
same grammar, in a heterogeneous population, some have grammar G1, others 
grammar G2, yet others maybe simultaneously both of them, etc. Language 
arises in the next generation through language acquisition, which is 
driven (within the logical space defined by UG) by the input received from 
the previous generation. This, in turn, is derived from the grammar 
distribution characterizing that generation. In such a model, grammatical 
change depends on two factors, the probability with which speakers of the 
older generation produce examples for a given construction, and the 
overall amount of input to the new generation. 

Crucially, argues Niyogi, gradual changes in these parameters do not 
produce linear changes in the dynamics of the system; rather, as certain
threshold values are crossed, phase transitions occur. Transitions that 
eliminate variation (e.g. a population going from having two grammars to 
having only one) always produce stable states, while transitions in the 
other direction are improbable, occurring only at very high values of
the two parameters, which may result from external influence on the
system, e.g. a massive influx of speakers with a different grammar. Thus,
overall, variation seems to be eliminated, when present, and practically 
never emerges, if originally absent. Although this conclusion constitutes 
an important contribution to the on-going debate about the origin of 
language change (internal or external), it needs further confirmation for 
at least two reasons. Firstly, it goes against the initial variationist 
assumptions of the model. Secondly, and more importantly, it is not self-
evident how to interpret it in the light of known empirical observations 
of historical change, since variation does not always seem to get 
eliminated over time.
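
The threshold behavior Niyogi describes can be illustrated with a toy
two-grammar system (a deliberately simplified sketch; the function and
parameter names here are invented for illustration, not Niyogi's own
formulation). Suppose a learner adopts grammar G1 exactly when at least
one of its input sentences unambiguously cues G1:

```python
def next_generation(g1_fraction, cue_prob, n_examples):
    """One generation of a toy two-grammar dynamical system.

    g1_fraction: share of the current generation speaking grammar G1.
    cue_prob:    probability that a G1 speaker's sentence unambiguously
                 cues G1 (all other sentences are compatible with G2).
    n_examples:  number of input sentences each learner receives.
    Returns the share of G1 speakers in the next generation.
    """
    p_cue = g1_fraction * cue_prob              # chance per input sentence
    return 1.0 - (1.0 - p_cue) ** n_examples    # P(hearing at least one cue)

def iterate(g1_fraction, cue_prob, n_examples, generations=50):
    """Run the generational map and return the final G1 share."""
    a = g1_fraction
    for _ in range(generations):
        a = next_generation(a, cue_prob, n_examples)
    return a

# Below the threshold n_examples * cue_prob = 1, G1 dies out and variation
# is eliminated; above it, a stable mixed population can persist.
dies_out = iterate(0.5, cue_prob=0.1, n_examples=5)    # 5 * 0.1 = 0.5 < 1
persists = iterate(0.5, cue_prob=0.1, n_examples=20)   # 20 * 0.1 = 2.0 > 1
```

Even though all three quantities vary smoothly, the long-run behavior of
the system flips abruptly as their product crosses the threshold, which
is the sense in which such models exhibit phase transitions rather than
linear responses to parameter changes.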

Another way of overcoming the lack of historical and evolutionary evidence 
is to look at cases where language emerges from nothing in the present 
times. Although such events are rare, the creolization of pidgins and the
emergence of sign languages are good cases in point. Judy Kegl's 
article, "Language Emergence in the Language-Ready Brain" gives a detailed 
description of the lexicon and grammar of American Sign Language and 
Nicaraguan Sign Language (Idioma de Senas de Nicaragua, ISN), and offers a 
comprehensive report about the emergence and current state of ISN. This is 
particularly interesting from a biolinguistic point of view, because ISN 
is a genuine case of language emergence de novo. The language was created 
by deaf children who came together in a special school in the early 1980s. 
Kegl's paper describes the Nicaraguan signer population in detail, showing 
that signers who learn ISN early, as a first language (L1), gain access to 
all the production rules encoded in lexical items, while late learners 
memorize them as frozen, monolithic elements. (In most sign languages, 
lexicalized signs are complex and reflect the composition rules of the 
language, much as derivational morphology in spoken language
reflects certain productive patterns.) In addition, Kegl analyzes ISN to 
show that elements come from one of three different sources. Some derive 
directly from simple home-signs and gestures used in the surrounding 
spoken language. Others are taken over from these sources, but are 
subsequently modified and put to a different use in ISN. Yet others do not 
have external origins, and constitute real innovations of ISN. 
Interestingly, as Kegl points out, these are precisely the elements used 
for grammatical purposes. Thus, she concludes that ISN constitutes strong 
evidence in favor of a genetic endowment for language, which can be 
triggered by minimal external input (home-signs and gestures from the 
spoken languages) to create a full language, because the missing 
grammatical system can be filled in by innate content.

Massimo Piatelli-Palmarini and Juan Uriagereka, in their paper 
entitled "Immune Syntax", explore the emergence of language from a more 
evolutionary, but at the same time more speculative perspective. Starting 
out from the minimalist idea of the perfection of syntax, they use the 
analogy of the immune system to claim that the imperfections of syntax, 
which are introduced by lexical and morphological variation, can be seen 
as some kind of a `virus', and syntactic computations work to eliminate 
them as fast as possible. On a biological level, they argue, the virus 
metaphor can be interpreted as syntax evolving through `horizontal' 
mechanisms, a typical example of which is when viral genetic material gets 
integrated into the DNA of the host, and then gets transmitted by regular 
vertical procedures to future generations. Examples of this are rare, but 
attested in our natural history, and the advantage of such a `horizontal' 
scenario, the authors argue, is that it operates on exactly the right time 
scale to explain the emergence of language, whereas most known vertical 
mechanisms cannot account for the recency, as well as the complexity of 
language. Nevertheless, as the authors themselves acknowledge, their 
hypothesis remains, in its current formulation, an evolutionary just-so
story.

Isabelle Dupanloup, in her contribution entitled "Genetic Differences and 
Language Affinities", explores a much less speculative approach to 
language evolution. She starts out by reviewing what is known about the 
evolution of the human line from the common primate ancestors up to the 
peopling of Europe during the Neolithic spread about 10 000 years ago. Her 
state-of-the-art report is surely very useful for the non-biologically
trained readers to better understand what issues are at stake, and what 
evidence we currently have that bears on these issues. This background 
set, she asks the question whether current genetic diversity correlates 
with linguistic variation. She finds that when genetic diversity is 
measured in terms of blood groups and protein variability, close 
correlations obtain (at least, in the populations of Africa, Asia, and 
Europe), but similarities between linguistic and genetic variation are 
much weaker, when genetic variation is measured at the DNA level. This 
suggests, she argues, that there is a common pattern of genetic and 
linguistic divergence during human history, indicated by proteins, blood 
groups and other allelic variants affected by genetic fluctuations as 
rapid as linguistic change. DNA sequences, on the other hand, are subject 
to slower evolutionary change, and still reflect an evolutionary stage in 
which languages had not yet diverged.

5. Specific Language Impairment under the microscope
One of the focal topics in relating language to the genetic endowment is 
the study of genetically-based language pathologies, among which SLI is of 
particular interest. The present volume contains several contributions on 
the issue.

Heather van der Lely ("Evidence for and Implications of a Domain-Specific 
Grammatical Deficit") sets the stage by arguing that G(rammatical-)SLI is 
a domain-specific deficit, since the affected children present with 
problems in the grammatical domain only, but importantly, not in audition, 
general cognition or non-grammatical language skills (pragmatics etc.). 
According to van der Lely's Representational Deficit for Dependent 
Relations (RDDR) account, G-SLI children have a pervasive deficit 
representing hierarchical structure, which surfaces in phonology, 
morphology and syntax alike, all three linguistic levels relying on 
hierarchical structural representations. More formally, in a minimalist 
framework, G-SLI children can be characterized as treating the movement 
operation as optional in cases where it is obligatory in the adult 
grammar. She presents experimental evidence from wh-movement (question 
formation) in English, Greek and Hebrew confirming the predictions. In her 
view, then, SLI affects the core of language.

The idea of the optionality of certain grammatical operations in SLI also 
underlies the other accounts of this pathology in the volume. However,
these papers focus more closely on inflectional morphology. Nevertheless, 
they offer very different explanations.

Ken Wexler, as already mentioned, claims that SLI children are stuck in 
the Optional Infinitive stage during development. Since this phase is 
brought about by performance limitations on an otherwise adult-like 
grammatical system, Wexler, unlike van der Lely, argues that SLI does not 
affect the core of grammar.

Following up on Wexler's OI hypothesis, Laurence Leonard ("Exploring the 
Phenotype of Specific Language Impairment") observes that SLI children do 
indeed omit inflectional morphology. However, as he points out, they do it 
more often than linguistically matched normally developing children. 
Therefore, he argues that SLI children are more sensitive to the resource 
demands of sentence formation and the processing loads imposed by complex 
constructions. Supporting this view, he presents experimental results 
showing that children with SLI produce more uninflected forms in 
ditransitive than in simple transitive constructions.

Myrna Gopnik ("The Investigation of Genetic Dysphasia") gives a different 
analysis of optional morphology in SLI, or, in her terminology, genetic 
dysphasia. In her view, SLI children lack sublexical, i.e.
morphological, features in their linguistic representations. Therefore,
they cannot decompose polymorphemic forms; they can only memorize them as
unsegmented chunks. So whenever they produce a morphologically complex 
inflected form, they have retrieved it from memory. This view, she argues, 
is further confirmed by the observation that in morphologically rich 
languages such as Greek or Japanese, SLI children have problems not only 
with inflections, but also with allomorphic variations of the roots.

6. Language and the brain
Bridging the gap between genetics and behavior, some of the contributions 
in the volume investigate the brain representations and functions 
underlying language.

Alfonso Caramazza and Kevin Shapiro report the case studies of aphasic 
patients who present with a classical dissociation between verbs and 
nouns, but, interestingly, in different modalities. One patient shows poor 
performance on verbs in the oral modality, while another patient shows
a similar deficit for verbs, but only in writing. The authors take the 
dissociation of modalities to be strong evidence in favor of a 
specifically grammatical deficit, since the underlying semantics (and all 
other non-grammatical factors such as imageability etc.) are the same for
both modalities, e.g. the semantic complexity of the verb to `give' is the 
same in speech and writing. Nevertheless, they don't make it clear why the 
same argument doesn't apply to the grammatical properties. In addition to 
naturally occurring pathologies, an experimental technique called 
Transcranial Magnetic Stimulation (TMS) is available to induce temporary 
`deficits' by knocking out the functions of the stimulated brain area.
Using this technique, similar dissociations could be obtained in 
linguistic performance. On the basis of the patient and the TMS data, 
Caramazza and Shapiro conclude that verbs and nouns are represented in the 
brain by distinct circuits, the former probably in the superior, anterior, 
the latter in the inferior, posterior part of the left frontal cortex.

Another very common issue in neurolinguistics is the localization of 
syntax in the brain. As Yosef Grodzinsky, in his contribution "Variation 
in Broca's Region", points out, the old view according to which syntax is 
in Broca's region is untenable, because, on the one hand, Broca's region 
is also implicated in non-linguistic functions, and, on the other hand, 
syntax has been shown to recruit other brain areas, too. However, as he 
argues, presenting evidence from patient and brain imaging studies, it
seems to be the case that Broca's area is indeed responsible for the
syntactic
operation of the movement of phrasal constituents. Therefore, Grodzinsky 
concludes, the idea that language is modular at the neural level is 
supported.

7. Unifying biology and language: still a long way to go?
Beyond their empirical and theoretical merits, the papers are also of 
interest from a foundational or epistemological point of view. Implicitly 
or explicitly, they are all attempts to bring formal linguistic theory 
closer to the study of the more biological aspects of language. The idea 
of such a unification was first raised by Chomsky (e.g. 1968) and has been 
gaining popularity ever since. In some domains, such as psycholinguistics 
and neurolinguistics, progress has been considerable. Other areas, like 
the investigation of the evolution of language, have only begun to emerge 
(possibly due to the recent changes in linguistic theory brought about by 
the MP). Today, the possibility of a unification might seem so imminent
that the editor of the volume, Lyle Jenkins, fully dedicates his
contribution ("Unification in Biolinguistics") to this issue. 
Unfortunately, he doesn't go beyond briefly introducing some cases of 
unification from the history of the natural sciences, and some rough 
analogies between the evolution of species and the emergence of language 
diversity.

Although biolinguistics has gone a long way towards unification, so that 
many questions have turned from unsolvable puzzles into formulable 
scientific problems, to use Chomsky's terminology, a lot remains to be 
done on both sides of the gap before we completely understand the path 
from the genome to linguistic behavior in its full complexity.

REFERENCES

Chomsky, N. 1968. Language and Mind. New York: Harcourt, Brace and World.

Chomsky, N. 1995. The Minimalist Program. Cambridge, Mass.: MIT Press.

Gibson, E. and Wexler, K. 1994. "Triggers". Linguistic Inquiry 25(3):407-454.

Gleitman, L. and Newport, E. 1995. "The invention of language by children".
In: Gleitman, L. and Liberman, M. (eds.): Invitation to Cognitive Science,
Vol. 1.: Language. Cambridge, Mass.: MIT Press.

Hauser, M., Chomsky, N. and Fitch, T. 2002. "The Faculty of Language: What 
Is It, Who Has It, and How Did It Evolve?" Science 298:1569-1579.

Kayne, R. 1994. The Antisymmetry of Syntax. Cambridge, Mass.: MIT Press.

Lenneberg, E. 1967. The Biological Foundations of Language. New York: 
Wiley.

Pinker, S. 1984. Language Learnability and Language Development. 
Cambridge, Mass.: Harvard UP.

Pinker, S. and Jackendoff, R. 2005. "The Faculty of Language: What is 
Special about it?". Cognition 95(2):201-236.

Wexler, K. 1998. "Very early parameter setting and the unique checking 
constraint". Lingua 106:23-79. 

ABOUT THE REVIEWER

Judit Gervain is currently a 3rd year PhD student at the Language, 
Cognition and Development Lab, Cognitive Neuroscience Sector, SISSA, 
Trieste, Italy under the supervision of Prof. Jacques Mehler. Her first 
degree is in English Philology, French Philology and Theoretical 
Linguistics from the University of Szeged, Hungary. She wrote her MA 
theses in English Philology and in Theoretical Linguistics about focus-
raising phenomena in Hungarian. She has published several papers in this 
topic. Currently, she is working on language acquisition. Her precise 
research topic is the acquisition of the foundations of syntax in the 
first year of life. At the same time, she continues to do research in 
theoretical linguistics, in the area of left peripheral phenomena (focus 
and wh-raising etc.).




