LINGUIST LIST review of Emmorey (2002)

Adam Schembri, Deaf Studies Adam.Schembri at BRISTOL.AC.UK
Wed Jun 12 17:38:09 UTC 2002


I'm cross-posting this from the Linguist List.

Cheers,
Adam

------------------------------------------------------------------------
Message 1: Language, Cognition, & the Brain: Insights from Sign Language
Date: Thu, 9 May 2002 10:43:31 +0200
From: Zouhair Maalej <zmaalej at gnet.tn>
Subject: Language, Cognition, & the Brain: Insights from Sign Language Research

Emmorey, Karen (2002) Language, Cognition, and the Brain: Insights from Sign Language Research. Lawrence Erlbaum Associates, xviii+383 pp., paperback, ISBN 0-8058-3399-4, $39.95.

Reviewed by Zouhair Maalej, Department of English, University of
Manouba-Tunis, Tunisia

Language, Cognition, and the Brain (LCB) includes an Introduction, eight chapters, an Epilogue, two appendices (one on handshapes in American Sign Language (ASL), another on communication forms in Nicaragua), an impressive bibliography (40 pages), and two indices (author and subject). True to its subtitle, LCB is about what sign languages can teach us about the nature and properties of language, the structure of cognition, and the workings of the brain. LCB is so rich in reports of experiments that the current review can only appear reductive.

BOOK'S PURPOSE AND CONTENTS
The main thrust of the book, given in the Preface, is the investigation of the key role of sign language in the study of language, cognition, and the brain in general. A synopsis of the chapters' contents is also briefly spelled out.

1. Introduction
Sign language is presented as a backdoor to language, cognition, and the brain. Emmorey addresses some of the fallacies about sign language, namely that (i) there is a universal sign language, (ii) sign languages are made up of gesture and mime, (iii) sign languages are based on speech, and (iv) sign languages can hardly convey the subtleties and semantic complexity that speech conveys. Emmorey also brings evidence from Nicaraguan Sign Language (NSL) to bear on whether sign language develops as a smooth process or a discontinuous, abrupt one, and whether it is the child or the adult who drives the process, concluding that the birth of NSL was a discontinuous, abrupt process in which the child plays the leading role. Emmorey wraps up the Introduction by pointing to the sociolinguistic dimensions of ASL, namely geographic, gender, and register variables, and to the differences between ASL and non-natural sign systems.

2. The Structure of ASL: Linguistic Universals and Modality Effects
The structure of ASL is reviewed at the levels of morphology, phonology, syntax, and discourse. At the level of morphology, ASL is said to differ from spoken language (SpL) in being nonconcatenative. However, as in SpL, ASL word formation involves aspectual inflection, compounding, and derivation. The ASL lexicon is characterised more by iconicity than by arbitrariness, owing to the richness of the visual-gestural modality as against its auditory-vocal counterpart. The lexicon also includes borrowings from English and other languages, which enter via fingerspelling, as well as classifier predicates, which violate the formational constraints of the lexicon.

Although it seems paradoxical to talk of a phonological level for
"soundless" languages, ASL is argued to comprise three basic
phonological
categories: handshape or hand configuration (i.e. whether the palm is
open,
fist closed, or fist closed with the index finger pointing, etc.),
location
or place of articulation (i.e. whether the sign is made at the upper
brow,
the cheek, the upper arm, and horizontally or vertically, etc.), and
movement (i.e. whether the path is straight or arced, and whether
there is
local movement, etc.). To establish a linear segmental structure for signs, location, movement, and handshape are likened to consonant, vowel, and tone, respectively. Handshape change is found to be coordinated with the sign syllable rather than with the sign as a whole. Through a discussion of compounding, it is shown that the feature geometry found in SpL also applies to ASL. Spoken and signed languages are contrasted with regard to their articulators: SpL has one (the tongue), while signed language has two (the two hands, which satisfy either the symmetry condition or the dominance condition). Non-manual expression (such as mouthings and mouth gestures) is said to correspond to prosodic features in SpL.

At the syntactic level, ASL shows an SVO word order, and constituents may be topicalised, subject to the same syntactic constraints as in SpL. ASL also exhibits grammatical nonmanual marking such as facial behaviour ((eye)brow patterns, lip patterns, mouth gestures, etc.). Pronominal signs are said to be compositional, and work by associating referents with a location in signing space when they are physically absent. But such pointing in signing space may give rise to ambiguity as to whether reference is to the entity in space or to the location of the entity, an ambiguity said to be resolved by the surrounding discourse.

At the level of discourse, turn-taking in ASL is regulated through
eye gaze,
but gestures in signing space are also used as attention-getting
devices for
a turn (hand raising, head nodding, etc.). Refusal to relinquish the
floor
is signalled through an upturned palm. Since the modality is visual,
overlap
between two participants may occur. Narrating in ASL marks reporting
through
"referential shifts" or "role shifts" (breaks in eye gaze with the
addressee, possible shifts in head and body position, and changes in
facial
expression). Emmorey (1999b) argues that signers also produce
"component
gestures" (i.e. communicative gestures that are embedded within the
utterance) and "deictic gestures," which alternate with signs, thus
indicating that gesture and sign are not co-expressive.

3. The Confluence of Language and Space
This chapter focuses on how classifier constructions (CCs) are used in ASL to represent physical space and conceptual structure. CCs express motion (The car meandered up a hill), position (The bicycle is next to the tree), stative-descriptive information (It's long and thin), and handling information (I picked up a spherical object). CCs in ASL are said to include (i) whole entity classifiers, (ii) handling and entity classifiers, (iii) limb classifiers, and (iv) extension classifier handshapes. Whole entity classifiers can combine with movement morphemes such as position, motion, manner, and extension morphemes. Handling and entity classifiers can combine with some position, motion, and manner morphemes. Limb classifiers can only combine with manner morphemes. Extension classifier handshapes can only combine with extension movement morphemes.

As CCs have received ample morphological and semantic treatment, Emmorey proposes to study them from syntactic and phonological perspectives. Addressing the question of whether CCs in ASL have gradient or categorical properties, Emmorey finds that, unlike SpL, which expresses spatial information categorically via prepositions and locatives, ASL expresses spatial information via categorical, gradient, and analogue representations. Another possibility for ASL is to combine two classifier predicates in simultaneous constructions. ASL signers are said to adopt intrinsic, relative, and absolute frames of reference within signing space, with the possibility for intrinsic and relative frames to be expressed simultaneously. Emmorey (1999a) reports evidence that the right hemisphere is involved in processing spatial information.

Time lines are based on the FUTURE IS AHEAD, PAST IS BEHIND, and
PRESENT IS
AT THE BODY conceptual metaphors. Deictic time lines (next, tomorrow,
yesterday) are signed with reference to movement away from the
signer's
body, drawing mappings between signing space and temporal structure.
Such
times refer to points in time related to current discourse. Anaphoric
time
lines proceed diagonally, and are used in comparisons or contrasts of
time
periods within discourse. Sequence time lines proceed from left to
right,
and refer to the order of events in time. To encode other abstract
concepts,
signers are said to have recourse to two forms of mapping:
abstract-to-concrete and iconic-to-linguistic. Such mappings are
corroborated by Wilcox (forth. 2002).

4. Psycholinguistic Studies of Sign Perception, Online Processing, and
Production
One aim of this chapter is to investigate how much of language processing and production is universal and how much is modality-specific. Signers are said to show categorical perception for hand configuration, which suggests that language experience affects perceptual categorization. Lexical access has been shown to be faster in ASL than in spoken English, owing to the early recognition of hand configuration, place of articulation, and movement, which are presented simultaneously (versus serially for spoken English), and to the scarcity of signs sharing an initial hand configuration and place of articulation. In terms of lexical organisation, both ASL and SpL have been shown to follow a spreading activation model of recognition, whereby the decision time for the recognition of a lexical item is lower if a semantic associate has just been presented (e.g. PENCIL presented after PAPER, and vice versa). Online processing of ASL has been found to be similar to that of SpL, but sign languages additionally show how spatial location can affect processing and representation in memory.

Owing to its visual-manual modality, sign production allows two distinct simultaneous constructions, as well as dominance reversal constructions, produced by right-handed signers with their left hand and vice versa (for backgrounding parenthetical information or foregrounding contrasts and comparisons). Sign production in ASL is not immune from prearticulatory and postarticulatory editing (a slight hand wave, palm outward, a headshake), indexed by self-interruptions and self-repairs. Signed equivalents of filled pauses like "ums" and "uhs" have also been noted among signers. Because signers do not see or hear themselves while signing, such pauses and edits may differ substantially from those of speakers, who can hear themselves. Whispering (excluding others from one's talk by displacing one's signing to one side) and shouting (in crowded places or when the addressee is at a distance) are common in sign languages.

Lexical retrieval suggests that semantic information is independent of
phonological information. Slips of the hand in ASL may come from
errors in
hand configuration, place of articulation, or movement. The fact that
stranding errors are rare in ASL is accounted for by the
nonconcatenative
nature of its morphological processes. The early repair of errors may
also
be a result of the same processes and the comparative slowness of sign
articulation. The production of manual gestures among signers is also rare, owing to the fact that both hands are usually busy; instead, body and facial gestures produced simultaneously with signing are more common, and seem to have a communicative (rather than metaphoric or facilitative) function.

5. Sign Language Acquisition: Early development
Evidence suggests that speech and sign acquisition go through the same regular developmental stages, suggesting that the underlying neural mechanisms are not modality-specific. Manual babbling (appearing between 10 and 14 months) is syllabic and tends to be reduplicative. Signs are produced earlier than words owing to the earlier development of the motor system and visual cortex as against the vocal tract and the auditory cortex. What seems to cause problems for early signing is getting the hand configuration right. Motherese appears to exist in sign language, and to perform the same function as in SpL. However, because the same (visual) modality is involved both in watching adults sign and in attending to the spatial context in which objects are presented, it might be thought that language development would be impeded by the visual modality itself. Mothers are said to attune their signing to the child's communicative development to avoid such setbacks. The acquisition of syntax and morphology proceeds in almost the same way in speaking and signing children. With very few exceptions, overwhelming evidence suggests that iconicity does not facilitate acquisition among young signers.

5 (cont'd). Sign Language Acquisition: Later development
Fingerspelling is said to be understood and acquired by signing children even when it is rare in the linguistic input. Not until the age of five (or even six) do young signers manage to master signing space, an error-ridden and arduous task for them. Such late and arduous acquisition is due to problems relating to spatial memory, i.e. the difficulty for the young signer of remembering "multiple referent-location associations" (p. 192). Further evidence that iconicity does not facilitate acquisition among young signers even in later cognitive development comes from the late acquisition (between ages 8 and 9) of classifier constructions, owing to the complications they pose to the young signer in terms of handshape, movement, and location. Narrative development, as signalled by referential shifts, is a crowning achievement. Young signers are said to fail to signal through body gesturing and eye gaze the perspective from which the narrative is told, which suggests a hand-before-face developmental pattern. They also seem to master referential shifts for quoting before those for constructed narrative.

6. The Critical Period Hypothesis and the Effects of Late Language
Acquisition
Early and late signers are said to differ in where they spend cognitive effort, on (a) semantic and (b) phonological detail, respectively. Late signers also suffer from morphological and discursive impairments, although some of their processing mechanisms for word order and coreference remain intact. Comparing late acquisition of ASL with second language acquisition, Emmorey argues that "the long-term effects of a delay in first language acquisition appear to be much more detrimental than the effects of acquiring a second language late in childhood" (p. 217), the reason being that the learner of a second language already has the linguistic knowledge acquired during first language acquisition. However, deaf children not exposed to sign language develop home sign gesture systems (mostly iconic and ergative in nature), which have been observed to show regularity across children from different cultures.

Delayed acquisition is said to be followed or accompanied by cognitive
delays rather than deficits. In general, deaf children with deaf
parents
have been observed to outperform deaf children with hearing parents.
In particular, a lack of early language experience is said to affect short-term memory capacity (recall), an effect which fades by adulthood. Deaf children with hearing parents are said to have problems with the development of a theory of mind, i.e. the capacity to attribute mental states (such as beliefs, intentions, desires, emotions, etc.) to people. The causes of such a delay relate to the absence from the child's input of the conversation and syntax that are thought to foster the development of reasoning about mental states.

7. Memory for Sign Language: Implications for the Structure of Working
Memory
Evidence from SpL and ASL suggests that recall in working memory has a phonological rather than a semantic bias: recall of lists of formationally dissimilar items is enhanced, while recall of formationally similar items is impeded. Working memory span in signers is said to be shorter than in speakers, owing to the fact that signs take longer to articulate than words. Spatial coding can be used as a memory device: signs articulated in neutral space are said to be easier to memorise than body-anchored signs. Signers perform equally well in remembering signed numbers forward and backward, suggesting that signers' working memory is not unidirectional in the way non-signers' is. As summed up by Emmorey (p. 240), "speech-based working memory appears to excel at using time to code serial order, whereas sign-based working memory is able to use space to code serial order."

8. The Impact of Sign Language Use on Visuospatial Cognition
Regarding motion processing, signers are said to be more expert than
non-signers at detecting motion, which suggests that this capacity is
enhanced among signers as a compensation for auditory deprivation.
Neuroimaging shows that motion perception is left hemisphere-based.
With
regard to face processing, signers are said to outperform non-signers
in
face perception, and ASL signers are said to have enhanced ability to
identify emotional facial expressions. Concerning mental imagery,
signers
are said to have enhanced performance on mental rotation tasks owing
to
their experience with ASL. Signers have also been found to have a
broader
spatial memory span as compared to non-signers. Emmorey (p. 267)
argues that
there are capacities (motion discrimination thresholds, face
recognition
ability, maintenance of visual images, localization skill,
visuospatial
constructive abilities, etc.) in which signers and non-signers do not
show
differences, which suggests that these capacities do not depend on
sign
language use but relate to general cognition.

9. Sign Language and the Brain
Signers with left-hemisphere damage were observed to perform worse than signers with right-hemisphere damage, which confirms that sign language is lateralised to the left hemisphere and underscores the left hemisphere's importance for sign comprehension and production. With the exception of the puzzling study by Neville et al. (1998), many experiments on brain activity corroborate this finding: cortical stimulation of the left hemisphere in the surgical treatment of seizures, the Wada test, visual hemifield presentation, the dual task paradigm (for both signers and non-signers), event-related brain potentials, positron emission tomography, functional magnetic resonance imaging, etc.

To further establish the specialisation of the left hemisphere for language comprehension and production, Emmorey mentions many other experiments: dissociating sign language from symbolic gesture (sign impairments alongside well-preserved gesture or pantomime), dissociating neural control for motor versus linguistic processes (a preserved capacity to copy meaningless movement sequences alongside linguistic impairment), and dissociating sign language ability from non-linguistic spatial cognition (right-hemisphere damage resulting in non-linguistic visuospatial deficits but not in sign language aphasia). As Emmorey rightly points out, this should not be taken to entail that the left hemisphere is aspatial.

The neuroanatomy of sign language production is thought to be the same as that for SpL: Broca's area. The neuroanatomy of sign language comprehension depends on the temporal lobe independently of modality, with the superior temporal gyrus undergoing functional reorganization in signers to receive visual sign input instead of auditory input. The neuroanatomy of the production and perception of emotional and linguistic facial expressions seems to involve the right and the left hemisphere, respectively. Just as the left hemisphere is not aspatial, the right hemisphere is not alinguistic. The right hemisphere is said to play lexical roles (when words are imageable or have concrete referents) and discursive roles (as when coreference is misused by signers with right-hemisphere damage). Damage to the right hemisphere is also said to disrupt the organisation of signing space.

CRITICAL REVIEW
The book is a comprehensive account of how ASL is used by signers to function in space and to create meaning. Findings about ASL are supported by experiments conducted mainly by the author herself, with associates, or by other researchers. The book is also sprinkled with cross-linguistic references to, and evidence from, British, Chinese, German, Japanese, Jordanian, Swedish, and other sign languages to corroborate the findings for ASL. Each chapter includes a short concluding section (and sometimes implications), wrapping up the main findings. The book is well suited for students of SpL and sign language alike, and a landmark for researchers in sign language at large, and especially in its neuropsychology.

But beyond its (mainly) descriptive and argumentative dimensions, the book contributes insights for cognition and language at large:

(i) Linguistic prewiring principle
Evidence from sign languages shows that constraints on syntactic form (e.g. structure dependency) are universal across modalities (auditory and visual), which suggests that "we are born with the ability to acquire language" (p. 204). This confirms what is known as the innateness hypothesis.

(ii) The poverty-of-the-stimulus principle
Evidence from signed and spoken languages, whereby the role of the input in language acquisition is reduced to minimal intervention from adults, confirms the barley-distillery-whiskey metaphor for Chomsky's "poverty of the stimulus" hypothesis. As Emmorey points out (p. 8), "over 90% of Deaf children are born to parents who do not know sign language," and yet all of them come up with some form of sign language to communicate with.

(iii) The child contribution principle and the critical period
hypothesis
Evidence from sign language corroborates the fact that late language acquisition not only impacts the level of mastery of grammar but also affects language comprehension. Early (native) signers have been observed to outperform late learners. However, home-signing among children reared in an environment without sign input points to the child's contribution: children develop a transitory system of communication even in the presence of very impoverished adult input. The fact that both speaking and signing children are subject to a critical period of language acquisition is evidence that the process is neural and is, therefore, modality-independent.

(iv) Design features of language: arbitrariness and/or iconicity
Evidence from sign languages suggests that, although iconicity does not appear to play a facilitative role in early acquisition or in adulthood, arbitrariness of word forms does not seem to be "a basic or necessary characteristic of human language" either, and that it is "the articulatory and perceptual resources of spoken languages [that] limit the iconicity of spoken words" (p. 17).

(v) The uniformity of language acquisition principle
Evidence from both spoken and signed languages points to the uniformity requirement (i.e. the fact that all children, within the same culture and across different cultures, go through more or less the same processes to acquire a language). Such uniformity across spoken and signed languages is due to "linguistic and cognitive principles that are not specific to the visual-manual or aural-oral modalities" (p. 204).

(vi) Right versus left hemisphere specialisation
Evidence from sign language shows the involvement of the left
hemisphere in
motion perception. A parallel study of gestures and hemispheres by
Feyereisen (1999: 8) indicates that meaningful gestures activate the
left
hemisphere whereas meaningless ones activate regions in the right
hemisphere, although "the systems for gesture imitation and
recognition are
not independent." Motion that is linguistic is meaningful, therefore
activating the left hemisphere. Motion that is linguistically
irrelevant for
signing is relegated to the right hemisphere. This division of labour
between hemispheres, whereby the left hemisphere specialises in
linguistic
matters and the right hemisphere in non-linguistic matters, seems to
be
confirmed by sign language research, although studies of neural
disorders by
Corina (1999: 29) suggest that deficits are the result of "lesions
more
posterior than those observed in users of spoken language" in the left
hemisphere.

A point needs to be made concerning the attribution to ASL of advantages such as the capacity to identify facial expressions, good performance in mental rotation, the ability to generate mental images, etc. Although Emmorey makes clear in many places what the advantages of using a sign language are, perhaps, as she points out in the Epilogue, comparative studies could be used to tell us whether such performance by ASL signers is owed to using a sign language at large or depends on using ASL in particular.

Finally, a very small set of typos should be signalled: to chose (instead of: to choose) (p. 83), more that (instead of: more than) (p. 174), do not to expose (instead of: do not want to expose) (p. 206), may decline due the process (instead of: may decline due to the process) (p. 218).

REFERENCES
Corina, David P. (1999). "Neural Disorders of Language and Movements: Evidence from American Sign Language." In: Lynn S. Messing & Ruth Campbell (eds.), Gesture, Speech, and Sign. Oxford/New York: Oxford University Press, 27-43.

Emmorey, Karen (1999a). "The Confluence of Space and Language in Signed Languages." In: Paul Bloom, Mary A. Peterson, Lynn Nadel & Merrill F. Garrett (eds.), Language and Space. Cambridge/London: The MIT Press, 171-209.

Emmorey, Karen (1999b). "Do Signers Gesture?" In: Lynn S. Messing & Ruth Campbell (eds.), Gesture, Speech, and Sign. Oxford/New York: Oxford University Press, 133-159.

Feyereisen, Pierre (1999). "Neuropsychology of Communicative Movements." In: Lynn S. Messing & Ruth Campbell (eds.), Gesture, Speech, and Sign. Oxford/New York: Oxford University Press, 3-25.

Neville, H., D. Bavelier, D. Corina, J. Rauschecker, A. Karni, A. Lalwani, A. Braun, V. Clark, P. Jezzard & R. Turner (1998). "Cerebral Organization for Language in Deaf and Hearing Subjects: Biological Constraints and Effects of Experience." Proceedings of the National Academy of Sciences 95, 922-929.

Wilcox, Sherman (forth. 2002). "The Iconic Mapping of Space and Time in Sign Languages." In: Liliana Albertazzi (ed.), Unfolding Perceptual Continua. Amsterdam: John Benjamins Publishing Company.

ABOUT THE REVIEWER
The reviewer is an assistant professor of linguistics. His main
interests
include cognitive linguistics, metaphor, pragmatics, cognition-culture
interface, pragmatics-cognition interface, neuropsychology,
psycholinguistics, critical discourse analysis, stylistics, sign
language
and gesture, etc.

----------------------
Adam Schembri
Centre for Deaf Studies
University of Bristol
8 Woodland Rd
Bristol BS8 1TN
United Kingdom
Telephone: +44 (0)117 954 6909
Textphone: +44 (0)117 954 6920
Fax: +44 (0)117 954 6921
Email: Adam.Schembri at bristol.ac.uk
Website: www.bris.ac.uk/Depts/DeafStudies


