Seminar: Evelina Fedorenko and Ted Gibson, 19 October 2012, BLRI, Marseille

Thierry Hamon thierry.hamon at UNIV-PARIS13.FR
Fri Oct 5 19:10:58 UTC 2012


Date: Wed, 3 Oct 2012 17:01:09 +0200
From: Nadéra Bureau <nadera.bureau at lpl-aix.fr>
Message-ID: <007f01cda177$ec1e47c0$c45ad740$@bureau at lpl-aix.fr>


Brain & Language Research Institute

Friday, 19 October 2012

11:00, Salle des Voûtes, Fédération de Recherche 3C (Behavior, Brain,
Cognition)

3 place Victor Hugo, Marseille (Labex BLRI)

Evelina FEDORENKO (MIT)

Abstract: What cognitive and neural mechanisms do we use to understand
language?  Since Broca's and Wernicke's seminal discoveries in the 19th
century, a broad array of brain regions has been implicated in
linguistic processing, spanning the frontal, temporal, and parietal
lobes, both hemispheres, and subcortical and cerebellar structures.
However, characterizing the precise contribution of these different
structures to linguistic processing has proven challenging.  In this
talk I will argue that high-level linguistic processing, including
understanding individual word meanings and combining them into more
complex structures and meanings, is accomplished by the joint engagement
of two functionally and computationally distinct brain systems.  The
first comprises the classic "language regions" on the lateral surfaces
of the left frontal and temporal lobes, which appear to be functionally
specialized for linguistic processing (e.g., Fedorenko et al., 2011;
Monti et al., 2009, 2012).  The second is the fronto-parietal "multiple
demand" network, a set of regions engaged across a wide range of
cognitive demands (e.g., Duncan, 2001, 2010).  Most past neuroimaging
work on language processing has not explicitly distinguished between
these two systems, especially in the frontal lobes, where subsets of
each system reside side by side within the region referred to as
"Broca's area" (Fedorenko et al., in press).  Using methods that surpass
traditional neuroimaging methods in sensitivity and functional
resolution (Fedorenko et al., 2010; Nieto-Castañon & Fedorenko, in
press; Saxe et al., 2006), we are beginning to characterize the
important roles played by both domain-specific and domain-general brain
regions in linguistic processing.

------------------------------------------------------------------------

Friday, 19 October 2012

16:00, Salle des Voûtes, Fédération de Recherche 3C (Behavior, Brain,
Cognition)

3 place Victor Hugo, Marseille (Labex BLRI)

Ted GIBSON (MIT)

The communicative basis of word order

Abstract: Some recent evidence suggests that subject-object-verb (SOV)
may be the default word order for human language.  For example, SOV is
the preferred word order in a task where participants gesture event
meanings (Goldin-Meadow et al., 2008).  Critically, SOV gesture
production occurs not only for speakers of SOV languages, but also for
speakers of SVO languages, such as English, Chinese, and Spanish
(Goldin-Meadow et al., 2008), and Italian (Langus & Nespor, 2010).  The
gesture-production task therefore plausibly reflects a default word
order independent of the native language.  However, this leaves open the
question of why there are so many SVO languages (41.2% of languages;
Dryer, 2005).  We propose that the high percentage of SVO languages
cross-linguistically is due to communication pressures over a noisy
channel (Jelinek, 1975; Brill & Moore, 2000; Levy et al., 2009).  In
particular, we propose that people understand that the subject will tend
to be produced before the object (a near universal cross-linguistically;
Greenberg, 1963).  Given this bias, people will produce SOV word order
(the order that Goldin-Meadow et al. show is the default) when there are
cues in the input that tell the comprehender who the subject and the
object are.  But when the roles of the event participants are not
disambiguated by the verb, the noisy-channel model predicts either (i) a
shift to SVO word order, to minimize the confusion between SOV and OSV,
which are minimally different; or (ii) the invention of case marking,
which can also disambiguate the roles of the event participants.  We
test the predictions of this hypothesis and provide support for it using
gesture experiments in English, Japanese, and Korean.  We also provide
evidence for the noisy-channel model in language understanding in
English.
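The intuition behind prediction (i) can be sketched as a toy
noisy-channel calculation.  The noise rates below are hypothetical
illustration values, not figures from the talk; the only assumption
carried over from the abstract is that confusing two adjacent words
(SOV vs. OSV) is more likely than confusing two non-adjacent ones
(SVO vs. OVS), so SVO is more robust when the verb does not
disambiguate the roles.

```python
# Toy noisy-channel sketch (hypothetical values, for illustration only).
# Swapping the subject and object in transmission flips who did what to
# whom whenever the verb itself does not disambiguate the roles.

P_ADJACENT_SWAP = 0.10   # assumed noise rate for two adjacent words
P_DISTANT_SWAP = 0.01    # assumed (lower) rate for non-adjacent words

def role_confusion_prob(order: str) -> float:
    """Probability that noise swaps S and O in the given word order."""
    s, o = order.index("S"), order.index("O")
    # S and O are adjacent in SOV/OSV, but separated by V in SVO/OVS.
    return P_ADJACENT_SWAP if abs(s - o) == 1 else P_DISTANT_SWAP

for order in ("SOV", "SVO"):
    print(order, role_confusion_prob(order))
```

Under these assumed rates, SVO yields a lower chance of role confusion
than SOV, which is the direction the noisy-channel account needs.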

-------------------------------------------------------------------------
Message distributed by the Langage Naturel list <LN at cines.fr>
Information, subscription: http://www.atala.org/article.php3?id_article=48
English version          : 
Archives                 : http://listserv.linguistlist.org/archives/ln.html
                           http://liste.cines.fr/info/ln

The LN list is sponsored by ATALA (Association pour le Traitement
Automatique des Langues)
Information and membership: http://www.atala.org/
-------------------------------------------------------------------------


