Labex-EFL / Seminars January 2013 / J. Fodor and I. Sekerina: dates, venues, titles
Jacqueline Vaissière
jacqueline.vaissiere at UNIV-PARIS3.FR
Fri Dec 21 18:55:29 UTC 2012
As part of the Labex-EFL 2013 seminar series, please note that the next
sessions will be given, in January 2013, by the following two invited
professors:
- *Janet Fodor*
*Distinguished Professor, Graduate Center, City University of New York*
*07/01/13, 14h-16h*
*Venue: Université Paris 7, 175 rue du Chevaleret, 75013, room 4C92*
*Modeling Language Acquisition: From Math to Biology (Joint work with
William Sakas)*
The modern study of language learnability was founded in the 1960s
following Chomsky and Miller’s remarkable collection of papers on the
formal properties of natural language, including mathematical modeling of
the relations between properties of languages and properties of the
grammars that generate them. The learnability studies that ensued,
including the groundbreaking work by Gold (1967), were mathematical also,
and hence untrammeled by any demands of psychological plausibility. For
instance, under a psychological interpretation of Gold’s learning algorithm
the grammar hypothesized by a learner in response to linguistic input might
bear no relation to the properties of that input. By contrast, we expect
that when children change their grammar hypotheses in response to a novel
input sentence, the properties of that sentence guide them toward a new
grammar which could incorporate it. It would be odd indeed (cause for
concern!) if a child shifted from one grammar to an arbitrarily different
one, just because the former proved inadequate.
For learnability theory to become a contributing partner in the modeling of
real-life language learning, it therefore had to incorporate more realistic
assumptions about languages and language learners. The first noteworthy
study in this tradition was by Wexler & Culicover (1980). Their aim was to
show that it was possible for a linguistically authentic grammar to be
acquired by computational operations of a reasonably modest complexity, on
the basis of a language sample consonant with the language that toddlers
are exposed to. In other words, a learning model for natural language was
to be developed which was computationally sound, but also psychologically
feasible and in tune with current linguistic theory.
No sooner had W&C completed their learnability proof than the linguistic
theory that it presupposed (Chomsky’s Extended Standard Theory) was
abandoned. In 1981, Chomsky introduced his new Government-Binding (GB)
theory, which had universal principles and a finite collection of
parameters as the means to codify cross-language syntactic differences.
Chomsky was not explicit about the mechanism for setting the parameters,
but he evidently regarded it as more or less automatic, instantaneous and
computation-free, offering an explanation for how children acquire a rich
target language in just a few years. Nevertheless, a decade passed before
any parameter-setting model was computationally implemented. Moreover, the
models then developed all departed from Chomsky’s original conception of
triggering, by portraying learning as a process of testing out whole
grammars until one is found that fits the facts of the target language.
Instead of the n isolated observations needed to trigger n individual
parameters, these approaches require the learner to search through the vast
domain of 2^n grammars.
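To put rough numbers on this contrast (an illustrative calculation, not
part of the abstract): with n = 30 binary parameters, trigger-based
learning needs on the order of 30 informative observations, whereas
whole-grammar search confronts

    2^30 = 1,073,741,824

candidate grammars.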
In our work we have tried to hold tight to the notion of input guidance
embodied in Chomsky’s original concept. We maintain that an input sentence
not yet licensed by the learner’s grammar can *reveal* to the learning
mechanism, without resort to extensive testing or trial-and-error, which
parameter settings could license it. In the ‘Structural Triggers’ framework
that we have developed, parameter values are taken to be UG-specified
I-language entities in the form of ‘treelets’ (sub-structures of sentential
trees), e.g., a PP node immediately dominating a preposition and a nominal
trace, which would establish the positive value of the
preposition-stranding parameter. Because treelets are structured entities,
they aren’t directly observable in a learner’s primary linguistic data.
Also, they may contain phonologically empty categories, such as the trace
in the stranded-preposition treelet. What is observable by a learner is
only the ‘E-trigger’ for that parameter value, e.g., a sentence with a
preposition lacking an overt object. On hearing such a sentence, the
learner must somehow recognize it as a manifestation of the abstractly
specified I-trigger. In a structural triggers learning model (STL), this is
achieved by the learner’s sentence processing mechanism (‘the parser’),
assumed to be innate. The parser processes sentences in the usual fashion,
applying the learner’s currently best grammar hypothesis, and upgrading it
on-line if and where it finds that a new treelet is needed in order to
parse an incoming sentence. In this respect, and only this, a child’s
processing of sentences differs from an adult’s. This process of patching
gaps in parse trees makes maximum use of the information that the input
contains, and with the least wasted effort since presumably a child’s
primary goal on hearing a sentence is to parse and understand it, as part
of normal social interaction (What is Mommy saying to me?). Importantly,
the parsing mechanism is what provides the link – notably missing in the
original switch-setting metaphor – between the E-language facts available
to the child and the I-language knowledge that is acquired. This concept of
grammar acquisition as continuous with sentence processing fits well with
the recent shift of emphasis to a conception of the language faculty as a
biological organ, functioning in concert with other perceptual and
conceptual systems.
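For concreteness, here is a minimal sketch of this parse-and-patch loop,
in Python. It is our own toy illustration, not the authors'
implementation: each input sentence is reduced here to the set of
treelets a full parse of it would require, whereas the STL derives that
information from on-line parsing, and all treelet names are invented.

    # Toy sketch of a Structural Triggers Learner (STL) loop -- an
    # illustration only. Hypothetical encoding: a sentence stands in
    # for the set of treelets its parse would require.

    UG_TREELETS = {
        "pp_with_p_trace",  # would license preposition stranding
        "v2_comp",          # would license verb-second order
        "null_subject",     # would license dropped subjects
    }

    def stl_learn(sentences, grammar):
        """grammar: the set of treelets the learner has adopted so far."""
        for required in sentences:            # treelets this parse needs
            missing = required - grammar      # gap detected by the parser
            grammar |= missing & UG_TREELETS  # patch with UG treelets
        return grammar

    # A learner starting from an empty grammar adopts the stranding
    # treelet on hearing a sentence that requires it:
    print(stl_learn([{"pp_with_p_trace"}], set()))  # {'pp_with_p_trace'}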
*09/01/13, 16h-18h*
*Venue: ILPGA, 19 rue des Bernardins, 75005, Salle Mac*
*Center-embedded Sentences: A Phonological Challenge (Joint research with
Stefanie Nickels, McGill University)*
Many explanations have been proposed for the extreme processing difficulty
of doubly center-embedded relative clause (2CE-RC) constructions. We offer
a phonological explanation. We maintain (i) that a sentence cannot be
easily parsed and comprehended if it cannot be assigned a supportive
prosodic contour, and (ii) that the flat structure of prosodic phrasing
corresponds very poorly to the strongly hierarchical syntactic tree
structure of 2CE-RC sentences.
Sentence (1) is a commonly cited example in the literature. (2)
is from the experimental materials of Gibson & Thomas (1999). Both are
typically pronounced awkwardly, with ‘list intonation’. Assigning the
requisite nested syntactic structure is so difficult that such sentences
are often judged more grammatical when the second VP is (ungrammatically!)
omitted, as in (3). This is the ‘missing VP illusion’.
(1) The boy the dog the cat scratched bit died.
(2) The ancient manuscript that the grad student who the new card catalog
had confused a great deal was studying in the library was missing a page.
(3) *The ancient manuscript that the grad student who the new card catalog
had confused a great deal was missing a page.
However, we find that the correct nested structure [NP1 [NP2 [NP3 VP1] VP2]
VP3] can be facilitated by a prosodic phrasing which packages the center
constituents (NP2 NP3 VP1 VP2) up together. Because there are length limits
on prosodic phrases, this is feasible only if the center constituents are
all short, and the outer constituents (NP1 and VP3) are each long enough to
constitute a separate prosodic phrase. This is the case in (4). Both
intuitively and as confirmed by our experimental data, examples like (4)
are easier to pronounce and to understand than examples like (5), which has
the same overall sentence length but has its weight in the wrong places:
skinny outer constituents and fat inner ones.
(4) The rusty old ceiling pipes that the plumber my dad trained fixed
continue to leak occasionally.
(5) The pipes that the unlicensed plumber the new janitor reluctantly
assisted tried to repair burst.
It appears that the critical factor is avoidance of a prosody in which VP2
(tried to repair in (5)) breaks away as a separate prosodic phrase on its
own. We believe the explanation for this is that syntax-prosody alignment
is achieved by *syntactic* readjustment (Wagner 2010, Chomsky & Halle 1968,
contra Selkirk 2000), creating a flatter tree structure. A prosodically
separated VP2 would require (string-vacuous) extraposition of VP2 out of
the relative clause it is in, but extraposition of a finite VP is illicit
syntactically.
*14/01/13, 14h-16h*
*Venue: Université Paris 7, 175 rue du Chevaleret, 75013, room 4C92*
*A New Mis-fit Between Grammars and Parsers*
In the early days of generative grammar, the study of sentence processing
suffered a disappointment: the failure of the Derivational Theory of
Complexity (DTC). The number of transformational rules applied in the
syntactic derivation of a sentence did not predict how difficult it was to
parse. Worse still, it proved impossible to reverse the transformational
component so that it would derive the correct deep structure (hence
meaning) from a surface word string. Reactions to this mismatch between the
linguists’ grammar and the needs of sentence processing were varied. Some
linguists (Bresnan; Gazdar et al.) responded by abandoning transformational
rules, turning to grammar formalisms that deliver monostratal derivations
that could run equally well in any direction. Efficiently functioning
parsers for several languages have been developed in these frameworks over
the years; see Abeillé (1988).
Within the Transformational Grammar (TG) framework, psycholinguists
responded to the DTC by abandoning the aim of using the grammar as a set of
directions for processing. Instead (Wanner & Kaplan; Fodor) a surface
string was processed as if it were a deep structure, until that failed due
to a constituent not in its underlying position (a ‘filler’ such as a
wh-phrase) or an obligatory constituent absent from the word string (a
‘gap’ or trace of movement); then the parser paired up fillers and gaps
guided by some (ill-defined) consultation with grammatical constraints.
Many experiments filled in details (anticipation of gaps, sensitivity to
islands, etc.) but this general approach remained essentially unchanged
throughout the period of Government-Binding Theory.
With the advent of the Minimalist Program (MP) in the 1990s the situation
worsened. One improvement from a psycholinguistic perspective was that
syntactic derivations became monostratal, with movement (now copy-merge)
interleaved with structure building operations. But movement and structure
building were now both misaligned with sentence processing. This is because
MP derivations inherently operate bottom-up, and at least in
right-branching languages this means from right to left: starting at the
end of a sentence. Again, there was a flurry of reactions to the problem. Fong (2005)
created a filler-gap parser that computes MP structural representations but
without direct use of an MP grammar. Chesi (2007) has reconfigured MP
grammars so that they generate sentences top-down, left-to-right. Neeleman
& van de Koot (2010) step away from the grammar/parser mis-fit by positing
that both represent the same facts but at two different Marr-like levels of
abstraction.
From the point of view of a working psycholinguist, I propose instead that
the parser would build MP trees from interlocking chunks of tree structure,
each chunk being the largest substructure introduced by one word of a
sentence. Where do the chunks come from? The MP grammar generates complete
sentential trees, which are then chopped up into these parser-friendly
building blocks. One major problem remains, however. The tree chunks need
to be informatively tagged so that they can be correctly stitched together
by the parser, e.g., combining a chunk containing a filler with a chunk
containing a gap. Such tagging is available in other theories, such as HPSG;
but in the Minimalist Program no node labels (except lexical ones) are permitted.
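To make the tagging problem concrete, here is a hypothetical Python
sketch of chunks carrying filler/gap tags. The Chunk fields and tag
names are invented for the example, and they are exactly the kind of
non-lexical labels that MP disallows.

    # Hypothetical tagged tree chunks -- our illustration of the tagging
    # the proposal needs, not an MP-sanctioned representation; the
    # 'provides'/'needs' tags are the very labels MP forbids.

    from dataclasses import dataclass, field

    @dataclass
    class Chunk:
        word: str
        provides: set = field(default_factory=set)  # e.g. {"filler:wh"}
        needs: set = field(default_factory=set)     # e.g. {"gap:wh"}

    def can_stitch(left: Chunk, right: Chunk) -> bool:
        """A chunk carrying a wh-filler may be stitched to a chunk whose
        substructure contains a matching wh-gap."""
        return "filler:wh" in left.provides and "gap:wh" in right.needs

    what = Chunk("what", provides={"filler:wh"})
    ate = Chunk("ate", needs={"gap:wh"})
    print(can_stitch(what, ate))  # True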
*16/01/13, 16h-18h*
*Venue: ILPGA, 19 rue des Bernardins, 75005, Salle Mac*
*Disambiguating Triggers for Syntactic Parameters (Joint work with William
Sakas)*
Models of child language acquisition vary widely. Some presuppose innate
guidance that permits rapid and largely error-free learning of syntax,
while others assume statistical analysis of the linguistic input and/or
trial-and-error procedures of various kinds. What type of model is capable
of matching the achievements of child learners depends in large part on how
ambiguous the input is with respect to the triggers (cues) for setting
parameters. Ideally, each (non-default) parameter value would have an
unambiguous trigger, a type of sentence uniquely compatible with that
parameter setting. In reality, this seems unlikely to be true for the
natural language domain. A simple example often noted: SVO word order is
compatible with V2 (verb-second) grammars and with non-V2 grammars.
We cannot – for obvious reasons – establish the extent of between-parameter
ambiguity in the entire domain of natural languages: many of the languages
in that domain are as yet unknown and unanalyzed. But we can approach the
question by examining a constructed domain of languages, as much like
natural languages as possible, but whose syntactic parameters and
structural properties are all precisely specified. In our project, we
created a domain of 3,072 languages characterized by 13 familiar syntactic
parameters (head direction, null subject, etc.). We discovered unambiguous
triggers for all of the non-default parameter values in all of the
languages. However, in order to achieve this we had to posit a tool-kit of
between-parameter priority relations to disambiguate triggers that
otherwise would have been parametrically ambiguous. (This is similar to the
findings of Dresher & Kaye, 1990, for phonological parameters.) The
possible source of the priority relations then becomes a question of
interest.
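A pocket-sized analogue of this search, in Python, may help: a toy
domain of two binary parameters (V2 and null subject) in which SVO is
parametrically ambiguous, exactly as in the example above, while the
marked values have unambiguous triggers. The grammars and sentence types
are invented; the actual study uses 13 parameters and 3,072 languages.

    # Toy analogue of the unambiguous-trigger search -- an invented
    # 2-parameter domain, not the authors' 13-parameter domain.

    from itertools import product

    def language(v2, null_subject):
        """Map a toy grammar to the sentence types it licenses."""
        sents = {"SVO", "XVS"} if v2 else {"SVO"}
        if null_subject:
            sents.add("VO")  # subjectless sentence
        return sents

    grammars = list(product([0, 1], repeat=2))
    for i, name in enumerate(["V2", "null subject"]):
        for value in (0, 1):
            with_value = {s for g in grammars if g[i] == value
                          for s in language(*g)}
            without = {s for g in grammars if g[i] != value
                       for s in language(*g)}
            print(name, value, "unambiguous triggers:", with_value - without)
    # SVO never surfaces as a trigger (compatible with both V2 values);
    # XVS and VO unambiguously signal the two marked values.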
- *Irina Sekerina*
*Associate Professor of Psychology, College of Staten Island*
*Theme: “Visual World Eye-Tracking: A Lifespan Perspective”*
*Dates:*
*07/01/13, 16h-18h*
*Venue: Université Paris 7, 175 rue du Chevaleret, 75013, room 4C92*
*"Introduction:* the eye and eye movements, types of eye-trackers, eye
movements in reading, Eye movements in speech: The visual World
Eye-Tracking Paradigm (VWP), Practicalities of the VWP, Topics in the VWP
research, The VWP Examaple: Spoken-word recognition"
*14/01/13, 16h-18h*
*Venue: Université Paris 7, 175 rue du Chevaleret, 75013, room 4C92*
*"Children:* Development of eye-movement control in children, The
Looking-While-Listening Paradigm, The VWP and development of processing
strategies in children, Topics in the VWP research with children, The VWP
and SLI children"
*21/01/13, 16h-18h*
*Venue: Université Paris 7, 175 rue du Chevaleret, 75013, room 4C92*
*"People with Aphasia*: Spoken word recognition in aphasia, Processing of
sentences with synthetic dependencies in aphasia"
*28/01/13, 16h-18h*
*Venue: Université Paris 7, 175 rue du Chevaleret, 75013, room 4C92*
*"Bilinguals:* Spoken word recognition in bilinguals (Russian-English,
Dutch-English, French-English), Processing of grammatical gender in
bilinguals (Spanish-English), Contrast and prosody in bilinguals
(Russian-English)"
Details of the seminars are attached.
This message is also an opportunity to wish you a happy holiday season!
Best regards,
Arnaud Delimoges
Labex-EFL project manager
arnaud.delimoges at univ-paris3.fr
Labex-EFL calendar:
https://docs.google.com/spreadsheet/ccc?key=0Av5GUZ0wOWaJdEpTbnVyc093RmZBNlVadTlRdlJGU3c#gid=1
--
Prof. Jacqueline Vaissière
Senior Member, Institut Universitaire de France
Laboratoire de Phonétique et de Phonologie (LPP), UMR7018 (
http://lpp.univ-paris3.fr)
Laboratoire d'excellence Empirical Foundations of Linguistics (EFL),
Sorbonne Paris Cité
Université Sorbonne Nouvelle and CNRS
ILPGA, 19 rue des Bernardins, 75005 Paris
tel: 06 15 93 94 71 (01 43 26 57 17: laboratory administrator)
http://www.personnels.univ-paris3.fr/users/vaissier/pub/ARTICLES/index.htm