Seminar: 3 talks by Mark Johnson, Paris
Thierry Hamon
thierry.hamon at UNIV-PARIS13.FR
Wed Sep 11 20:24:41 UTC 2013
Date: Mon, 9 Sep 2013 18:12:17 +0200
From: Pascal Amsili <Pascal.Amsili at linguist.univ-paris-diderot.fr>
Message-ID: <20130909161217.GH670 at Marine.local>
X-url: http://www.cognition.ens.fr/ColloquiumIEC.html
X-url: http://www.linguist.univ-paris-diderot.fr/linglunch.html
Hello,
While visiting Paris, Mark Johnson will give three talks in the coming
weeks:
10th Sept, ENS, Institut d'Etude de la Cognition
  noon: Synergies in Language Acquisition
12th Sept, LingLunch, Université Paris Diderot
  noon: Language acquisition as statistical inference
20th Sept, Alpage seminar, Université Paris Diderot
  11am: Grammars and Topic Models
Abstracts and practical details are given below.
Best regards,
P. Amsili
----------------------------------------------------------------------
Synergies in Language Acquisition

Mark Johnson
Macquarie University

noon, 10th September, ENS
ENS, Colloquium of the Institut d'Etude de la Cognition
12:00-13:30
salle Langevin, 29 rue d'Ulm, 75005 Paris
http://www.cognition.ens.fr/ColloquiumIEC.html
Each human language contains an unbounded number of different sentences.
How can something so large and complex possibly be learnt? Over the
past two decades we've learned how to define probability distributions
over grammars and the linguistic structures they generate, making it
possible to define statistical models that learn regularities of complex
linguistic structures. Bayesian approaches are particularly attractive
because they can exploit "prior" (e.g., innate) knowledge as well as
learn statistical generalizations from the input. Here we use
computational models to investigate "synergies" in language acquisition,
where a "joint model" is capable of solving "chicken-and-egg" problems
that are challenging for conventional "staged learning" models.
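As background for readers unfamiliar with Bayesian grammar models (this sketch is illustrative only and not taken from the talk; the nonterminal, rules, counts, and prior strength are all invented), a core ingredient is combining a Dirichlet prior with observed rule counts to estimate PCFG rule probabilities:

```python
from collections import Counter

def posterior_rule_probs(rule_counts, alpha=1.0):
    """Posterior mean probabilities for the expansions of one nonterminal,
    under a symmetric Dirichlet(alpha) prior over rule probabilities.
    Larger alpha pulls the estimates toward uniform ("prior" knowledge);
    larger counts pull them toward the observed relative frequencies."""
    total = sum(rule_counts.values()) + alpha * len(rule_counts)
    return {rule: (count + alpha) / total for rule, count in rule_counts.items()}

# Invented toy counts of NP expansions observed in a small sample.
counts = Counter({"NP -> Det N": 6, "NP -> N": 3, "NP -> NP PP": 1})
probs = posterior_rule_probs(counts, alpha=0.5)
```

The posterior mean smoothly interpolates between the prior and the data, which is one reason Bayesian estimators behave sensibly even with the sparse evidence typical of acquisition settings.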
Language acquisition as statistical inference

Mark Johnson
Macquarie University

noon, 12th September, LingLunch
LingLunch Paris Diderot
Thursday, 12th September 2013
12:00-13:00, salle 103
bâtiment Olympe de Gouges
(8) rue Albert Einstein, 75013 Paris
http://www.linguist.univ-paris-diderot.fr/linglunch.html
This talk argues that language acquisition -- in particular, syntactic
parameter setting -- is profitably viewed as a statistical inference
problem. I discuss some issues associated with statistical inference
that linguists might be concerned about, including the possibility of
"Zombie" parameter settings. The bulk of the talk focuses on estimating
parameters in a Stabler-style Minimalist Grammar framework. Building on
recent results of Hunter and Dyer (2013), we show how estimating weights
associated with lexical entries -- including the empty functional
categories that control parametric syntactic variation -- can be reduced
to estimating weights in what appears to be a new grammar formalism
called "feature-weighted context-free grammars", which is a MaxEnt
generalisation of the "tied context-free grammars" of Headden et al
(2009). Importantly, the partition function of a feature-weighted
context-free grammar, and its derivatives, can be calculated using a
generalisation, inspired by the Inside-Outside algorithm, of the
algorithms for calculating partition functions in Nederhof and Satta
(2009). We show how this can be used to learn lexical entries, verb
movement parameters, and XP movement parameters in three toy corpora.
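For readers unfamiliar with partition functions of weighted grammars, here is a minimal sketch (not the talk's formalism; the toy grammar and its weights are invented) of the standard fixed-point computation in the style of Nederhof and Satta: each nonterminal's partition function is the total weight of all trees rooted in it.

```python
def partition_functions(rules, n_iters=200):
    """Fixed-point iteration for the partition functions of a weighted CFG.
    `rules` is a list of (lhs, rhs_tuple, weight) triples. Iterates
    Z[A] <- sum over rules A -> rhs of w * prod(Z[B] for nonterminal B
    in rhs), with terminals contributing a factor of 1; starting from 0,
    this converges to the smallest nonnegative solution."""
    nonterminals = {lhs for lhs, _, _ in rules}
    Z = {A: 0.0 for A in nonterminals}
    for _ in range(n_iters):
        new_Z = {A: 0.0 for A in nonterminals}
        for lhs, rhs, w in rules:
            prod = w
            for sym in rhs:
                if sym in nonterminals:
                    prod *= Z[sym]
            new_Z[lhs] += prod
        Z = new_Z
    return Z

# Toy grammar: S -> S S (weight 0.6) | a (weight 0.4).
# Z[S] satisfies Z = 0.6*Z**2 + 0.4; the iteration converges to the
# smaller root, 2/3 < 1: this grammar leaks weight to infinite trees.
toy_rules = [("S", ("S", "S"), 0.6), ("S", ("a",), 0.4)]
Z = partition_functions(toy_rules)
```

That Z can be strictly less than 1 is exactly why partition functions matter: normalising a weighted grammar into a probability model requires knowing how much total weight its trees carry.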
Grammars and Topic Models

Mark Johnson
Macquarie University

11am, 20th September, Alpage Group
ALPAGE seminar
Friday, 20th September, 11:00-12:30
salle 127
bâtiment Olympe de Gouges
(8) rue Albert Einstein, 75013 Paris
https://www.rocq.inria.fr/alpage-wiki/tiki-index.php?page=seminaire
Context-free grammars have been a cornerstone of theoretical computer
science and computational linguistics since their inception over half a
century ago. Topic models are a newer development in machine learning
that play an important role in document analysis and information
retrieval. It turns out there is a surprising connection between the
two that suggests novel ways of extending both grammars and topic
models. After explaining this connection, I go on to describe
extensions which identify topical multiword collocations and
automatically learn the internal structure of named-entity phrases.
These new models have applications in text data mining and information
retrieval.
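For context (illustrative only, and not from the talk itself): the topic models in question follow the LDA-style generative process, in which every token is produced by first choosing a topic and then a word from that topic. The two topics, vocabulary, and all probabilities below are invented toy values.

```python
import random

def generate_document(doc_topic, topic_word, length, rng):
    """Sample one document from an LDA-style topic model: for each token,
    draw a topic from the document's topic distribution, then a word
    from that topic's word distribution."""
    topics = list(doc_topic)
    words = []
    for _ in range(length):
        t = rng.choices(topics, weights=[doc_topic[k] for k in topics])[0]
        vocab = list(topic_word[t])
        w = rng.choices(vocab, weights=[topic_word[t][v] for v in vocab])[0]
        words.append(w)
    return words

# Invented toy model: two topics over a tiny shared vocabulary.
topic_word = {
    "sports": {"goal": 0.5, "team": 0.4, "bank": 0.1},
    "finance": {"bank": 0.6, "loan": 0.3, "team": 0.1},
}
doc = generate_document({"sports": 0.7, "finance": 0.3}, topic_word,
                        length=10, rng=random.Random(0))
```

The "choose a latent category, then emit" step is also exactly what a PCFG rule expansion does, which is the shape of the connection between topic models and grammars that the talk develops.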
This series of talks is funded by:
Research in Paris Programme - Mairie de Paris
Ecole Normale Supérieure
Ecole des Hautes Etudes en Sciences Sociales
Fondation Pierre Gilles de Gennes
-------------------------------------------------------------------------
Message distributed via the Langage Naturel list <LN at cines.fr>
Information, subscription: http://www.atala.org/article.php3?id_article=48
English version :
Archives : http://listserv.linguistlist.org/archives/ln.html
http://liste.cines.fr/info/ln
The LN list is sponsored by ATALA (Association pour le Traitement
Automatique des Langues)
Information and membership: http://www.atala.org/
-------------------------------------------------------------------------
More information about the Ln mailing list