Call: 4 NAACL'01 announcements
Philippe Blache
pb at lpl.univ-aix.fr
Wed Jan 31 18:02:48 UTC 2001
______________________________________________________________________
1/ From: Priscilla Rasmussen <rasmusse at cs.rutgers.edu>
Subject: NAACL-01 WordNet and Other Lexical Resources Workshop
2/ From: Priscilla Rasmussen <rasmusse at cs.rutgers.edu>
Subject: NAACL'01 Automatic Summarization Workshop--DEADLINE EXTENSION
3/ From: Priscilla Rasmussen <rasmusse at cs.rutgers.edu>
Subject: NAACL-01 Adaptation in Dialogue Systems Workshop
4/ From: Priscilla Rasmussen <rasmusse at cs.rutgers.edu>
Subject: NAACL-01 Machine Translation Evaluation Workshop
______________________________________________________________________
1/ From: Priscilla Rasmussen <rasmusse at cs.rutgers.edu>
Subject: NAACL-01 WordNet and Other Lexical Resources Workshop
WordNet and Other Lexical Resources:
Applications, Extensions and Customizations
* Please note the merger and extended deadline! *
NAACL 2001 Workshop
Carnegie Mellon University, Pittsburgh
3 and 4 June, 2001
Sponsored by the Association for Computational Linguistics Special
Interest Group on the Lexicon.
Previously announced as two different workshops:
- WordNet: Extensions and NLP Applications
- Customizing Lexical Resources
Lexical resources have become important basic tools within NLP and
related fields. The range of resources available to the researcher is
vast and diverse, from simple word lists to complex machine-readable
dictionaries (MRDs) and thesauri. These resources contain many
different types of explicit linguistic information, presented in
different formats and at various levels of granularity. Much
information is also left implicit in the description; for example, the
definitions of lexical entries generally contain genus, encyclopaedic
and usage information.
The majority of resources used by NLP researchers were not intended
for computational uses. For instance, MRDs are a by-product of the
dictionary publishing industry, and WordNet was an experiment in
modelling the mental lexicon.
In particular, WordNet has become a valuable resource in human
language technology and artificial intelligence. Due to its vast
coverage of English words, WordNet provides the general
lexico-semantic information on which open-domain text processing is
based. Furthermore, the development of WordNets in several other
languages extends this capability to trans-lingual applications,
enabling text mining across languages. For example, in Europe, WordNet
has been used as the starting point for the development of a
multilingual database covering several European languages (the
EuroWordNet project).
Other resources such as the Longman Dictionary of Contemporary English
and Roget's Thesaurus have also been used for various NLP tasks.
The topic of this workshop is the exploitation of existing resources
for particular computational tasks such as Word Sense Disambiguation,
Generation, Information Retrieval, Information Extraction, Question
Answering and Summarization. We invite paper submissions on topics
that include, but are not limited to, the following:
- Resource usage in NLP and AI
- Resource extension to reflect the lexical coverage within a
particular domain;
- Resource augmentation, e.g. by adding extra word senses or enriching
the information associated with existing entries. For instance,
several extensions of the WordNet lexical database have recently been
initiated, in the United States and abroad, with the goal of providing
the NLP community with additional knowledge that models pragmatic
information not always present in texts but required for document
processing;
- Improvement of the consistency or quality of resources by
e.g. homogenizing lexical descriptions, making implicit lexical
knowledge explicit and clustering word senses;
- Merging resources, i.e. combining the information in more than one
resource e.g. by producing a mapping between their senses. For
instance, WordNet has been incorporated in several other linguistic
and general knowledge bases (e.g. FrameNet and CYC);
- Corpus-based acquisition of knowledge;
- Mining common sense knowledge from resources;
- Multilingual WordNets and applications.
Paper submission
Submissions must use the NAACL LaTeX style or Microsoft Word style.
Paper submissions should consist of a full paper (6 pages or less).
Style and template files:
- NAACL style file
- NAACL bibliography style file
- LaTeX sample file
- Microsoft Word template file
Submission procedure
Electronic submission only. For papers from the U.S., please send the
PDF or PostScript file of your paper to moldovan at seas.smu.edu; please
submit papers from other countries to w.peters at dcs.shef.ac.uk.
Because reviewing is blind, no author information should be included as
part of the paper.
A separate identification page must be sent by email including title,
all authors, theme area, keywords, word count, and an abstract of no
more than 5 lines. Late submissions will not be accepted. Notification
of receipt will be e-mailed to the first author shortly after
receipt.
Please address any questions to moldovan at seas.smu.edu or
w.peters at dcs.shef.ac.uk
Important dates
Paper submission deadline: February 20, 2001
Notification of acceptance: March 10, 2001
Camera ready due: March 25, 2001
Workshop date: June 3 and 4, 2001
Organizers
Sanda Harabagiu, SMU, sanda at seas.smu.edu
Dan Moldovan, SMU, moldovan at seas.smu.edu
Wim Peters, University of Sheffield, wim at dcs.shef.ac.uk
Mark Stevenson, University of Sheffield, marks at dcs.shef.ac.uk
Yorick Wilks, University of Sheffield, yorick at dcs.shef.ac.uk
Programme Committee
Roberto Basili (Universita di Roma Tor Vergata)
Martin Chodorow (Hunter College of CUNY)
Christiane Fellbaum (Princeton University)
Ken Haase (MIT)
Sanda Harabagiu (SMU)
Graeme Hirst (University of Toronto)
Robert Krovetz (NEC)
Claudia Leacock (ETS)
Steven Maiorano (AAT)
Rada Mihalcea (SMU)
Dan Moldovan (SMU)
Simonetta Montemagni (Istituto di Linguistica Computazionale, Pisa)
Martha Palmer (University of Pennsylvania)
Maria Teresa Pazienza (Universita di Roma Tor Vergata)
Wim Peters (University of Sheffield)
German Rigau (Universitat Politecnica de Catalunya)
Mark Stevenson (University of Sheffield)
Randee Tengi (Princeton University)
Paola Velardi (Universita di Roma "La Sapienza")
Ellen Voorhees (NIST)
Piek Vossen (Sail Labs)
Yorick Wilks (University of Sheffield)
Workshop URL:
http://www.seas.smu.edu/~moldovan/mwnw/
______________________________________________________________________
2/ From: Priscilla Rasmussen <rasmusse at cs.rutgers.edu>
Subject: NAACL'01 Automatic Summarization Workshop--DEADLINE EXTENSION
Workshop on Automatic Summarization 2001
(pre-conference workshop in conjunction with NAACL2001)
Sunday, June 3, 2001
Pittsburgh, Pennsylvania, USA
sponsored by
ACL (Association for Computational Linguistics)
MITRE Corporation
New submission deadline: February 23, 2001
Organizing Committee:
Jade Goldstein Carnegie Mellon University jade+ at cs.cmu.edu
Chin-Yew Lin USC/Information Sciences Institute cyl at isi.edu
Program Committee:
Breck Baldwin Baldwin Language Tech
Hsin-Hsi Chen National Taiwan University
Udo Hahn Universitaet Freiburg
Eduard Hovy USC/Information Sciences Institute
Hongyan Jing Columbia University
Elizabeth Liddy Syracuse University
Daniel Marcu USC/Information Sciences Institute
Inderjeet Mani MITRE
Shigeru Masuyama Toyohashi University of Technology
Marie-Francine Moens Katholieke Universiteit Leuven
Vibhu Mittal Google Research
Sung Hyon Myaeng Chungnam National University
Akitoshi Okumura NEC
Chris Paice Lancaster University
Dragomir Radev University of Michigan, Ann Arbor
Karen Sparck Jones University of Cambridge
Tomek Strzalkowski State University of New York, Albany
Simone Teufel Columbia University
Workshop Website:
http://www.isi.edu/~cyl/was-naacl2001 (for the latest update)
I. OVERVIEW
II. CALL FOR PAPERS
III. FORMAT FOR SUBMISSION
I. OVERVIEW
The problem of automatic summarization poses a variety of tough challenges
in both NL understanding and generation. A spate of recent papers and
tutorials on this subject at conferences such as ACL, ANLP/NAACL, ACL/EACL,
AAAI, ECAI, IJCAI, and SIGIR points to a growing interest in research in
this field. Several commercial summarization products have also appeared.
There have been several workshops on this subject in the past: Dagstuhl in
1994, ACL/EACL in 1997, the AAAI Spring Symposium in 1998, and ANLP/NAACL
in 2000. All of these were extremely successful, and the field is now
enjoying a period of revival, advancing at a much quicker pace than before.
NAACL'2001 is an ideal occasion to host another workshop on this problem.
II. CALL FOR PAPERS
The Workshop on Automatic Summarization program committee invites papers
addressing (but not limited to):
Summarization Methods:
use of linguistic representations,
statistical models,
NL generation for summarization,
production of abstracts and extracts,
multi-document summarization,
narrative techniques in summarization,
multilingual summarization,
text compaction,
multimodal summarization (including summarization of audio),
use of information extraction,
studies and modeling of human summarizers,
improving summary coherence,
concept fusion,
use of thesauri and ontologies,
trainable summarizers,
applications of machine learning,
knowledge-rich methods.
Summarization Resources:
development of corpora for training and evaluating summarizers,
annotation standards,
shared summarization tools,
document segmentation,
topic detection, and
clustering related to summarization.
Evaluation Methods:
intrinsic and extrinsic measures,
on-line and off-line evaluations,
standards for evaluation,
task-based evaluation scenarios,
user studies,
inter-judge agreement.
Workshop Themes:
1. Summarization Applications
2. Multidocument Summarization
3. Multilingual Text Summarization
4. Evaluation and Text/Training Corpora
5. Generation for Summarization
6. Topic Identification for Summarization
7. Integration with Web and IR Access
III. FORMAT FOR SUBMISSION
Submissions must use the ACL LaTeX style or Microsoft Word style
WAS-submission.doc (both available from the Automatic Summarization workshop
web page). Paper submissions should consist of a full paper (5000 words or
less, including references).
SUBMISSION QUESTIONS
Please send submission questions to cyl at isi.edu
SUBMISSION PROCEDURE
Electronic submission only: send the pdf (preferred), postscript, or MS Word
form of your submission to: cyl at isi.edu. The Subject line should be
"NAACL2001 WORKSHOP PAPER SUBMISSION". Because reviewing is blind, no author
information is included as part of the paper. An identification page must be
sent in a separate email with the subject line: "NAACL2001 WORKSHOP ID PAGE"
and must include title, all authors, theme area, keywords, word count, and
an abstract of no more than 5 lines. Late submissions will not be accepted.
Notification of receipt will be e-mailed to the first author shortly after
receipt.
DEADLINES (Tentative)
Paper submission deadline: February 23, 2001
Notification of acceptance for papers: March 23, 2001
Camera ready papers due: April 6, 2001
Workshop date: June 3, 2001
______________________________________________________________________
3/ From: Priscilla Rasmussen <rasmusse at cs.rutgers.edu>
Subject: NAACL-01 Adaptation in Dialogue Systems Workshop
* Note deadline extension! *
NAACL 2001 Workshop on
Adaptation in Dialogue Systems
Webpage: www.cs.utah.edu/~cindi/AdaptDial.html
Overview
The purpose of this workshop is to bring together researchers
investigating the application of learning and adaptation to dialogue
systems, both speech and text based.
Methods for learning and adaptation show promise for enhancing the
robustness, flexibility, and overall accuracy of dialogue systems. While
researchers in many parts of computational linguistics who use these
methods have begun to form communities, the burgeoning set of activities
within dialogue has remained relatively disparate. We are interested in
adaptation that includes learning procedures as well as decision-making
methods aimed at dynamically reconfiguring dialogue behavior based on
context. We would also like to explore techniques that allow a dialogue
system to learn from experience or from data sets gathered in empirical
studies. Researchers looking at methods to automatically improve different
modules of dialogue systems, or the system as a whole, have not had many
opportunities to come together to share their work. We thus welcome
submissions from researchers supplementing the traditional development of
dialogue systems with techniques from machine learning, statistical NLP,
and decision theory.
Call For Papers
We solicit papers from a number of research areas, including:
- Use of machine learning techniques at all levels of dialogue, from
speech recognition to generation; from dialogue strategy to user
modeling
- Adapting to the user as a dialogue progresses
- Dialogue as decision making under uncertainty
- User and user group modeling
- Use of corpora in developing components of dialogue systems,
including issues in annotation
- Evaluation of adaptive dialogue systems
- Comparison of different approaches to applying adaptive techniques to
dialogue
We also hope to include a session for the demonstration of working
systems, as time permits. The demonstration sessions will be open to
anyone who wishes to bring their adaptive conversational systems for
demonstration to other members of the workshop. Presenters are asked to
submit a paper that is specifically directed at a demonstration of their
current systems.
Important Dates (2001):
Paper submission deadline: Feb 19
Notification of acceptance for papers: Mar 16
Camera ready papers due: Mar 30
Workshop date: Jun 4
Paper Submission
Electronic submission is strongly preferred. We will be setting up an
email alias for paper submission in the next several days; please check
the web page for updates.
Submissions must use the NAACL LaTeX style or Microsoft Word style. Paper
submissions should consist of a full paper (6 pages or less). The
templates are available at the workshop web site.
Organizers
Eric Horvitz Microsoft Research horvitz at microsoft.com
Tim Paek Microsoft Research timpaek at microsoft.com
Cindi Thompson University of Utah cindi at cs.utah.edu
Program Committee
Jennifer Chu-Carroll Bell Labs
Peter Heeman Oregon Graduate Institute
Diane Litman AT&T Labs
Candace Sidner MERL
Marilyn Walker AT&T Labs
______________________________________________________________________
4/ From: Priscilla Rasmussen <rasmusse at cs.rutgers.edu>
Subject: NAACL-01 Machine Translation Evaluation Workshop
CALL FOR PARTICIPATION
Workshop on Machine Translation Evaluation
in conjunction with NAACL-2001
WORKSHOP ON MT EVALUATION:
Hands-On Evaluation
3 June, 2001
Pittsburgh, PA
United States
MOTIVATION
Evaluation of language tools, particularly tools that generate language,
remains an interesting and general problem. Machine Translation (MT) is a
prime example. Approaches to evaluating MT are even more plentiful than
approaches to MT itself; the number of evaluations and range of variants is
confusing to anyone considering an evaluation. In an effort to systematize
MT evaluation, the NSF-funded ISLE project has created a taxonomy of
evaluation-related features and measures. Unfortunately, many prior
evaluations do not include an adequate specification of important aspects
such as evaluation process complexity, cost, variance of score, etc.
In an effort to drive MT evaluation to the next level, this workshop will
focus on exercising methods for acquiring such information for several
important MT evaluation measures. The workshop thus embodies the challenge
of Hands-On Evaluation, within the context of the framework being developed
by the ISLE MT Evaluation effort. It follows a workshop on MT Evaluation
held at the AMTA Conference in Cuernavaca, Mexico, in October 2000, and a
second workshop planned for April 2001 in Geneva.
STRUCTURE OF THE WORKSHOP
The first part of the workshop will introduce the ISLE MT Evaluation effort,
funded by NSF and the EU, to create a general framework of characteristics
in terms of which MT evaluations, past and future, can be described and
classified. The framework, whose antecedents are the JEIDA and EAGLES
reports, consists of taxonomies of increasingly specific features, with
associated measures and pointers to systems. The discussion will review the
current state of the classification effort as well as the MT evaluation
history from which it was drawn.
The second and principal part of the workshop will focus on real-world
evaluation. In an effort to facilitate common ground for discussion,
participants will be given specific evaluation exercises, defined by the
taxonomy and recent MT evaluation trends. In addition, they will be given a
set of texts generated by MT systems, along with human reference
translations. During the workshop, they will be asked to perform the given
evaluation exercises on this data. This common framework will give insights
into the
evaluation process and useful metrics for driving the development process.
The results of the exercises will then be presented by the participants,
synthesized into a uniform description of each evaluation, and added to the
ISLE taxonomy, which has been made available on the web for future analysis
in MT evaluation. The results of the workshop will also be incorporated
into a publicly available resource, and the workbook from the workshop will
be usable by teachers of evaluation and MT.
QUESTIONS AND ISSUES
Since this is a hands-on workshop, participants will be asked to submit an
intent to participate. At that time, they will be able to download the
relevant data for review. During the workshop, they will be given a series
of exercises and split into teams to work through them. The result of the
workshop will be at least one paper that addresses the following
threads of investigation within the framework:
* What is the variance inherent in an evaluation measure?
* How complex is it to employ a measure?
* What task(s) is the evaluation measure suited to?
* What kinds of tools automate the evaluation process?
* What kinds of metrics are useful for users versus system developers?
* How can we use the evaluation process to speed up or improve the MT
development process?
* What impact does real-world data have?
* How can we evaluate MT when MT is a small part of the data flow?
* How independent is MT of the subsequent processing? That is, cleaning
up the data improves performance, but does it improve it enough? How do
we quantify that?
TO REGISTER
Since this is a hands-on workshop, no papers are being solicited.
Participants will be expected to take part in the exercises and report their
conclusions. They will additionally be encouraged to contribute to a summary
paper of the workshop proceedings. The data will be sent to participants in
advance of the workshop, with instructions on what to do and what to
prepare. The amount of work required should not exceed 4 hours (much less
than paper preparation).
To register an intent to participate, please send a paragraph outlining your
interest in MT, experience with MT evaluation, knowledge of either Spanish
or Arabic, and the following contact information to Flo Reeder (contact info
below):
* name
* address
* e-mail address
* knowledge of other foreign languages
* translation domain specialization
Participants will need to register for the workshop as part of their NAACL
registration.
IMPORTANT DATES
Intent to Participate: April 16, 2001
Release of Data: April 23, 2001
Workshop date: June 3, 2001
CONTACT POINTS
Florence Reeder
MITRE Corporation
1820 Dolley Madison Blvd.
McLean, VA 22102-3481
TEL: 703-883-7156
FAX: 703-883-1379
EMAIL: freeder at mitre.org
Eduard Hovy
Information Sciences Institute
University of Southern California
4676 Admiralty Way
Marina del Rey, CA 90292-6695
TEL: 310-448-8731
FAX: 310-823-6714
EMAIL: hovy at isi.edu
Workshop URL: http://www.isi.edu/natural-language/mt-eval-naacl.html
___________________________________________________________________
Message distributed via the Langage Naturel mailing list <LN at cines.fr>
Information, subscription: http://www.biomath.jussieu.fr/LN/LN-F/
English version: http://www.biomath.jussieu.fr/LN/LN/
Archives: http://web-lli.univ-paris13.fr/ln/