[Corpora-List] Australia: Task-Focused Summarization and Question Answering Workshop, at Coling-ACL 2006 --- CFP

Timothy Baldwin tim at csse.unimelb.edu.au
Tue Feb 14 11:13:36 UTC 2006


         CALL FOR PAPERS - COLING/ACL 2006 Conference Workshop

               Task-Focused Summarization and Question Answering

               http://research.microsoft.com/~lucyv/WS7.htm

                             Sydney, Australia
                               July 23, 2006


               *** Submission Deadline:  May 1, 2006 ***


                    Multilingual Summarization Evaluation

               http://research.microsoft.com/~lucyv/MSE2006.htm


Workshop Description

This one-day workshop will focus on the challenges that the Summarization
and QA communities face in developing useful systems and appropriate
evaluation measures.  Our aim is to bring these two communities together
to discuss current challenges and to learn from each other's approaches,
following the success of a similar workshop held at ACL-05, which brought
together the Machine Translation and Summarization communities.

A previous summarization workshop (Text Summarization Branches Out,
ACL-04) explored different scenarios for summarization, such as small
mobile devices, legal texts, speech, dialog, email and other genres.  We
encourage a deeper analysis of these, and other, user scenarios, focusing
on the utility of summarization and question answering for such scenarios
and genres, including cross-lingual ones.

By focusing on the measurable benefits that summarization and question
answering have for users, we hope one outcome of this workshop will be to
better motivate research and focus areas for summarization and question
answering, and to establish task-appropriate evaluation methods.  Given a
user scenario, it would ideally be possible to demonstrate that a given
evaluation method predicts greater or lesser utility for users.  We
especially encourage papers describing intrinsic and extrinsic evaluation
metrics in the context of these user scenarios.
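
To make this concrete: demonstrating that an evaluation method predicts
utility typically reduces to correlating the metric's scores with human
judgments collected over the same system outputs.  The following is a
minimal sketch only, using hypothetical per-system score lists, not a
prescribed procedure:

    # Illustrative sketch: correlate an automatic metric's scores with
    # human utility judgments over the same system outputs.
    # The score lists below are hypothetical placeholders.
    from scipy.stats import spearmanr

    metric_scores = [0.42, 0.37, 0.55, 0.48, 0.31]  # automatic metric, per system
    human_scores  = [3.1, 2.8, 4.0, 3.5, 2.5]       # human ratings, per system

    rho, p_value = spearmanr(metric_scores, human_scores)
    print(f"Spearman rho = {rho:.3f} (p = {p_value:.3f})")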

Both summarization and QA have a long history of evaluations:
summarization since 1998 (SUMMAC) and QA since 1999 (TREC).  The
importance of summarization evaluation is evidenced by the many DUC
workshops; at DUC-05, extensive discussions were held regarding the use
of ROUGE, ROUGE-BE, and the pyramid method, a semantic-unit based
approach, for evaluating summarization systems.  The QA community faces
related evaluation issues for answers to complex questions, such as the
TREC definition questions.  Common considerations in both communities
include what constitutes a good answer or response to an information
request, and how one determines whether a "complex" answer is sufficient.
In both communities, as well as in the distillation component of the 2005
DARPA GALE program, researchers are exploring how to capture semantic
equivalence among components of different answers (nuggets, factoids or
SCUs).  There have also been efforts to design new automatic scoring
measures, such as ROUGE-BE and POURPRE.  We encourage papers discussing
these and other metrics that report on how well the metric correlates
with human judgments and/or predicts effectiveness in task-focused
scenarios for summarization and QA.
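
For concreteness, ROUGE-1 is essentially clipped unigram recall of a
candidate summary against a reference.  The sketch below is an
illustration only, not the official ROUGE toolkit, which additionally
supports stemming, multiple references and higher-order n-grams:

    # Illustrative sketch of a ROUGE-1-style unigram recall score.
    from collections import Counter

    def rouge1_recall(candidate: str, reference: str) -> float:
        cand_counts = Counter(candidate.lower().split())
        ref_counts = Counter(reference.lower().split())
        # Each reference unigram counts as matched at most as often as
        # it appears in the candidate (clipped overlap).
        overlap = sum(min(cnt, cand_counts[tok])
                      for tok, cnt in ref_counts.items())
        total = sum(ref_counts.values())
        return overlap / total if total else 0.0

    print(rouge1_recall("the cat sat on the mat",
                        "a cat was sitting on the mat"))  # ~0.571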

For the summarization community, this workshop also continues a thread
from ACL 2005, where those interested in evaluation measures participated
in a joint workshop on evaluation for summarization and MT.  As a sequel
to that workshop, at which the results of the first Multilingual
Multi-document Summarization Evaluation (MSE) were presented
(http://www.isi.edu/~cyl/MTSE2005/MLSummEval.html), we plan to report and
discuss the results of the 2006 MSE evaluation.

In summary, we solicit papers on any or all of the following three topics:

- Task-based user scenarios requiring question answering (beyond
  factoids/lists) and/or summarization, across genres and languages
- Extrinsic and intrinsic evaluations, correlating extrinsic measures
  with the outcome of task completion and/or intrinsic measures with
  previously obtained human judgments
- The 2006 Multilingual Multi-document Summarization Evaluation

Anyone with an interest in summarization, QA and/or evaluation is
encouraged to participate in the workshop.  We are looking for research
papers on the aforementioned topics, as well as position papers that
identify limitations in current approaches and describe promising future
research directions.

SUMMARIZATION TASK: Multilingual Summarization Evaluation

Details for MSE 2006 will be available soon at
http://research.microsoft.com/~lucyv/MSE2006.htm.

For description and results of last year's MSE task, please see:
http://www.isi.edu/~cyl/MTSE2005.

Send email to lucy.vanderwende at microsoft.com to be added to the MSE
mailing list.

PAPER FORMAT:

Papers should be no more than 8 pages, formatted following the guidelines
that will be made available on the conference Web site.  The reviewing
process will be blind, so authors' names, affiliations, and all
self-references should not be included in the paper.  Authors who cannot
submit a PDF file electronically should contact the organizers at least
one week prior to the May 1st deadline.  Proceedings will be published in
conjunction with the main COLING/ACL proceedings.

Details on how to submit your paper are available on the Web site or by
contacting the organizers.

IMPORTANT DATES:

Task-focused Summarization and Question Answering Workshop

Submission due:             May 1, 2006
Notification of acceptance: May 22, 2006
Camera-ready papers due:    June 1, 2006
Workshop date:              July 23, 2006

Multilingual Summarization Evaluation:

Dates to be announced.  Send email to lucy.vanderwende at microsoft.com
to be added to the email distribution list.


WORKSHOP ORGANIZERS

Tat-Seng Chua, National University of Singapore; chuats at comp.nus.edu.sg
Jade Goldstein, U.S. Department of Defense; jgstewa at afterlife.ncsc.mil
Simone Teufel, Cambridge University; simone.teufel at cl.cam.ac.uk
Lucy Vanderwende, Microsoft Research; lucy.vanderwende at microsoft.com

PROGRAM COMMITTEE

Regina Barzilay (MIT)
Sabine Bergler (Concordia University, Canada)
Silviu Cucerzan (Microsoft Research)
Hang Cui (National University of Singapore)
Krzysztof Czuba (Google)
Hal Daume III (USC/ISI)
Hans van Halteren (Radboud University Nijmegen, Netherlands)
Sanda Harabagiu (University of Texas, Dallas)
Chiori Hori (CMU)
Eduard Hovy (USC/ISI)
Hongyan Jing (IBM Research)
Guy Lapalme (University of Montreal)
Geunbae (Gary) Lee (Postech Univ, Korea)
Chin-Yew Lin (USC/ISI)
Inderjeet Mani (MITRE)
Marie-France Moens (Katholieke Universiteit Leuven, Belgium)
Ani Nenkova (Columbia University)
Manabu Okumura (Tokyo Institute of Technology)
John Prager (IBM Research)
Horacio Saggion (University of Sheffield, UK)
Judith Schlesinger (IDA/CCS)
Karen Sparck Jones (University of Cambridge)
Nicola Stokes (University of Melbourne)
Beth Sundheim (SPAWAR Systems Center)
Tomek Strzalkowski (University at Albany)
Ralph Weischedel (BBN)


