Call: JNLE

Philippe Blache pb at lpl.univ-aix.fr
Tue Jan 16 13:32:09 UTC 2001


From: Beverly Nunan <bnunan at linus.mitre.org>

2nd CALL FOR PAPERS

JOURNAL OF NATURAL LANGUAGE ENGINEERING

SPECIAL ISSUE ON QUESTION ANSWERING

Guest editors:
Lynette Hirschman (MITRE)
Robert Gaizauskas (University of Sheffield)


As users struggle to navigate the wealth of on-line information now
available, the need for automated question answering systems becomes
more urgent: specifically, for systems that would allow a user to ask a
question in everyday language and get the answer quickly, with back-up
material available on demand. Question answering has become, over the
past several years, a major focus of research activity. This Call for
Papers solicits submissions that discuss the performance, the
requirements, the uses, and the challenges of question answering
systems.

Question answering systems provide a rich research area.  To answer a
question, a system must analyze the question, perhaps in the context of
some ongoing interaction; it must find one or more answers by consulting
on-line resources; and it must present the answer to the user in some
appropriate form, perhaps associated with justification or supporting
materials.
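To make the three stages just described concrete, the toy Python sketch
below (not part of this call) walks a question through analysis,
retrieval, and presentation. Every name in it is hypothetical, and the
keyword-overlap retrieval merely stands in for the information retrieval
and language processing a real system would use:

    # Hypothetical sketch of the three stages: analyze, find, present.
    from dataclasses import dataclass

    @dataclass
    class Answer:
        text: str    # the answer string shown to the user
        source: str  # supporting document, available on demand

    def analyze_question(question: str) -> dict:
        # Toy analysis: guess the expected answer type from the wh-word.
        wh = question.strip().split()[0].lower()
        types = {"who": "PERSON", "when": "DATE", "where": "LOCATION"}
        return {"text": question, "expected_type": types.get(wh, "OTHER")}

    def find_answers(analysis: dict, corpus: list) -> list:
        # Toy retrieval: rank passages by word overlap with the question.
        q_words = set(analysis["text"].lower().split())
        ranked = sorted(corpus,
                        key=lambda d: -len(q_words & set(d[1].lower().split())))
        return [Answer(text=p, source=doc_id) for doc_id, p in ranked[:1]]

    def present(answers: list) -> str:
        # Toy presentation: answer plus a pointer to supporting material.
        if not answers:
            return "No answer found."
        return f"{answers[0].text}  [supporting material: {answers[0].source}]"

    if __name__ == "__main__":
        corpus = [("doc1", "TREC is organized by NIST."),
                  ("doc2", "The workshop was held at Johns Hopkins in 2000.")]
        question = "Where was the workshop held?"
        print(present(find_answers(analyze_question(question), corpus)))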

Several conferences and workshops have focused on aspects of the
question answering research area. For the past two years, the Text
Retrieval Conference (TREC) (http://trec.nist.gov) has sponsored a
question-answering track, which has evaluated systems that answer factual
questions by finding answer strings in the TREC corpus, using both
information retrieval and natural language processing techniques. A
focus on reading comprehension provides a different approach to question
answering, evaluating systems' ability to answer questions about a
specific reading passage. These kinds of tests are used to evaluate
students' comprehension, providing a basis for comparing system
performance to human performance. This was the subject of a Johns
Hopkins Summer Workshop,
http://www.clsp.jhu.edu/ws2000/groups/reading/prj_desc.shtml.

Both of these research areas have had to address a number of difficult
questions:
· How can question answering systems be evaluated? Do we have to have
human graders, or can we find automated ways of grading short answer
tests that approximate human graders closely enough?
· How should questions and answers be classified? Should classifications
be based on linguistic features of questions and answers? On the types
and sources of knowledge used to derive answers? On the types of
processing required to derive answers?
· What makes a question hard? Can we define linguistic features that
help to predict question difficulty?
· Can we identify different classes of users of question answering
systems, and if so, what are their different requirements?
· What makes an answer good? Should answers be short? Long? What about
sentence extracts compared to generated text? What about summaries?
· What is the best way to present answers to a user? How much context
and justification is appropriate? How much drill down needs to be
supported?
· Do question answering systems need to build models of users' knowledge
states to generate appropriate answers? How can this process be managed?
· What are reasonable expectations for question answering systems:
providing factual answers found literally in texts, providing factual
answers inferred from texts, providing summaries of multiple sources,
providing analysis?
· How does the performance of systems compare to the performance of
people? Can such systems complement people? Teach people? Replace
people?
· Is it possible to create domain-independent question answering
systems, or is it critical to restrict the domain of such a system to a
specific topic area? What are the trade-offs in terms of performance?
· Can a question answering system use spoken input? Can it retrieve
information from spoken "documents" such as news stories or interviews?
What are the performance penalties when dealing with the additional
uncertainty that characterizes speech or OCR?


We invite submission of papers addressing any of these questions, or
other issues related to the creation, evaluation, or deployment of
question answering systems. We also encourage submissions that address
infrastructure issues, such as tools for building question answering
systems, for collecting corpora, or for annotating collections.

Submission Information

Submit full papers of no more than 25 pages (exclusive of references),
in twelve-point type, double-spaced, with one-inch margins, before the
initial submission deadline. Submissions not conforming to these
guidelines will not be reviewed.

Email submission is preferred, and should be directed to the special
issue editors at the email address: lynette at mitre.org. The subject line
should read: JNLE QA Submission. Preferred email submission formats are:
Word, PostScript, PDF, or plain text (for papers without complex
figures, etc.).

If email submission is not possible, then five copies of the paper
should be mailed to:

Dr. Lynette Hirschman
The MITRE Corporation 3K-157
202 Burlington Rd.
Bedford, MA 01730
USA

Phone:   781-271-7789
Fax:     781-271-2352

Mailed submissions must arrive on or before the submission deadline.

Submission Dates

   * Submissions are due February 26, 2001.
   * Notification of acceptance will be given by April 23, 2001.
   * Camera-ready copy is due July 2, 2001.
   * Publication: Fall-Winter 2001.

