Corpora: 2nd Call for Papers for the Journal of Natural Language Engineering

Beverly Nunan bnunan at linus.mitre.org
Fri Jan 5 21:13:46 UTC 2001


Attached is the second "Call for Papers" for a special issue on
question answering for the Journal of Natural Language Engineering.
If you have any questions, please direct them to Dr. Lynette Hirschman
at 781-271-7789 or by email at lynette at mitre.org. Thank you.

2nd CALL FOR PAPERS

JOURNAL OF NATURAL LANGUAGE ENGINEERING

SPECIAL ISSUE ON QUESTION ANSWERING

Guest editors: 
Lynette Hirschman (MITRE) 
Robert Gaizauskas (University of Sheffield)


As users struggle to navigate the wealth of on-line information now
available, the need for automated question answering systems becomes
more urgent: specifically, for systems that would allow a user to ask a
question in everyday language and get the answer quickly, with back-up
material available on demand. Question answering has become, over the
past several years, a major focus of research activity. This Call for
Papers solicits submissions that discuss the performance, the
requirements, the uses, and the challenges of question answering
systems. 

Question answering systems provide a rich research area.  To answer a
question, a system must analyze the question, perhaps in the context of
some ongoing interaction; it must find one or more answers by consulting
on-line resources; and it must present the answer to the user in some
appropriate form, perhaps associated with justification or supporting
materials. 
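By way of illustration only (this sketch is not part of the call, and
no system described here is implied), the three stages above can be
mocked up in a few lines of Python. All names, the toy corpus, and the
keyword heuristics below are hypothetical; a real system would need far
richer question analysis, retrieval, and presentation.

import string

def tokens(text):
    """Naive tokenizer: lowercase and strip surrounding punctuation."""
    return set(w.strip(string.punctuation) for w in text.lower().split())

def analyze_question(question):
    """Stage 1: guess the expected answer type from the question word."""
    q = question.lower()
    if q.startswith("who"):
        return "PERSON"
    if q.startswith("when"):
        return "DATE"
    if q.startswith("where"):
        return "LOCATION"
    return "OTHER"

def find_answers(question, corpus):
    """Stage 2: rank passages by keyword overlap with the question."""
    q_words = tokens(question)
    scored = [(len(q_words & tokens(p)), p) for p in corpus]
    scored.sort(reverse=True)
    return [p for score, p in scored if score > 0]

def present_answer(question, corpus):
    """Stage 3: show the best passage, with runners-up as support."""
    answer_type = analyze_question(question)
    candidates = find_answers(question, corpus)
    if not candidates:
        return "No answer found."
    best, support = candidates[0], candidates[1:]
    return (f"Expected answer type: {answer_type}\n"
            f"Answer passage: {best}\n"
            f"Supporting passages: {support}")

# A toy, invented "on-line resource" standing in for a document corpus.
corpus = [
    "TREC is organized by NIST in Gaithersburg, Maryland.",
    "The question answering track began at TREC in 1999.",
]
print(present_answer("Where is TREC organized?", corpus))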

Several conferences and workshops have focused on aspects of the
question answering research area. For the past two years, the Text
Retrieval Conference (TREC) (http://trec.nist.gov) has sponsored a
question-answering track, which has evaluated systems that answer factual
questions by finding answer strings in the TREC corpus, using both
information retrieval and natural language processing techniques. A
focus on reading comprehension provides a different approach to question
answering, evaluating systems' ability to answer questions about a
specific reading passage. These kinds of tests are used to evaluate
students' comprehension, providing a basis for comparing system
performance to human performance. This was the subject of a Johns
Hopkins Summer Workshop,
http://www.clsp.jhu.edu/ws2000/groups/reading/prj_desc.shtml.

Both of these research areas have had to address a number of difficult
questions:
* How can question answering systems be evaluated? Do we have to have
human graders, or can we find automated ways of grading short-answer
tests that approximate human graders closely enough?
* How should questions and answers be classified? Should classifications
be based on linguistic features of questions and answers? On the types
and sources of knowledge used to derive answers? On the types of
processing required to derive answers?
* What makes a question hard? Can we define linguistic features that
help to predict question difficulty?
* Can we identify different classes of users of question answering
systems, and if so, what are their different requirements?
* What makes an answer good? Should answers be short? Long? What about
sentence extracts compared to generated text? What about summaries?
* What is the best way to present answers to a user? How much context
and justification is appropriate? How much drill-down needs to be
supported?
* Do question answering systems need to build models of users' knowledge
states to generate appropriate answers? How can this process be managed?
* What are reasonable expectations for question answering systems:
providing factual answers found literally in texts, providing factual
answers inferred from texts, providing summaries of multiple sources,
providing analysis?
* How does the performance of systems compare to the performance of
people? Can such systems complement people? Teach people? Replace
people?
* Is it possible to create domain-independent question answering
systems, or is it critical to restrict the domain of such a system to a
specific topic area? What are the trade-offs in terms of performance?
* Can a question answering system use spoken input? Can it retrieve
information from spoken "documents" such as news stories or interviews?
What are the performance penalties when dealing with the additional
uncertainty that characterizes speech or OCR?


We invite submission of papers addressing any of these questions, or
other issues related to the creation, evaluation, or deployment of
question answering systems. We also encourage submissions that address
infrastructure issues, such as tools for building question answering
systems, for collecting corpora, or for annotating collections. 

Submission Information

Submit full papers of no more than 25 pages (exclusive of references),
in twelve-point type, double-spaced, with one-inch margins, before the
initial submission deadline. Submissions not conforming to these
guidelines will not be reviewed.

Email submission is preferred and should be directed to the special
issue editors at the email address: lynette at mitre.org. The subject line
should read: JNLE QA Submission. Preferred email submission formats are
Word, PostScript, PDF, or plain text (for papers without complex
figures, etc.).

If email submission is not possible, then five copies of the paper
should be mailed to:

Dr. Lynette Hirschman
The MITRE Corporation 3K-157
202 Burlington Rd.
Bedford, MA 01730
USA

Phone:   781-271-7789
Fax:     781-271-2352

Mailed submissions must arrive on or before the deadline for submission.

Submission Dates

   * Submissions are due on February 26, 2001.
   * Notification of acceptance will be given by April 23, 2001.
   * Camera-ready copy is due July 2, 2001.
   * Publication: Fall-Winter 2001

