LINGUIST List: Vol-25-4407. Tue Nov 04 2014. ISSN: 1069 - 4875.
Subject: 25.4407, FYI: SemEval-2015 Task 3: Community Question Answering
Editor for this issue: Uliana Kazagasheva <uliana at linguistlist.org>
================================================================
Date: Tue, 04 Nov 2014 17:04:43
From: Preslav Nakov [preslav.nakov at gmail.com]
Subject: SemEval-2015 Task 3: Community Question Answering
SemEval-2015 Task 3: Answer Selection in Community Question
Answering
Website: http://alt.qcri.org/semeval2015/task3
Google Group: https://groups.google.com/forum/#!forum/semeval-cqa
Evaluation period: December 5 - 22, 2014
Paper submission: January 30, 2015
Summary
Task:
Answer selection in community question answering data (i.e., user-generated
content).
Features:
- The task is related to an application scenario, but it has been decoupled
from the IR component to facilitate participation and to keep the focus on the
aspects most relevant to the SemEval community
- A more challenging task than traditional question answering
- Related to textual entailment, semantic similarity, and natural language
inference
- Multilingual: Arabic and English
Target:
We target semantically oriented solutions using rich language representations
to see whether they can improve over simpler bag-of-words and word matching
techniques.
Task Description:
Community question answering (QA) systems are gaining popularity online. Such
systems are seldom moderated and quite open, and thus they impose few
restrictions, if any, on who can post and who can answer a question. On the
positive side, this means that one can freely ask any question and expect some
good, honest answers. On the negative side, it takes effort to go through all
possible answers and to make sense of them. For example, it is not unusual for
a question to have hundreds of answers, which makes it very time-consuming for
the user to inspect and winnow them.
We propose a task that can help automate this process by identifying the posts
in the answer thread that answer the question well, those that are potentially
useful to the user (e.g., because they can help educate him or her on the
subject), and those that are simply bad or useless.
Moreover, for the special case of YES/NO questions, we propose an extreme
summarization version of the task, which asks participants to produce a simple
YES/NO summary of all valid answers.
In short
Subtask A:
Given a question (short title + extended description) and several community
answers, classify each answer as one of the following (an illustrative
word-matching sketch follows the list):
- definitely relevant (good),
- potentially useful (potential), or
- bad or irrelevant (bad, dialog, non-English, other).
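To make the setting concrete, here is a minimal sketch, in Python, of the kind
of simple bag-of-words / word-matching baseline mentioned above, i.e., the sort
of system we hope semantically richer approaches will improve upon. The
word_overlap function, the thresholds, and the toy question-answer pairs are
our own illustration; they are not part of the official task data, baselines,
or scoring.

# Purely illustrative sketch (not part of the official task materials):
# a bag-of-words word-matching baseline that labels each community answer
# as good / potential / bad using word overlap with the question and two
# arbitrary, hand-picked thresholds.

def word_overlap(question, answer):
    """Jaccard overlap between the word sets of question and answer."""
    q = set(question.lower().split())
    a = set(answer.lower().split())
    return len(q & a) / len(q | a) if q | a else 0.0

def label_answer(question, answer, good=0.20, potential=0.05):
    score = word_overlap(question, answer)
    if score >= good:
        return "good"
    if score >= potential:
        return "potential"
    return "bad"

# Toy example (invented, not taken from the task data):
question = "Is it easy to find a furnished flat in Doha?"
answers = [
    "Yes, it is easy to find a furnished flat in Doha via the classifieds.",
    "I moved to Doha last year, great city!",
    "lol",
]
for answer in answers:
    print(label_answer(question, answer), "-", answer)

Participant systems will, of course, be evaluated against the annotated labels
in the released datasets; the thresholds above are just placeholders.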
Subtask B:
Given a YES/NO question (short title + extended description) and a list of
community answers, decide whether the global answer to the question should be
yes, no, or unsure, based on the individual good answers. This subtask is
available for English only.
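Assuming one already has a yes/no/unsure polarity prediction for each
individual good answer (how those predictions are obtained is left open), a
simple way to produce the global label for Subtask B is majority voting. The
sketch below, in Python, is our own simplification rather than the official
method; in particular, the tie-handling rule is an arbitrary choice.

# Purely illustrative sketch (our own simplification, not the official
# method): aggregate per-answer yes / no / unsure predictions over the
# good answers of a YES/NO question into a single global label by
# majority vote, falling back to "unsure" on ties.

from collections import Counter

def global_answer(good_answer_polarities):
    """good_answer_polarities: one 'yes' / 'no' / 'unsure' label per good answer."""
    counts = Counter(good_answer_polarities)
    if not counts:
        return "unsure"
    (top_label, top_count), = counts.most_common(1)
    tie = sum(1 for c in counts.values() if c == top_count) > 1
    if top_label == "unsure" or tie:
        return "unsure"
    return top_label

# Toy examples (invented, not taken from the task data):
print(global_answer(["yes", "yes", "no"]))        # -> yes
print(global_answer(["yes", "no"]))               # -> unsure (tie)
print(global_answer(["unsure", "unsure", "no"]))  # -> unsure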
For a more detailed description of the English and Arabic datasets, please
check here:
http://alt.qcri.org/semeval2015/task3/index.php?id=detailed-task-and-data-description
Register to participate here:
http://alt.qcri.org/semeval2015/task3/index.php?id=registration
Finally, do not miss the important dates (the evaluation period is from
December 5 to December 22).
Important Dates:
- Evaluation period starts: December 5, 2014
- Evaluation period ends: December 22, 2014
- Paper submission due: January 30, 2015
- Paper notification: Early March, 2015
- Camera-ready due: March 30, 2015
- SemEval-2015 workshop: June 4-5, 2015 (co-located with NAACL 2015)
For any questions, please use our Google Group:
semeval-cqa at googlegroups.com.
Organizers:
Lluís Màrquez, Qatar Computing Research Institute
James Glass, CSAIL-MIT
Walid Magdy, Qatar Computing Research Institute
Alessandro Moschitti, Qatar Computing Research Institute
Preslav Nakov, Qatar Computing Research Institute
Bilal Randeree, Qatar Living
Linguistic Field(s): Computational Linguistics
Subject Language(s): Arabic, Standard (arb)
English (eng)
----------------------------------------------------------
LINGUIST List: Vol-25-4407
----------------------------------------------------------