28.2286, Calls: English, Computational Linguistics/Taiwan

The LINGUIST List linguist at listserv.linguistlist.org
Tue May 23 14:20:06 UTC 2017


LINGUIST List: Vol-28-2286. Tue May 23 2017. ISSN: 1069 - 4875.

Subject: 28.2286, Calls: English, Computational Linguistics/Taiwan

Moderators: linguist at linguistlist.org (Damir Cavar, Malgorzata E. Cavar)
Reviews: reviews at linguistlist.org (Helen Aristar-Dry, Robert Coté,
                                   Michael Czerniakowski)
Homepage: http://linguistlist.org

Please support the LL editors and operation with a donation at:
           http://funddrive.linguistlist.org/donate/

Editor for this issue: Kenneth Steimel <ken at linguistlist.org>
================================================================


Date: Tue, 23 May 2017 10:19:54
From: Anil Kumar Singh [nlprnd at gmail.com]
Subject: IJCNLP-2017 Shared Task on Review Opinion Diversification

 
Full Title: IJCNLP-2017 Shared Task on Review Opinion Diversification 
Short Title: RevOpiD-2017 

Date: 01-Dec-2017 - 01-Dec-2017
Location: Taipei, Taiwan 
Contact Person: Anil Kumar Singh
Meeting Email: nlprnd at gmail.com
Web Site: https://sites.google.com/itbhu.ac.in/revopid-2017 

Linguistic Field(s): Computational Linguistics 

Subject Language(s): English (eng)

Call Deadline: 28-Aug-2017 

Meeting Description:

Participants will build systems that rank the top-k reviews of a product as a
summary of the opinions expressed in its full review set, in three different
ways. The shared task will use a subset of the Amazon SNAP product reviews
dataset for experiments. The dataset contains reviews of products in several
categories, e.g., more than 22,000,000 reviews of books and more than
7,000,000 reviews of electronics products.


Call for Participation:

IJCNLP-2017 Shared Task on Review Opinion Diversification

Website: https://sites.google.com/itbhu.ac.in/revopid-2017

Contact email: revopid-org-2017 at googlegroups.com

The shared task aims at producing, for each product, the top-k reviews from a
set of reviews, such that the selected reviews act as a summary of all the
opinions expressed in the review set. The three independent subtasks
incorporate three different ways of selecting the top-k reviews, based on the
helpfulness, representativeness, and exhaustiveness of the opinions expressed
in the review set.

Task Description:

The shared task consists of three independent subtasks. For a given subtask,
participating systems must produce a top-k summarized ranking of reviews (one
ranked list per product) from the given set of reviews. Systems should
minimize the redundancy of the opinions expressed in the selected reviews
while maximizing one of the following properties (each property corresponds to
one subtask):

1) Usefulness rating of the review
2) Representativeness of the overall corpus of reviews
3) Exhaustiveness of opinions expressed
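One common way to trade off such a property score against redundancy is a
greedy, MMR-style selection. The sketch below is only an illustration of that
general idea, not the task's official method; the `score` and `similarity`
functions, and the toy word-overlap similarity, are hypothetical placeholders
that participants would replace with their own models.

```python
def select_top_k(reviews, k, score, similarity, lam=0.7):
    """Greedy MMR-style selection: at each step pick the review that best
    balances a property score against similarity to reviews already chosen."""
    selected = []
    candidates = list(reviews)
    while candidates and len(selected) < k:
        def mmr(r):
            # Redundancy = similarity to the closest already-selected review.
            redundancy = max((similarity(r, s) for s in selected), default=0.0)
            return lam * score(r) - (1 - lam) * redundancy
        best = max(candidates, key=mmr)
        selected.append(best)
        candidates.remove(best)
    return selected

def jaccard(a, b):
    """Toy word-overlap similarity between two review strings."""
    wa, wb = set(a.split()), set(b.split())
    return len(wa & wb) / len(wa | wb) if wa | wb else 0.0

# Toy usage: review length stands in for a real property score.
reviews = ["great battery life", "battery lasts long", "poor screen quality"]
top2 = select_top_k(reviews, 2, score=lambda r: len(r), similarity=jaccard)
```

The `lam` parameter controls the trade-off: values near 1 favor the property
score, values near 0 favor diversity of the selected set.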

Data and Resources:

The training, development and test data will be extracted and annotated from
the Amazon SNAP Review Dataset and will be made available on the website
according to the schedule below.

Evaluation:

Evaluation scripts will be made available on the website.

nDCG (normalized Discounted Cumulative Gain) is tentatively the primary
evaluation measure. Since this is an introductory task, we will also evaluate
system submissions on a wide range of secondary measures for experimental
purposes; these secondary evaluations will not affect the official scoring of
participating systems. More details can be found on the website.
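For reference, nDCG compares the DCG of a system's ranking against the DCG of
the ideal (descending) ordering of the same relevance judgments. The sketch
below shows the standard formula only; it is not the task's official
evaluation script, and the example relevance scores are made up.

```python
import math

def dcg(relevances):
    """Discounted cumulative gain of a ranked list of relevance scores:
    each score is discounted by log2 of its (1-based) rank + 1."""
    return sum(rel / math.log2(i + 2) for i, rel in enumerate(relevances))

def ndcg(ranked_relevances, k=None):
    """nDCG@k: DCG of the system ranking divided by DCG of the ideal
    (descending) ranking of the same relevance scores."""
    rels = ranked_relevances[:k] if k else ranked_relevances
    ideal = sorted(ranked_relevances, reverse=True)
    ideal = ideal[:k] if k else ideal
    ideal_dcg = dcg(ideal)
    return dcg(rels) / ideal_dcg if ideal_dcg > 0 else 0.0

# A perfect ranking scores 1.0; a fully reversed one scores lower.
print(ndcg([3, 2, 1]))            # 1.0
print(round(ndcg([1, 2, 3]), 3))  # 0.79
```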

Invitation:

We invite participation from all researchers and practitioners. As is usual in
shared tasks, the organizers rely on the honesty of participants: anyone with
prior knowledge of part of the data that will eventually be used for
evaluation is expected not to use that knowledge unfairly. The only exception
is the organizing team, whose members cannot submit a system. The organizing
chair will serve as the authority for resolving any disputes concerning
ethical issues or the completeness of system descriptions.

Timeline:

Shared Task Website Ready: May 1, 2017
First Call for Participants Ready: May 1, 2017
Registration Begins: May 15, 2017
Release of Training Data: May 15, 2017
Dryrun: Release of Development Set: July 20, 2017
Dryrun: Submission on Development Set: July 26, 2017
Dryrun: Release of Scores: July 27, 2017
Registration Ends: August 18, 2017
Release of Test Set: August 21, 2017
Submission of Systems: August 28, 2017
System Results: September 5, 2017
System Description Paper Due: September 15, 2017
Notification of Acceptance: September 30, 2017
Camera-Ready Deadline: October 10, 2017

See https://sites.google.com/itbhu.ac.in/revopid-2017 for more information.




------------------------------------------------------------------------------
