
LINGUIST List: Vol-24-3189. Wed Aug 07 2013. ISSN: 1069-4875.

Subject: 24.3189, Calls: Computational Linguistics, Semantics/Ireland

Moderator: Damir Cavar, Eastern Michigan U <damir at linguistlist.org>

Reviews: Veronika Drake, U of Wisconsin Madison
Monica Macaulay, U of Wisconsin Madison
Rajiv Rao, U of Wisconsin Madison
Joseph Salmons, U of Wisconsin Madison
Mateja Schuck, U of Wisconsin Madison
Anja Wanner, U of Wisconsin Madison
       <reviews at linguistlist.org>

Homepage: http://linguistlist.org


Editor for this issue: Bryn Hauk <bryn at linguistlist.org>
================================================================  


Date: Wed, 07 Aug 2013 11:46:51
From: Preslav Nakov [preslav.nakov at gmail.com]
Subject: 7th International Workshop on Semantic Evaluations

 
Full Title: 7th International Workshop on Semantic Evaluations 
Short Title: SemEval-2014 

Date: 29-Aug-2014 - 30-Aug-2014
Location: Dublin, Ireland 
Contact Person: Preslav Nakov
Meeting Email: preslav.nakov at gmail.com
Web Site: http://alt.qcri.org/semeval2014/ 

Linguistic Field(s): Computational Linguistics; Semantics 

Call Deadline: 15-Sep-2013 

Meeting Description:

SemEval-2014: 7th International Workshop on Semantic Evaluations

The SemEval-2014 Workshop will most likely be co-located with COLING.

The SemEval-2015 Workshop will be co-located with a major CL conference. A detailed schedule will be communicated soon.

SemEval Discussion Group:

Please join our discussion group at semeval3 at googlegroups.com to receive announcements and participate in discussions.

The SemEval-2014 and SemEval-2015 Websites:

http://alt.qcri.org/semeval2014/
http://alt.qcri.org/semeval2015/

Call for Task Proposals:
 
The SemEval program committee invites proposals for tasks to be run as part of SemEval-2014 or SemEval-2015.
 
Starting with SemEval-2015, SemEval will move to a two-year cycle, giving both task organizers and task participants more time for every step of the process, including data preparation, system design, analysis, and paper writing.
 
However, given the enthusiasm expressed within the community for also having tasks next year, we are accepting task proposals for 2014 as well. Since the 2014 schedule will be very tight, you should only submit a task proposal for 2014 if you are absolutely sure that you can meet all the deadlines; otherwise, it is safer to submit for 2015.
 
We welcome tasks that can test an automatic system for semantic analysis of text, be it application-dependent or application-independent. We especially welcome tasks for different languages and cross-lingual tasks.
 
We encourage attention to the following aspects of task design:
 
Common Data Formats:

To ensure that new annotations conform to existing annotation standards, we encourage the use of established data encoding standards such as MASC and UIMA. Reusing existing annotation standards and tools, where possible, will make it easier to participate in multiple tasks. Moreover, the use of readily available tools should make it easier for participants to spot bugs and improve their systems.
 
Common Texts and Multiple Annotations:

For many tasks, finding suitable texts from which to build training and test datasets can itself be a challenge, and the choice is often somewhat ad hoc. To make it easier for task organizers to find suitable texts, we encourage the use of resources such as Wikipedia, ANC, and OntoNotes. Where it makes sense, the SemEval program committee will encourage task organizers to share the same texts across different tasks. In due time, we hope that this process will yield multiple semantic annotations of the same texts.
 
Baseline Systems:

To lower the barriers to participation, we encourage task organizers to provide baseline systems that participants can use as a starting point. A baseline system typically contains code that reads the data, produces a baseline response (e.g., by random guessing), and outputs the evaluation results. If possible, baseline systems should be written in widely used programming languages. We also encourage the use of standards such as UIMA.
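
For illustration, a minimal baseline of this kind might look as follows in Python. This is only a sketch under assumed conventions: the tab-separated "id, text, label" data format and the file names are hypothetical placeholders, not an official SemEval specification.

import random

def read_data(path):
    # Read (id, text, label) triples, one per tab-separated line.
    # The format is a hypothetical placeholder, not a SemEval requirement.
    with open(path, encoding="utf-8") as f:
        return [tuple(line.rstrip("\n").split("\t")) for line in f]

def random_baseline(train, test, seed=0):
    # Guess a label for each test instance, drawn uniformly at random
    # from the label set observed in the training data.
    rng = random.Random(seed)
    labels = sorted({label for _, _, label in train})
    return {inst_id: rng.choice(labels) for inst_id, _, _ in test}

def accuracy(gold, predictions):
    # Fraction of instances whose predicted label matches the gold label.
    return sum(predictions[i] == label for i, _, label in gold) / len(gold)

if __name__ == "__main__":
    train = read_data("train.tsv")   # hypothetical file names
    test = read_data("test.tsv")
    predictions = random_baseline(train, test)
    print("Random-guessing accuracy: %.3f" % accuracy(test, predictions))

Participants could then replace random_baseline with their own system while reusing the data reading and scoring code unchanged.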
 
Umbrella Tasks:

To reduce fragmentation across similar tasks, we encourage task organizers to propose larger tasks that include several subtasks. For example, Word Sense Induction in Japanese and Word Sense Induction in English could be combined into a single umbrella task with several subtasks. We welcome proposals for such larger tasks. In addition, the program committee will actively encourage organizers proposing similar tasks to combine their efforts into larger umbrella tasks.
 
Application-Oriented Tasks:

We welcome tasks devoted to developing novel applications of computational semantics. As an analogy, the TREC Question Answering (QA) track was devoted solely to building QA systems that could compete with existing IR systems. Similarly, we will encourage tasks that have a clearly defined end-user application, showcase and enhance our understanding of computational semantics, and extend the current state of the art.
 
Important Dates:
 
SemEval-2014:
 
Task proposals due: September 15, 2013
Tasks chosen/merged: September 25, 2013
Trial data ready: October 30, 2013 (to be confirmed)
Training data ready: December 15, 2013 (to be confirmed)
Test data ready: March 10, 2014 (to be confirmed)
Evaluation start: March 15, 2014 (to be confirmed)
Evaluation end: March 30, 2014 (to be confirmed)
Paper submission due: April 30, 2014 (to be confirmed)
Paper reviews due: May 30, 2014 (to be confirmed)
Camera ready due: June 30, 2014 (to be confirmed)
SemEval workshop: August 23-30, 2014 (to be confirmed)

SemEval-2015:

A detailed schedule will be communicated soon, but we already welcome short statements of interest from those who would like to organize a task.

Submission Details:
 
The task proposals should ideally contain the following:

- A summary description of the task (maximum one page)
- How the training/testing data will be built and/or procured
- The evaluation methodology to be used, including clear evaluation criteria (a toy scorer sketch follows this list)
- The anticipated availability of the necessary resources to the participants (copyright, etc.)
- The resources required to prepare the task (computation and annotation time, cost of annotations, etc.)
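
To make "clear evaluation criteria" concrete, here is a toy scorer sketch in Python for per-label precision, recall, and F1. The labels and the list-based input format are hypothetical; each task will define its own metrics and formats.

def precision_recall_f1(gold, predicted, target):
    # Count true positives, false positives, and false negatives
    # for the given target label.
    tp = sum(1 for g, p in zip(gold, predicted) if p == target and g == target)
    fp = sum(1 for g, p in zip(gold, predicted) if p == target and g != target)
    fn = sum(1 for g, p in zip(gold, predicted) if p != target and g == target)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

# Toy example with made-up labels:
gold = ["positive", "negative", "positive", "neutral"]
pred = ["positive", "positive", "negative", "neutral"]
print(precision_recall_f1(gold, pred, "positive"))  # (0.5, 0.5, 0.5)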

If you are not yet in a position to provide all of these details, that is acceptable, but please give some thought to each point and present a sketch of your ideas. We will gladly give feedback.
 
Please submit proposals as soon as possible, preferably by electronic mail in plain ASCII text to the SemEval email address: semeval-organizers at googlegroups.com
 
Chairs:

Preslav Nakov, Qatar Computing Research Institute
Torsten Zesch, University of Duisburg-Essen, Germany






