[Corpora-List] SemEval-2015: Final call for task proposals - due Jan 30!

Preslav Nakov preslavn at gmail.com
Mon Jan 20 03:33:16 UTC 2014


SemEval-2015: International Workshop on Semantic Evaluations

Final Call for Task Proposals

We invite proposals for tasks to be run as part of SemEval-2015.

http://alt.qcri.org/semeval2015/

Starting in 2015, SemEval will run on a two-year cycle, which will give
both task organizers and task participants more time for all steps of
the process, including data preparation, system design, analysis, and
paper writing.

We welcome tasks that test the ability of automatic systems to perform
semantic analysis of text, whether application-dependent or
application-independent. We especially welcome tasks for different
languages and cross-lingual tasks.


We encourage the following aspects in task design:


Common data formats

To ensure that new annotations conform to existing annotation standards,
we encourage the use of existing data encoding standards such as MASC
and UIMA. Where possible, reusing existing annotation standards and
tools makes it easier for teams to participate in multiple tasks.
Moreover, the use of readily available tools should make it easier for
participants to spot bugs and to improve their systems.


Common texts and multiple annotations

For many tasks, finding suitable texts for building training and testing
datasets can in itself be a challenge, and is often done in a somewhat
ad hoc manner. To make it easier for task organizers to find suitable
texts, we encourage the use of resources such as Wikipedia, ANC, and
OntoNotes. Where this makes sense, the SemEval program committee will
encourage task organizers to share the same texts across different
tasks. In due time, we hope that this process will yield multiple
semantic annotations for the same texts.


Baseline systems

To lower the barriers to participation, we encourage task organizers to
provide baseline systems that participants can use as a starting point.
A baseline system typically contains code that reads the data, creates a
baseline response (e.g., random guessing), and outputs the evaluation
results. If possible, baseline systems should be written in widely used
programming languages. We also encourage the use of standards such as
UIMA.
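
As an illustration only (a minimal sketch, not an official template), a
random-guessing baseline of this kind might look roughly as follows in
Python. The file names, tab-separated data format, and label set below
are hypothetical assumptions, not part of any SemEval task.

    # Minimal baseline sketch: read gold-labelled data, produce random
    # guesses, write system output, and report accuracy.
    # File names, data format, and label set are hypothetical.
    import random

    LABELS = ["positive", "negative", "neutral"]  # hypothetical label set

    def read_instances(path):
        """Yield (instance_id, text, gold_label) from a tab-separated file."""
        with open(path, encoding="utf-8") as f:
            for line in f:
                instance_id, text, gold = line.rstrip("\n").split("\t")
                yield instance_id, text, gold

    def main():
        random.seed(0)
        instances = list(read_instances("trial_data.tsv"))  # hypothetical file
        correct = 0
        with open("baseline_output.tsv", "w", encoding="utf-8") as out:
            for instance_id, text, gold in instances:
                guess = random.choice(LABELS)           # random-guessing baseline
                out.write(f"{instance_id}\t{guess}\n")  # system output for scoring
                correct += (guess == gold)
        print(f"Accuracy: {correct / len(instances):.3f}")

    if __name__ == "__main__":
        main()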


Umbrella tasks

To reduce fragmentation of similar tasks, we will encourage task
organizers to propose larger tasks that include several subtasks. For
example, Word Sense Induction in Japanese and Word Sense Induction in
English could be combined into a single umbrella task with several
subtasks. We welcome proposals for such larger tasks. In addition, the
program committee will actively encourage organizers who propose similar
tasks to combine their efforts into larger umbrella tasks.


Application-oriented tasks

We welcome tasks devoted to developing novel applications of
computational semantics. As an analogy, the TREC Question Answering (QA)
track was devoted solely to building QA systems that could compete with
existing IR systems. Similarly, we will encourage tasks that have a
clearly defined end-user application, showcase and enhance our
understanding of computational semantics, and extend the current state
of the art.


IMPORTANT DATES


SemEval-2015


Task proposals due            January 30, 2014
Tasks chosen/merged           February 28, 2014
Trial data ready              April 30, 2014
Training data ready           July 30, 2014
Test data ready               October 2014
Evaluation start              November 15, 2014
Evaluation end                November 30, 2014
Paper submission due          January 30, 2015
Paper reviews due             February 28, 2015
Camera ready due              March 30, 2015
SemEval workshop              Summer 2015

The SemEval-2015 Workshop will be co-located with a major NLP conference in
2015.



SUBMISSION DETAILS


The task proposals should ideally contain the following:

- A summary description of the task
- How the training/testing data will be built and/or procured
- The evaluation methodology to be used, including clear evaluation
  criteria
- The anticipated availability of the necessary resources to the
  participants (copyright, etc.)
- The resources required to prepare the task (computation and annotation
  time, costs of annotations, etc.)

If you are not yet in a position to provide all of these details, that
is acceptable, but please give some thought to each point and present a
sketch of your ideas. We will gladly give feedback.


Please submit proposals as soon as possible, preferably by electronic mail
in PDF format to the SemEval email address:

semeval-organizers at googlegroups.com


CHAIRS

Preslav Nakov, Qatar Computing Research Institute

Torsten Zesch, University of Duisburg-Essen, Germany


The SemEval DISCUSSION GROUP

Please join our discussion group at semeval3 at googlegroups.com in order to
receive announcements and participate in discussions.


The SemEval-2015 Website:

http://alt.qcri.org/semeval2015/