[Corpora-List] SemEval-2014 - Final Call for Task Proposals

Torsten Zesch zesch at ukp.informatik.tu-darmstadt.de
Fri Aug 30 11:08:04 UTC 2013


SemEval-2014: 7th International Workshop on Semantic Evaluations

Final Call for Task Proposals

The SemEval program committee invites proposals for tasks to be run
as part of SemEval-2014.

Starting with SemEval-2015, the organization will move to a two-year
cycle, which will give both task organizers and task participants more
time for all steps of the process, including data preparation, system
design, analysis, and paper writing.

However, given the enthusiasm expressed within the community for having
tasks next year as well, we are also accepting task proposals for 2014.
As we will be operating on a very tight schedule for 2014, you should
only submit a task proposal for 2014 if you are absolutely sure that you
can meet all the deadlines. Otherwise, it is safer to submit for 2015.

We welcome tasks that can test an automatic system for semantic analysis
of text, be it application-dependent or application-independent. We
especially welcome tasks for different languages and cross-lingual tasks.

We encourage the following aspects in task design:

Common data formats
We encourage the use of existing data encoding standards such as MASC
and UIMA, so that new annotations conform to established conventions.
Where possible, reusing existing annotation standards and tools will
make it easier to participate in multiple tasks. Moreover, the use of
readily available tools should make it easier for participants to spot
bugs and to improve their systems.

Common texts and multiple annotations
For many tasks, finding suitable texts for building training and test
datasets can in itself be a challenge, and the choice of texts is often
somewhat ad hoc. To make it easier for task organizers to find suitable
texts, we encourage the use of resources such as Wikipedia, ANC, and
OntoNotes. Where this makes sense,
the SemEval program committee will encourage task organizers to share the
same texts for different tasks. In due time, we hope that this process
will allow the generation of multiple semantic annotations for the same
text.

Baseline systems
To lower the barriers to participation, we encourage task organizers
to provide baseline systems that participants can use as a starting point.
A baseline system typically contains code that reads the data, creates a
baseline response (e.g., random guessing), and outputs the evaluation
results. If possible, baseline systems should be written in widely used
programming languages. We also encourage the use of standards such as UIMA.
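As an illustration only (not part of the original call), a baseline
system of the kind described above might look like the following sketch;
the tab-separated input format and the label set are hypothetical
assumptions, and a real task would substitute its own data reader and
evaluation metric:

```python
import random

def read_data(lines):
    """Parse hypothetical tab-separated 'text<TAB>label' lines
    into (text, label) pairs."""
    return [tuple(line.rstrip("\n").split("\t")) for line in lines]

def random_baseline(texts, labels, seed=0):
    """Assign each text a label drawn uniformly at random
    (a seeded RNG keeps the baseline reproducible)."""
    rng = random.Random(seed)
    return [rng.choice(labels) for _ in texts]

def accuracy(gold, predicted):
    """Fraction of predictions that match the gold labels."""
    return sum(g == p for g, p in zip(gold, predicted)) / len(gold)

if __name__ == "__main__":
    # Toy example data; a real baseline would read the task's files.
    lines = ["great movie\tpositive", "dull plot\tnegative"]
    data = read_data(lines)
    texts = [text for text, _ in data]
    gold = [label for _, label in data]
    predictions = random_baseline(texts, sorted(set(gold)))
    print("Random-baseline accuracy: %.2f" % accuracy(gold, predictions))
```

Participants could then replace random_baseline with their own system
while reusing the data reader and the evaluation code unchanged.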

Umbrella tasks
To reduce fragmentation of similar tasks, we encourage task organizers
to propose larger tasks that include several subtasks. For example,
Word Sense Induction in Japanese and Word Sense Induction in English
could be combined into a single umbrella task with two subtasks. We
welcome proposals for such larger tasks. In addition, the program
committee will actively encourage task organizers proposing similar
tasks to combine their efforts into larger umbrella tasks.

Application-oriented tasks
We welcome tasks devoted to developing novel applications of
computational semantics. The TREC Question-Answering (QA) track, for
example, was devoted solely to building QA systems that could compete
with the IR systems of the time. Similarly, we encourage tasks that
have a clearly defined end-user application, enhance our understanding
of computational semantics, and extend the current state of the art.


IMPORTANT DATES

SemEval-2014

Task proposals due         September 15, 2013
Tasks chosen/merged        September 25, 2013
Trial data ready           October 30, 2013 [TBC]
Training data ready        December 15, 2013 [TBC]
Test data ready            March 10, 2014 [TBC]
Evaluation start           March 15, 2014 [TBC]
Evaluation end             March 30, 2014 [TBC]
Paper submission due       April 30, 2014 [TBC]
Paper reviews due          May 30, 2014 [TBC]
Camera ready due           June 30, 2014 [TBC]
SemEval workshop           August 23-30, 2014 [TBC]

The SemEval-2014 Workshop will most likely be co-located with COLING.

SemEval-2015

The SemEval-2015 Workshop will be co-located with a major CL conference.
A detailed schedule will be communicated soon, but we already welcome
short statements of interest if you want to organize a task.


SUBMISSION DETAILS

The task proposals should ideally contain the following:
- A summary description of the task (maximum 1 page)
- How the training/testing data will be built and/or procured
- The evaluation methodology to be used, including clear evaluation
  criteria
- The anticipated availability of the necessary resources to the
  participants (copyright, etc.)
- The resources required to prepare the task (computation and annotation
  time, costs of annotations, etc)
If you are not yet able to provide outlines of all of these, that is
acceptable, but please give some thought to each and present a sketch
of your ideas. We will gladly give feedback.

Please submit proposals as soon as possible, preferably by electronic mail
in plain ASCII text to the SemEval email address:

semeval-organizers at googlegroups.com

CHAIRS
Preslav Nakov, Qatar Computing Research Institute
Torsten Zesch, University of Duisburg-Essen, Germany

SemEval DISCUSSION GROUP
Please join our discussion group at
semeval3 at googlegroups.com
in order to receive announcements and participate in discussions.

The SemEval-2014 and SemEval-2015 Websites:
http://alt.qcri.org/semeval2014/
http://alt.qcri.org/semeval2015/
