[Corpora-List] Call for participation: Shared Translation Task (WMT11)

Philipp Koehn pkoehn at inf.ed.ac.uk
Thu Jan 27 14:16:02 UTC 2011


EMNLP 2011
SIXTH WORKSHOP ON STATISTICAL MACHINE TRANSLATION

July 30-31, 2011, Edinburgh, UK

The translation task of this workshop focuses on European language pairs.
Translation quality will be evaluated on a shared, unseen test set of news
stories. We provide a parallel corpus as training data, a baseline system,
and additional resources for download. Participants may augment the
baseline system or use their own system.

GOALS

The goals of the shared translation task are:

    * To investigate the applicability of current MT techniques when
       translating into languages other than English
    * To examine special challenges in translating between European
       languages, including word order differences and morphology
    * To create publicly available corpora for machine translation and
       machine translation evaluation
    * To generate up-to-date performance numbers for European
       languages in order to provide a basis of comparison in future
       research
    * To offer newcomers a smooth start with hands-on experience
       in state-of-the-art statistical machine translation methods

We hope that both beginners and established research groups will
participate in this task.

TASK DESCRIPTION

We provide training data for four European language pairs, and a
common framework (including a baseline system). The task is to
improve current methods. This can be done in many ways.
For instance, participants could try to:

    * improve word alignment quality, phrase extraction, phrase scoring
    * add new components to the open source software of the baseline system
    * augment the system otherwise (e.g. by preprocessing, reranking, etc.)
    * build an entirely new translation system

Participants will use their systems to translate a test set of unseen sentences
in the source language. The translation quality is measured by a manual
evaluation and various automatic evaluation metrics. Participants agree to
contribute about eight hours of work to the manual evaluation.

You may participate in any or all of the following language pairs:

    * French-English
    * Spanish-English
    * German-English
    * Czech-English

For all language pairs we will test translation in both directions. To have
a common framework that allows for comparable results, and also to lower
the barrier to entry, we provide a common training set and baseline system.

We also strongly encourage you to participate if you use your own training
corpus, your own sentence alignment, your own language model, or your
own decoder.

If you use additional training data or existing translation systems, you must
flag that your system uses additional data. We will distinguish system
submissions that used the provided training data (constrained) from
submissions that used significant additional data resources. Note that basic
linguistic tools such as taggers, parsers, or morphological analyzers are
allowed in the constrained condition.

Your submission report should highlight the ways in which your own methods
and data differ from the standard task. We may break down submitted
results into different tracks, based on what resources were used. We are
mostly interested in submissions that are constrained to the provided
training data, so that the comparison focuses on the methods, not on the
data used. You may submit contrastive runs to demonstrate the benefit of
additional training data.

TEST SET SUBMISSION

To submit your results, please first convert them into the SGML format
required by the NIST BLEU scorer, and then upload them to the website
http://matrix.statmt.org/
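
As an illustration, here is a minimal sketch (in Python) of wrapping
one-translation-per-line output in NIST-style SGML. The setid, docid, and
sysid values below are placeholders, and the exact markup required may
differ; consult the documentation distributed with the scorer and the
test set.

    import html

    def wrap_sgml(lines, setid="test2011", srclang="de", trglang="en",
                  docid="doc1", sysid="my-system"):
        # Wrap one segment per input line in NIST-style SGML markup.
        out = ['<tstset setid="%s" srclang="%s" trglang="%s">'
               % (setid, srclang, trglang),
               '<doc docid="%s" sysid="%s">' % (docid, sysid)]
        for i, line in enumerate(lines, start=1):
            out.append('<seg id="%d">%s</seg>'
                       % (i, html.escape(line.strip())))
        out.append('</doc>')
        out.append('</tstset>')
        return "\n".join(out)

    # Example: convert a plain-text output file to SGML.
    with open("output.detok.en") as f:
        print(wrap_sgml(f.readlines()))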

EVALUATION

Evaluation will be done both automatically and by human judgment.

    * Manual Scoring: We will collect subjective judgments about translation
quality from human annotators. If you participate in the shared task, we ask
you to commit about eight hours to the manual evaluation. The evaluation
will be done with an online tool.

    * As in previous years, and as in most other translation campaigns, we
expect the translated submissions to be recased, detokenized, and in XML
format (see the sketch below).
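
If your system operates on tokenized, lowercased text, the output must be
recased and detokenized before submission. Below is a naive, illustrative
detokenizer (in Python); its punctuation rules are only a rough
approximation, and in practice you would use the detokenization script
that ships with your toolkit.

    import re

    def detokenize(tokens):
        # Naive English detokenizer: join tokens with spaces, then
        # reattach punctuation and clitics to the preceding word.
        text = " ".join(tokens)
        text = re.sub(r" ([,.;:!?%)])", r"\1", text)  # closing punctuation
        text = re.sub(r"([($]) ", r"\1", text)        # opening punctuation
        text = re.sub(r" (n't|'s|'re|'ve|'ll|'m|'d)", r"\1", text)
        return text

    print(detokenize("this is n't a drill , is it ?".split()))
    # -> "this isn't a drill, is it?"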

DATES

Release of training data: available on web site
Test set distributed for translation task: March 14, 2011
Submission deadline for translation task: March 18, 2011
Paper due date: Apr 22, 2011
Workshop: July 30-31, 2011
