Conf: LREC 2002 workshop: Machine Translation Evaluation

Alexis Nasr alexis.nasr at LINGUIST.JUSSIEU.FR
Fri Mar 22 10:03:57 UTC 2002

Dear members of the LN list,

Please find below the call for participation for the
MT Evaluation workshop organized on May 27th, 2002 at
the LREC 2002 Conference, Canary Islands.

Please accept our apologies if you receive multiple
copies of this announcement. Thank you,
Andrei Popescu-Belis


              Machine Translation Evaluation: Human
                Evaluators Meet Automated Metrics

                          27 May 2002

                 A hands-on evaluation workshop at
                 LREC 2002 (27 May - 2 June 2002)
                    Las Palmas, Canary Islands

               Second call for interest and participation


                        Important dates

LREC 2002 advance registration deadline: March 29th, 2002

Please check the Conference's webpage at:

Distribution of pre-workshop material: April 2002

Workshop: May 27th, 2002
          09:00 to 13:00 morning session
          14:30 to 18:30 afternoon session

                           Preliminary Schedule

   introduction and welcome
   background on workshop theme
   integration of evaluation exercises (start)
   integration of evaluation exercises (continued)
   cross-evaluation analysis
   final wrap-up



The Evaluation Working Group of the ISLE project has organised a
series of workshops on MT evaluation. Each of these workshops has
included a practical component in which participants were asked to
carry out MT evaluation exercises. These workshops proved very
illuminating and have stimulated ongoing work in the area, much of
which was reported at the latest workshop in the series, held at
the MT Summit meeting in September 2001.

Results from previous workshops can be consulted at:
and the proceedings from the MT Summit in Santiago de Compostela can
be requested from the organisers.

The workshop at LREC 2002 will continue the series, and will consist
primarily of hands-on exercises designed to investigate empirically a
small number of metrics proposed for the evaluation of MT systems and
the potential relationships between them.

In an effort to develop a more systematic MT evaluation
methodology, recent work in the EAGLES and ISLE projects, funded by
the EU and NSF, has created a framework of characteristics in terms
of which MT evaluations and systems, past and future, can be described
and classified.  The resulting taxonomy can be consulted at:

Previous workshops have led to critical analysis of measures drawn
from the literature, and to the creation of new measures.  Of the
latter, several are aimed at eventual automation of the evaluation
task and/or at finding relatively simple and inexpensive measures
which correlate well with more complex measures that are hard
to automate or expensive to implement.
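To make the idea of a "relatively simple and inexpensive measure" concrete, here is a minimal sketch (a hypothetical illustration, not one of the measures proposed in the EAGLES/ISLE work) of a crude word-overlap score between a system translation and a reference translation:

```python
def overlap_score(candidate: str, reference: str) -> float:
    """Fraction of candidate tokens that also appear in the reference.

    A deliberately crude, cheap-to-compute proxy measure; real
    automated MT metrics are considerably more elaborate.
    """
    cand_tokens = candidate.lower().split()
    ref_tokens = set(reference.lower().split())
    if not cand_tokens:
        return 0.0
    hits = sum(1 for tok in cand_tokens if tok in ref_tokens)
    return hits / len(cand_tokens)

print(overlap_score("the cat sat on the mat",
                    "a cat sat on a mat"))
```

Such a score costs almost nothing to compute; the empirical question raised above is whether scores like it correlate well with the expensive, human-judged measures.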

Given this background, the time has come to concentrate on
systematizing the actual evaluation measures themselves.  For any
particular measure, one would like to know how accurate it is, how
expensive and/or difficult to apply, how independent of other measures,
etc.  Very little of this type of information is available to date.

This workshop will focus on these issues.  The organizers will provide
the participants in advance with the materials required to:
  - perform a small evaluation, using one or two measures;
  - perform a cross-measure analysis of the resulting scores;
  - create a general characterization of each measure's performance.

The participants will then apply these measures to the data made
available, and bring their results to the workshop in order to
integrate them with other participants' results.
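A cross-measure analysis of this kind might, for instance, correlate per-segment scores from two measures applied to the same outputs. A minimal sketch (the score lists are invented for illustration; the workshop materials will define the actual measures and data):

```python
import statistics

def pearson(xs: list[float], ys: list[float]) -> float:
    """Pearson correlation between two lists of per-segment scores."""
    mx, my = statistics.fmean(xs), statistics.fmean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Hypothetical per-segment scores: a cheap automated measure and
# human fluency judgements on the same five translated segments.
cheap_metric  = [0.61, 0.45, 0.72, 0.30, 0.55]
human_fluency = [3.8, 2.9, 4.1, 2.2, 3.5]
print(round(pearson(cheap_metric, human_fluency), 3))
```

A high correlation would suggest the cheap measure can stand in for the expensive one; pooling such analyses across participants is precisely the integration step planned for the workshop.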

The overall intention of the workshop is to discover, empirically,
what kinds of characteristics are easily determinable, and how
accurate they actually are.  Only through a process of assessing the
evaluations can we eventually arrive at a small but accurate set of
measures that adequately cover the set of phenomena MT system
evaluators, system developers, and potential MT users care about.

It is our hope that participants will feel inspired to continue
this process, so that the combined results can be assembled later,
integrated into the framework, and become a valuable resource to
anyone interested in MT evaluation.

Organizing Committee

Marianne Dabbadie
  EVALING, Paris, France
Tony Hartley
  Centre for Translation Studies, University of Leeds, UK
Eduard Hovy
  USC Information Sciences Institute, Marina del Rey, USA
Margaret King
  ISSCO/TIM/ETI, University of Geneva, Switzerland
Bente Maegaard
  Center for Sprogteknologi, Copenhagen, Denmark
Sandra Manzi
  ISSCO/TIM/ETI, University of Geneva, Switzerland
Keith J. Miller
  The MITRE Corporation, USA
Widad Mustafa El Hadi
  Université Lille III - Charles de Gaulle, France
Andrei Popescu-Belis
  ISSCO/TIM/ETI, University of Geneva, Switzerland
Florence Reeder
  The MITRE Corporation, USA
Michelle Vanni
  U.S. Department of Defense, USA


Intention to participate:

Participants wishing to receive preparatory data should send the
following information to the contact person below:
- name, address, email contact;
- experience in MT evaluation;
- languages known and level of comprehension (elementary, fair,
  good, near-native, native).

   Andrei Popescu-Belis
   Email: andrei.popescu-belis at
   Fax:   (41 22) 705 86 89
   Regular mail:
   ISSCO/TIM/ETI, University of Geneva
   40, bd du Pont d'Arve
   CH-1211 Geneva 4 - SWITZERLAND

Cost of the Workshop:
   LREC 2002 participants: 90 EURO
   Other participants: 140 EURO

Registration forms are available on the LREC 2002 conference site:

Main conference and workshop site:
   Palacio de Congresos, Las Palmas, Canary Islands

Message distributed by the Langage Naturel list <LN at>
Information, subscription :
English version          :
Archives                 :

The LN list is sponsored by ATALA (Association pour le Traitement
Automatique des Langues)
Information and membership:
