Conf: Machine Translation Evaluation: LREC 2002

Alexis Nasr alexis.nasr at linguist.jussieu.fr
Fri Dec 21 16:54:33 UTC 2001


  (Apologies if you have already seen an announcement for this workshop.)



              Machine Translation Evaluation: Human
                Evaluators Meet Automated Metrics

                          27 May 2002

                 A hands-on evaluation workshop at
                 LREC 2002 (29 May - 2 June 2002)
                    Las Palmas, Canary Islands


                Call for interest and participation

---------------------------------------------------------------

                             Important dates

Registration with the workshop organizers: 20 February 2002

Registration with LREC 2002 organizers:
   please check the Conference's webpage at: http://www.lrec-conf.org/lrec2002/

Distribution of pre-workshop material: March 2002

Workshop: 27 May 2002
          09:00 to 13:00 morning session
          14:30 to 18:30 afternoon session
-------------------------------------------------------------


                           Preliminary Schedule
Morning
 introduction and welcome
 background on workshop theme
 integration of evaluation exercises (start)
Afternoon
 integration of evaluation exercises (continued)
 reports
 cross-evaluation analysis
 final wrap-up

-------------------------------------------------------------

Background

The Evaluation Working Group of the ISLE project has organized a
series of workshops on MT evaluation. Each of these workshops has
included a practical component in which participants carried out
MT evaluation exercises. These workshops proved very illuminating
and have stimulated ongoing work in the area, much of it reported
at the latest workshop in the series, held at the MT Summit in
September 2001.

Results from previous workshops can be consulted at
http://www.issco.unige.ch/projects/isle/ewg.html,
and the proceedings from the MT Summit in Santiago de Compostela can
be requested from the organizers.

The workshop at LREC will continue the series. It will consist
primarily of hands-on exercises designed to investigate empirically a
small number of metrics proposed for the evaluation of MT systems, and
the potential relationships between them.

In an effort to develop a more systematic MT evaluation
methodology, recent work in the EAGLES and ISLE projects, funded by
the EU and NSF, has created a framework of characteristics in terms
of which MT evaluations and systems, past and future, can be described
and classified.
The resulting taxonomy can be consulted at:
  http://issco-www.unige.ch/projects/isle/taxonomy2/.

Previous workshops have led to critical analysis of measures drawn
from the literature, and to the creation of new measures. Of the
latter, several are aimed at eventual automation of the evaluation
task and/or at finding relatively simple and inexpensive measures
which correlate well with more complex measures that are hard
to automate or expensive to implement.
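
By way of illustration only, a minimal sketch of the kind of
inexpensive, automatable measure meant here might score a candidate
translation by its unigram overlap with a human reference. The
function below is invented for this announcement and is not one of
the measures studied in the ISLE workshops:

    # Toy example of a simple, inexpensive, automatable measure:
    # the fraction of candidate tokens that also occur in a human
    # reference translation. Illustrative only.
    def unigram_precision(candidate, reference):
        """Fraction of candidate tokens found in the reference."""
        cand_tokens = candidate.lower().split()
        ref_tokens = set(reference.lower().split())
        if not cand_tokens:
            return 0.0
        matches = sum(1 for tok in cand_tokens if tok in ref_tokens)
        return matches / len(cand_tokens)

    score = unigram_precision("the cat sat on the mat",
                              "the cat is sitting on the mat")
    # score == 5/6: five of the six candidate tokens appear
    # in the reference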

Given this background, the time has come to concentrate on
systematizing the evaluation measures themselves. For any particular
measure, one would like to know how accurate it is, how expensive or
difficult it is to apply, how independent it is of other measures,
and so on. Very little of this kind of information is available to
date.

This workshop will focus on these issues. The organizers will provide
the participants in advance with the materials required to:
  - perform a small evaluation, using one or two measures
  - perform a cross-measure analysis of the resulting scores
  - create a general characterization of each measure's performance.

The participants will then apply these measures to the data made
available, and bring their results to the workshop in order to
integrate them with other participants' results.
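
As a rough sketch of what such a cross-measure analysis might
involve, one could compute the correlation between the scores that
two measures assign to the same outputs. The measure names and all
numbers below are invented; the actual measures and data will be
distributed before the workshop:

    # Sketch of a cross-measure analysis: Pearson correlation
    # between the scores two measures assign to the same outputs.
    # All numbers below are invented for illustration.
    def pearson(xs, ys):
        """Pearson correlation of two equal-length score lists."""
        n = len(xs)
        mean_x = sum(xs) / n
        mean_y = sum(ys) / n
        cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
        sd_x = sum((x - mean_x) ** 2 for x in xs) ** 0.5
        sd_y = sum((y - mean_y) ** 2 for y in ys) ** 0.5
        return cov / (sd_x * sd_y)

    # Hypothetical scores from a cheap automated measure and an
    # expensive human-judged measure over the same six outputs:
    automated = [0.62, 0.48, 0.71, 0.55, 0.80, 0.44]
    human     = [3.1,  2.4,  3.6,  2.9,  4.2,  2.2]
    print(pearson(automated, human))  # ~0.996: the cheap measure
                                      # tracks the expensive one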

The overall intention of the workshop is to discover, empirically,
which characteristics of a measure are easily determinable, and how
accurate the measures actually are. Only by assessing the evaluations
themselves can we eventually arrive at a small but accurate set of
measures that adequately covers the phenomena that MT system
evaluators, system developers, and potential MT users care about.

It is our hope that participants will feel inspired to continue
this process, so that the combined results can be assembled later,
integrated into the framework, and become a valuable resource to
anyone interested in MT evaluation.


-------------------------------------------------------------
Organizing Committee

Marianne Dabbadie
  EVALING, Paris, France
Tony Hartley
  Centre for Translation Studies, University of Leeds, UK
Eduard Hovy
  USC Information Sciences Institute, Marina del Rey, USA
Margaret King
  ISSCO/TIM/ETI, University of Geneva, Switzerland
Bente Maegaard
  Center for Sprogteknologi, Copenhagen, Denmark
Sandra Manzi
  ISSCO/TIM/ETI, University of Geneva, Switzerland
Keith J. Miller
  The MITRE Corporation, USA
Widad Mustafa El Hadi
  Université Lille III - Charles de Gaulle, France
Andrei Popescu-Belis
  ISSCO/TIM/ETI, University of Geneva, Switzerland
Florence Reeder
  The MITRE Corporation, USA
Michelle Vanni
  U.S. Department of Defense, USA


-------------------------------------------------------------

Intention to participate:

Participants wishing to receive preparatory data should send the
following information to the contact person below:
- name, address, email contact;
- experience in MT evaluation;
- languages known and level of comprehension (elementary, fair, good,
  near-native, native).

Contact:

Andrei Popescu-Belis:
Email (preferred): andrei.popescu-belis at issco.unige.ch
Fax: (41 22) 705 86 86
Regular mail:
  ISSCO/TIM/ETI, University of Geneva
  40, bd du Pont d'Arve
  CH-1211 Geneva 4 - SWITZERLAND

Cost of the Workshop:
  Main conference participants: 90 EUR
  Other participants: 140 EUR

Main conference and workshop site:
   Palacio de Congresos, Las Palmas, Canary Islands


Main conference web site:
  http://www.lrec-conf.org/lrec2002/