LINGUIST List: Vol-27-5. Mon Jan 04 2016. ISSN: 1069-4875.

Subject: 27.5, Calls: Computational Ling, Text/Corpus Ling, Translation/Slovenia

Moderators: linguist at linguistlist.org (Damir Cavar, Malgorzata E. Cavar)
Reviews: reviews at linguistlist.org (Anthony Aristar, Helen Aristar-Dry, Sara Couture)
Homepage: http://linguistlist.org

*****************    LINGUIST List Support    *****************
                   25 years of LINGUIST List!
Please support the LL editors and operation with a donation at:
           http://funddrive.linguistlist.org/donate/

Editor for this issue: Anna White <awhite at linguistlist.org>
================================================================


Date: Mon, 04 Jan 2016 12:18:27
From: Georg Rehm [georg.rehm at dfki.de]
Subject: LREC 2016 Workshop: Translation Evaluation

 
Full Title: LREC 2016 Workshop: Translation Evaluation 
Short Title: MTEVAL2016 

Date: 24-May-2016 - 24-May-2016
Location: Portorož, Slovenia 
Contact Person: Georg Rehm
Meeting Email: georg.rehm at dfki.de
Web Site: http://www.cracking-the-language-barrier.eu/mt-eval-workshop-2016/ 

Linguistic Field(s): Computational Linguistics; Text/Corpus Linguistics; Translation 

Call Deadline: 15-Feb-2016 

Meeting Description:

Current approaches to evaluating machine translation (MT) and human translation (HT), both automatic and manual, are characterised by a high degree of fragmentation, heterogeneity, and a lack of interoperability between tools and data sets. As a consequence, evaluation results are difficult to reproduce, interpret, and compare. The main objective of this workshop is to bring together researchers working on MT and HT evaluation, providers and users of evaluation tools and approaches (including metrics and methodologies), and practitioners (translators, users of MT, language service providers, etc.).

This workshop takes an in-depth look at an area of ever-increasing importance: approaches, tools and data support for the evaluation of HT and MT, with a focus on MT. Two clear trends have emerged over the past several years. The first is the standardisation of evaluation in research through large shared tasks, in which candidate translations are compared to reference translations using automatic metrics and/or human ranking. The second focuses on achieving high-quality translations with the help of increasingly complex data sets that contain many levels of annotation based on sophisticated quality metrics, often organised in the context of smaller shared tasks. In industry, we also observe increased interest in workflows for high-quality outbound translation that combine Translation Memory (TM)/MT and post-editing. In stark contrast to this trend towards quality translation (QT) and its inherent complexity, the data and tooling landscapes remain heterogeneous, uncoordinated, and non-interoperable.
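
As a concrete illustration of the first trend, the following minimal Python sketch computes a simplified BLEU-style score (clipped n-gram precision combined with a brevity penalty) for one candidate translation against one reference. It is an illustrative toy, not the official metric of any shared task: it assumes whitespace tokenisation, a single reference, and no smoothing, and all function names are chosen for this example.

# Illustrative sketch of comparing a candidate translation to a
# reference translation with an automatic metric. This is a
# simplified, hypothetical BLEU-style score, not the implementation
# used by any particular evaluation campaign.
import math
from collections import Counter


def ngrams(tokens, n):
    """Return a Counter of all n-grams of length n in the token list."""
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))


def bleu_like(candidate, reference, max_n=4):
    """Geometric mean of clipped 1..max_n-gram precisions, times a brevity penalty."""
    cand, ref = candidate.split(), reference.split()
    log_precisions = []
    for n in range(1, max_n + 1):
        cand_ngrams, ref_ngrams = ngrams(cand, n), ngrams(ref, n)
        total = sum(cand_ngrams.values())
        if total == 0:
            return 0.0  # candidate shorter than n tokens
        # Clip each n-gram count by its count in the reference.
        matched = sum(min(c, ref_ngrams[g]) for g, c in cand_ngrams.items())
        if matched == 0:
            return 0.0  # no smoothing in this sketch
        log_precisions.append(math.log(matched / total))
    # The brevity penalty discourages very short candidates.
    bp = 1.0 if len(cand) > len(ref) else math.exp(1 - len(ref) / max(len(cand), 1))
    return bp * math.exp(sum(log_precisions) / max_n)


print(bleu_like("the cat sat on the mat", "the cat sat on a mat"))  # ~0.54

Shared tasks typically aggregate such scores over entire test sets and complement them with human ranking, precisely because any single automatic metric captures only part of translation quality.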

The event will bring together MT and HT researchers, users and providers of tools, and users and providers of the manual and automatic evaluation methodologies currently used to evaluate HT and MT systems. The key objective of the workshop is to initiate a dialogue and to discuss whether the current approach, involving a diverse and heterogeneous set of data, tools, and evaluation methodologies, is adequate, or whether the community should instead collaborate on an integrated ecosystem that provides better, more sustainable access to data sets, evaluation workflows, approaches, and metrics, together with supporting processes such as annotation and ranking.

The workshop is meant to stimulate a dialogue about the commonalities, similarities, and differences of existing solutions in three areas: (1) tools, (2) methodologies, and (3) data sets. A key tension is that today's heterogeneous approaches offer great flexibility but little interoperability, whereas a homogeneous approach would offer less flexibility but greater interoperability. How much flexibility and interoperability does the MT/HT research community need? How much does it want?

Call for Papers: 

Topics of interest include but are not limited to:

- MT/HT evaluation methodologies (incl. scoring mechanisms, integrated metrics)
- Benchmarks for MT evaluation
- Data and annotation formats for the evaluation of MT/HT
- Workbenches, tools, technologies for the evaluation of MT/HT (incl. specialised workflows)
- Integration of MT/TM and terminology in industrial evaluation scenarios
- Evaluation ecosystems
- Annotation concepts such as MQM and DQF and their implementation in MT evaluation processes (a simplified illustration follows this list)
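
For readers unfamiliar with annotation-based quality evaluation, here is a deliberately simplified, hypothetical sketch of an MQM/DQF-style error annotation and a penalty-based segment score. The category names, severity weights, and scoring formula below are assumptions chosen for illustration; the actual MQM and DQF specifications define their own issue hierarchies, weights, and scoring.

# Hypothetical, simplified MQM-style annotation record and score.
# Real MQM/DQF define their own category hierarchies, severity
# weights, and scoring formulas; everything below is illustrative.
from dataclasses import dataclass

SEVERITY_WEIGHTS = {"minor": 1.0, "major": 5.0, "critical": 10.0}  # assumed weights


@dataclass
class ErrorAnnotation:
    segment_id: int
    start: int          # character offset where the error span begins
    end: int            # character offset where the error span ends
    issue_type: str     # e.g. "accuracy/mistranslation", "fluency/grammar"
    severity: str       # "minor" | "major" | "critical"


def quality_score(annotations, word_count):
    """Penalty-based quality score: 100 minus weighted errors per 100 words."""
    penalty = sum(SEVERITY_WEIGHTS[a.severity] for a in annotations)
    return 100.0 - 100.0 * penalty / word_count


annotations = [
    ErrorAnnotation(1, 10, 17, "accuracy/mistranslation", "major"),
    ErrorAnnotation(1, 42, 45, "fluency/grammar", "minor"),
]
print(quality_score(annotations, word_count=250))  # 100 - 100*6/250 = 97.6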

We invite contributions on the topics mentioned above and any related topics of interest.

Important Dates:

- Publication of the call for papers: 10 December 2015
- Submissions due: 15 February 2016
- Notification of acceptance: 1 March 2016
- Final version of accepted papers: 31 March 2016
- Final programme and online proceedings: 15 April 2016
- Workshop: 24 May 2016 (this event will be a full-day workshop)

Submission:

Please submit your papers at https://www.softconf.com/lrec2016/MTEVAL/ before the deadline of 15 February 2016. 

http://www.cracking-the-language-barrier.eu/mt-eval-workshop-2016/

This workshop is a joint activity of the EU projects QT21 and CRACKER.




------------------------------------------------------------------------------
