
LINGUIST List: Vol-31-2341. Wed Jul 22 2020. ISSN: 1069-4875.

Subject: 31.2341, Calls: Comp Ling, Text/Corpus Ling/Ireland

Moderator: Malgorzata E. Cavar (linguist at linguistlist.org)
Student Moderator: Jeremy Coburn
Managing Editor: Becca Morris
Team: Helen Aristar-Dry, Everett Green, Sarah Robinson, Lauren Perkins, Nils Hjortnaes, Yiwen Zhang, Joshua Sims
Jobs: jobs at linguistlist.org | Conferences: callconf at linguistlist.org | Pubs: pubs at linguistlist.org

Homepage: http://linguistlist.org

Please support the LL editors and operation with a donation at:
           https://funddrive.linguistlist.org/donate/

Editor for this issue: Lauren Perkins <lauren at linguistlist.org>
================================================================


Date: Wed, 22 Jul 2020 12:03:42
From: Emiel van Miltenburg [c.w.j.vanmiltenburg at tilburguniversity.edu]
Subject: Workshop on Evaluating NLG Evaluation

 
Full Title: Workshop on Evaluating NLG Evaluation 
Short Title: EvalNLGEval 

Date: 18-Dec-2020 - 18-Dec-2020
Location: Dublin, Ireland 
Contact Person: Emiel van Miltenburg
Meeting Email: evalnlg.inlg at gmail.com
Web Site: https://evalnlg-workshop.github.io/ 

Linguistic Field(s): Computational Linguistics; Text/Corpus Linguistics 

Call Deadline: 20-Sep-2020 

Meeting Description:

This workshop is intended as a platform for discussing the status and future of
the evaluation of Natural Language Generation (NLG) systems. Among other
topics, we will discuss the quality of current evaluations, human versus
automated metrics, and the development of shared tasks for NLG evaluation. The
workshop also features an 'unshared task', where participants are invited to
experiment with evaluation data from earlier shared tasks.


Call for papers: 

Important Dates:
Call for workshop papers or abstracts - July 20, 2020
Submissions due - September 20, 2020
Notification of acceptance - October 20, 2020
Camera ready papers due - November 20, 2020
Workshop: December 18, 2020

Papers:
We welcome a range of papers, from commentary and meta-evaluation of existing
evaluation strategies to proposals for new metrics. We place particular
emphasis on the methodological and linguistic aspects of evaluation. We invite
papers on any topic related to the evaluation of NLG systems, including (but
not limited to):
- Qualitative studies, definitions of evaluation metrics (e.g., readability,
fluency, semantic correctness)
- Crowdsourcing strategies, qualitative tests for crowdsourcing (How to
elucidate evaluation metrics?)
- Individual differences and cognitive biases in human evaluation (expert vs.
non-expert, L1 vs. L2 speakers)
- Best practices for system evaluations (How does your lab choose models?)
- Qualitative study/error analysis approaches
- Demos: systems that make evaluation easier
- Comparison of metrics across different NLG tasks (captioning, data2text,
story generation, summarization…) or different languages (with a focus on
low-resource languages)
- Evaluation surveys
- Position papers and commentary on trends in evaluation

We encourage the submission of "task proposals", where authors can propose
shared tasks for next year's edition of the workshop.

Unshared Task:
This year's edition also features an unshared task: rather than working
towards a fixed goal, we encourage participants to use a common collection of
datasets for any evaluation-related purpose. For example:
comparing a new evaluation method with existing ratings, or carrying out a
subset analysis. This allows us to put the results from previous shared tasks
in perspective, and helps us develop better evaluation metrics for future
shared tasks. Working on the same datasets allows for more focused discussions
at the workshop.

The datasets for this year's edition are existing datasets that include system
outputs and human ratings. Participants may use any of the following for their
unshared task submission:
- E2E NLG Challenge (http://www.macs.hw.ac.uk/InteractionLab/E2E/)
- WebNLG Challenge 2017 (https://webnlg-challenge.loria.fr/challenge_2017/)
- Surface Realization Shared Task (SRST) 2019 (http://taln.upf.edu/pages/msr2019-ws/SRST.html)
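
As an illustration of one possible unshared-task analysis, here is a minimal
sketch that checks how similarly an automatic metric and human judges rank
system outputs, using Spearman correlation. The file name "ratings.csv" and
its column names are hypothetical placeholders, not the official data format
of any of the datasets above, which each require their own loading code.

# Minimal sketch: correlate a hypothetical automatic metric with human ratings.
# "ratings.csv" and its columns are placeholders, one row per system output.
import csv
from scipy.stats import spearmanr

human, metric = [], []
with open("ratings.csv", newline="", encoding="utf-8") as f:
    for row in csv.DictReader(f):
        human.append(float(row["human_rating"]))   # hypothetical column
        metric.append(float(row["metric_score"]))  # hypothetical column

# Spearman's rho: how well does the metric reproduce the human ranking?
rho, p = spearmanr(metric, human)
print(f"Spearman rho = {rho:.3f} (p = {p:.4f}, n = {len(human)})")

Whether to correlate at the level of individual outputs or of whole systems,
and which correlation statistic to report, are themselves open methodological
questions of the kind the workshop invites.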

Submission Formats: 
 - Archival papers (up to 8 pages excluding references; shorter submissions
are also welcome)
 - Non-archival abstracts (1-2 pages) of papers on relevant topics that have
been accepted elsewhere or are under submission at the main INLG 2020 conference
 - Demo papers (1-2 pages)
Please visit https://www.inlg2020.org/papers for submission instructions.




------------------------------------------------------------------------------



----------------------------------------------------------
LINGUIST List: Vol-31-2341	
----------------------------------------------------------





