17.2289, Calls: Computational Linguistics

linguist at LINGUISTLIST.ORG
Thu Aug 10 17:00:54 UTC 2006


LINGUIST List: Vol-17-2289. Thu Aug 10 2006. ISSN: 1068-4875.

Subject: 17.2289, Calls: Computational Linguistics

Moderators: Anthony Aristar, Eastern Michigan U <aristar at linguistlist.org>
            Helen Aristar-Dry, Eastern Michigan U <hdry at linguistlist.org>
 
Reviews: Laura Welcher, Rosetta Project / Long Now Foundation  
         <reviews at linguistlist.org> 

Homepage: http://linguistlist.org/

The LINGUIST List is funded by Eastern Michigan University, Wayne
State University, and donations from subscribers and publishers.

Editor for this issue: Hannah Morales <hannah at linguistlist.org>
================================================================  

As a matter of policy, LINGUIST discourages the use of abbreviations
or acronyms in conference announcements unless they are explained in
the text.

To post to LINGUIST, use our convenient web form at 
http://linguistlist.org/LL/posttolinguist.html. 



===========================Directory==============================  

1)
Date: 09-Aug-2006
From: Patrick Paroubek <pap at limsi.fr>
Subject: Traitement Automatique des Langues 

	
-------------------------Message 1 ---------------------------------- 
Date: Thu, 10 Aug 2006 12:58:22
From: Patrick Paroubek <pap at limsi.fr>
Subject: Traitement Automatique des Langues 
 


Full Title: Traitement Automatique des Langues 


Linguistic Field(s): Computational Linguistics; General Linguistics 

Call Deadline: 20-Nov-2006 

Principles of Evaluation in Natural Language Processing.

Special Issue of the Journal 'Traitement Automatique des Langues' (TAL)

Deadline for submission: 20th November 2006

Guest Editors: Patrick Paroubek (LIMSI-CNRS)
(to be completed)

Preliminary Announcement

Judging by the sheer number of publications on the subject, it is clear
that evaluation has moved from being a merely controversial topic to
being an undeniable fact. In Natural Language Processing (NLP), evaluation
methods are now used throughout the system development life cycle, for
comparing different approaches to a given problem, and even during corpus
development and maintenance. The use of evaluation methods is no longer
restricted to a small community, as it was for some time; such practices
are now widely encountered in the processing of text, speech and even
multimedia data. In addition to LREC, an international conference devoted
to evaluation in NLP, France has a national program, 'Technolangue', which
crosses disciplinary boundaries: it extends into computer vision through
the 'Technovision' program, which includes evaluation activities on
handwriting recognition and ancient document processing.

For this special issue of TAL, we invite papers on the fundamental
principles that underlie the use of evaluation methods in NLP. We wish to
take a broader view, one that goes beyond the horizon of a single
evaluation campaign and considers more globally the problems raised by
the deployment of evaluation in NLP. Without any prejudice, we do not
wish this special issue to become yet another forum for articles
describing the participation of a system in a given evaluation campaign,
or comparing the pros and cons of two metrics for assessing performance
on a particular task. Our intent is to address more fundamental issues
concerning the use of evaluation in NLP.

Topics

Specific topics include (but are not limited to):

1) Corpora in the evaluation process: their use, their development life
cycle, and the synergy between a corpus and an evaluation campaign.

2) Which formalisms are appropriate for evaluation in NLP?

3) Comparative or quantitative evaluation for NLP?

4) Technology evaluation versus user/application-oriented evaluation:
which is more appropriate for NLP?

5) Evaluation implies having a reference against which to gauge
performance, but how is that reference defined in NLP? How should we deal
with the fact that it is often not unique (e.g. in machine translation;
see the sketch after this list)?

6) Evaluation as a source of new linguistic resources. What help does it
provide in maintaining existing ones?

7) Evaluation and scientific progress, e.g. large-scale evaluation
programs in NLP.

8) What role does evaluation play in the scientific process of NLP?

9) Some domains of NLP are reputed to be easier to evaluate than others
(parsing, semantics, translation): myth or reality?
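
To illustrate the multiple-reference problem raised in topic 5, here is a
minimal sketch, in Python, of the reference-clipping idea popularised by
BLEU (Papineni et al., 2002): each hypothesis token is counted only up to
the maximum number of times it occurs in any single reference. The
function name and the toy data are illustrative assumptions only.

    from collections import Counter

    def clipped_unigram_precision(hypothesis, references):
        """Unigram precision clipped against several references: each
        hypothesis token counts only up to the maximum number of times
        it occurs in any single reference (the BLEU clipping idea)."""
        hyp_counts = Counter(hypothesis)
        # For every token, keep the most generous count found in any reference.
        max_ref_counts = Counter()
        for ref in references:
            for token, count in Counter(ref).items():
                max_ref_counts[token] = max(max_ref_counts[token], count)
        clipped = sum(min(count, max_ref_counts[token])
                      for token, count in hyp_counts.items())
        return clipped / sum(hyp_counts.values())

    # Two equally valid references for the same source sentence.
    references = [["the", "cat", "sat", "on", "the", "mat"],
                  ["there", "is", "a", "cat", "on", "the", "mat"]]
    hypothesis = ["the", "cat", "is", "on", "the", "mat"]
    # -> 1.0: every hypothesis token is licensed by at least one reference.
    print(clipped_unigram_precision(hypothesis, references))

A single-reference metric would penalise this hypothesis against either
reference taken alone; pooling the references captures the fact that the
"correct" output is not unique.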

The Journal

(see http://www.atala.org/) 
The journal TAL (Traitement Automatique des Langues) is an international
journal published since 1960 by ATALA (Association pour le Traitement
Automatique des Langues) with the support of the CNRS. It is now becoming
available in electronic form, with print on demand. The reviewing and
selection process remains unchanged.

Language

Articles may be written in French or in English. Submissions in English
are accepted only from non-native speakers of French.

Important Dates

Submission Deadline: 20/11/2006
Acceptance Notification: 22/01/2007
Revised Final Version: 16/04/2007

Format

Articles (25 pages maximum, in PDF format) should be sent to Patrick
Paroubek (pap at limsi.fr).
Style sheets are available online at: http://tal.e-revues.com/appel.jsp




-----------------------------------------------------------
LINGUIST List: Vol-17-2289