21.103, Calls: Computational Ling, Text/Corpus Ling, Pragmatics/Malta

linguist at LINGUISTLIST.ORG
Fri Jan 8 17:14:45 UTC 2010


LINGUIST List: Vol-21-103. Fri Jan 08 2010. ISSN: 1068-4875.

Subject: 21.103, Calls: Computational Ling, Text/Corpus Ling, Pragmatics/Malta

Moderators: Anthony Aristar, Eastern Michigan U <aristar at linguistlist.org>
            Helen Aristar-Dry, Eastern Michigan U <hdry at linguistlist.org>
 
Reviews: Monica Macaulay, U of Wisconsin-Madison  
Eric Raimy, U of Wisconsin-Madison  
Joseph Salmons, U of Wisconsin-Madison  
Anja Wanner, U of Wisconsin-Madison  
       <reviews at linguistlist.org> 

Homepage: http://linguistlist.org/

The LINGUIST List is funded by Eastern Michigan University, 
and donations from subscribers and publishers.

Editor for this issue: Kate Wu <kate at linguistlist.org>
================================================================  

LINGUIST is pleased to announce the launch of an exciting new feature:  
Easy Abstracts! Easy Abs is a free abstract submission and review facility 
designed to help conference organizers and reviewers accept and process 
abstracts online.  Just go to: http://www.linguistlist.org/confcustom, 
and begin your conference customization process today! With Easy Abstracts, 
submission and review will be as easy as 1-2-3!

===========================Directory==============================  

1)
Date: 05-Jan-2010
From: Patrizia Paggio <paggio at hum.ku.dk>
Subject: Workshop on Multimodal Corpora

-------------------------Message 1 ---------------------------------- 
Date: Fri, 08 Jan 2010 12:12:02
From: Patrizia Paggio [paggio at hum.ku.dk]
Subject: Workshop on Multimodal Corpora

Full Title: Workshop on Multimodal Corpora 

Date: 18-May-2010 - 18-May-2010
Location: Valletta, Malta 
Contact Person: Michael Kipp
Meeting Email: mich.kipp at googlemail.com
Web Site: http://www.multimodal-corpora.org 

Linguistic Field(s): Computational Linguistics; Pragmatics; Text/Corpus Linguistics 

Call Deadline: 12-Feb-2010 

Meeting Description:

LREC 2010 Workshop on Multimodal Corpora: Advances in Capturing, Coding and
Analyzing Multimodality 

1st Call for Papers
18 May 2010, Valletta, Malta
http://www.multimodal-corpora.org

A "Multimodal Corpus" involves the recording, annotation and analysis of several
communication modalities such as speech, hand gesture, facial expression, body
posture, etc. As many research areas are moving from focused but single modality
research to fully-fledged multimodality research, multimodal corpora are
becoming a core research asset and an opportunity for interdisciplinary exchange
of ideas, concepts and data. 

This workshop follows similar events held at LREC 2000, 2002, 2004, 2006 and
2008. Interest in multimodal communication and multimodal corpora is
increasing, as evidenced by European Networks of Excellence and integrated
projects such as HUMAINE, SIMILAR, CHIL, AMI, CALLAS and SSPNet. Furthermore,
the success of recent conferences and workshops dedicated to multimodal
communication (ICMI-MLMI, IVA, Gesture, PIT, the Nordic Symposium on
Multimodal Communication, Embodied Language Processing) and the creation of
the Journal of Multimodal User Interfaces testify to the growing interest in
this area and to the general need for data on multimodal behaviours.

The 2010 full-day workshop is intended to result in a significant follow-up
publication, similar to previous post-workshop publications such as the 2008
special issue of the journal Language Resources and Evaluation and the 2009
state-of-the-art book published by Springer.

Aims
In 2010, we are aiming for a wide cross-section of the field, with
contributions on collection efforts, coding, validation and analysis methods,
as well as actual tools and applications of multimodal corpora. In particular,
we want to emphasize the significant advances in capture technology that now
make highly accurate data available to the broader research community.
Examples include the tracking of face, gaze, hands and body, and the recording
of articulated full-body motion using motion capture. These data are much more
accurate and complete than the simple videos traditionally used in the field
and will therefore have a lasting impact on multimodality research. At the
same time, the richness of the signals and the complexity of the recording
process urgently call for an exchange of state-of-the-art information on
recording and coding practices, new visualization and coding tools, and
advances in the automatic coding and analysis of corpora.

Topics
This LREC 2010 workshop on multimodal corpora will feature a special session on
databases of motion capture, trackers, inertial sensors, biometric devices and
image processing. Other topics to be addressed include, but are not limited to:  
- Multimodal corpus collection activities (e.g. direction-giving dialogues,
emotional behaviour, human-avatar interaction, human-robot interaction, etc.)
and descriptions of existing multimodal resources
- Relations between modalities in natural (human) interaction and in
human-computer interaction
- Multimodal interaction in specific scenarios, e.g. group interaction in meetings
- Coding schemes for the annotation of multimodal corpora
- Evaluation and validation of multimodal annotations
- Methods, tools, and best practices for the acquisition, creation, management,
access, distribution, and use of multimedia and multimodal corpora
- Interoperability between multimodal annotation tools (exchange formats,
conversion tools, standardization)
- Collaborative coding
- Metadata descriptions of multimodal corpora
- Automatic annotation, based e.g. on motion capture or image processing, and
its integration with manual annotations
- Corpus-based design of multimodal and multimedia systems, in particular
systems that involve human-like modalities in input (Virtual Reality, motion
capture, etc.), in output (virtual characters), or both
- Automated multimodal fusion and/or generation (e.g., coordinated speech, gaze,
gesture, facial expressions)
- Machine learning applied to multimodal data
- Multimodal dialogue modelling

Important Dates
- Deadline for paper submission (complete paper):  12 February 2010
- Notification of acceptance: 10 March
- Final version of accepted paper: 26 March
- Final program: 7 April
- Final proceedings: 14 April
- Workshop: 18 May 

Submissions
The workshop will consist primarily of paper presentations and
discussion/working sessions. Submissions must be in English, should be four
pages long, and should follow the submission guidelines available at
http://multimodal-corpora.org/mmc10.html

Submit your paper here: https://www.softconf.com/lrec2010/MMC2010

Demonstrations of multimodal corpora and related tools are also encouraged (a
two-page demonstration outline can be submitted).

LREC-2010 Map of Language Resources, Technologies and Evaluation

When submitting a paper through the START page, authors will be asked to
provide relevant information about the resources that were used for the work
described in their paper or that are an outcome of their research. For further
information on this new initiative, please refer to
http://www.lrec-conf.org/lrec2010/?LREC2010-Map-of-Language-Resources

Organising Committee
Michael Kipp, DFKI, Germany
Jean-Claude Martin, LIMSI-CNRS, France
Patrizia Paggio, University of Copenhagen, Denmark
Dirk Heylen, University of Twente, The Netherlands





-----------------------------------------------------------
LINGUIST List: Vol-21-103