[Corpora-List] LREC 2004 Workshop on Multimodal Corpora : deadline extended to JANUARY 31st
Jean-Claude MARTIN
Jean-Claude.Martin at limsi.fr
Fri Jan 16 07:54:44 UTC 2004
Due to a number of requests, we have extended the deadline by one week,
to JANUARY 31st, 2004!
____________________________________________________________________
This message is posted to several lists.
We apologize if you receive multiple copies.
Please forward it to everyone who might be interested.
_____________________________________________________________________
**********************************************
SECOND AND FINAL CALL FOR PAPERS
Workshop on
MULTIMODAL CORPORA:
MODELS OF HUMAN BEHAVIOUR
FOR THE SPECIFICATION AND EVALUATION
OF MULTIMODAL INPUT AND OUTPUT INTERFACES
http://lubitsch.lili.uni-bielefeld.de/MMCORPORA/
Centro Cultural de Belem, LISBON, Portugal, 25th May 2004
**********************************************
In Association with
4th INTERNATIONAL CONFERENCE ON LANGUAGE RESOURCES AND EVALUATION
LREC2004 http://www.lrec-conf.org/lrec2004/index.php
Main conference: 26-28 May 2004
MOTIVATIONS
-------------------------
The primary purpose of this one-day workshop is to share information
and engage in collective planning for the future creation of usable
pluridisciplinary multimodal resources.
It will focus on the following issues regarding multimodal corpora:
how researchers build models of human behaviour from the annotations
of video corpora,
how they use such knowledge for the specification of multimodal input
(e.g. merging users' gestures and speech)
and output (e.g. specification of believable and emotional behaviour in
Embodied Conversational Agents) in human-computer interfaces,
and finally how they evaluate multimodal systems (e.g. full-system
evaluation and glass-box evaluation of individual system components).
Topics to be addressed in the workshop include, but are not limited to:
* Models of human multimodal behaviour in various disciplines
* Integrating different sources of knowledge (literature in
socio-linguistics, corpus annotation)
* Specifications of coding schemes for annotation of multimodal video
corpora
* Parallel multimodal corpora for different languages
* Methods, tools, and best practice procedures for the acquisition,
creation, management, access, distribution, and use of multimedia and
multimodal corpora
* Methods for the extraction and acquisition of knowledge (e.g. lexical
information, modality modelling) from multimedia and multimodal corpora
* Ontological aspects of the creation and use of multimodal corpora
* Machine learning for and from multimedia (i.e., text, audio, video),
multimodal (visual, auditory, tactile), and multicodal (language,
graphics, gesture) communication
* Exploitation of multimodal corpora in different types of applications
(information extraction, information retrieval, meeting transcription,
multisensorial interfaces,
translation, summarisation, www services, etc.)
* Multimedia and multimodal metadata descriptions of corpora
* Applications enabled by multimedia and multimodal corpora
* Benchmarking of systems and products; use of multimodal corpora for
the evaluation of real systems
* Processing and evaluation of mixed spoken, typed, and cursive (e.g.,
pen) language input
* Automated multimodal fusion and/or generation (e.g., coordinated
speech, gaze, gesture, facial expressions)
* Techniques for combining objective and subjective evaluations, and
for making evaluations cost-effective, predictive and fast
The expected outputs of the workshop are the following:
* Better knowledge of the potential of major models of human multimodal
behaviour
* Identification of challenging issues in the usability of multimodal
corpora
* Fostering of a pluridisciplinary community of multimodal researchers
and multimodal interface developers
RATIONALE
-------------------------
Multimodal resources feature the recording and annotation of several
communication modalities such as speech, hand gesture, facial
expression, body posture and graphics.
Researchers have been developing such multimodal resources for several
years, often with a focus on a limited set of modalities or on a given
application domain.
A number of projects, initiatives and organisations have addressed
multimodal resources with a federative approach:
* At LREC2002, a workshop addressed the issue of "Multimodal
Resources and Multimodal Systems Evaluation"
http://www.limsi.fr/Individu/martin/wslrec2002/MMWorkshopReport.doc
* At LREC2000, a first workshop addressed the issue of multimodal
corpora, focusing on meta-descriptions and large corpora
http://www.mpi.nl/world/ISLE/events/LREC%202000/LREC2000.htm
* The European 6th Framework Programme (FP6), started in 2003, includes
multilingual and multisensorial communication as one of the major R&D
issues, and the evaluation of technologies appears as a specific item
in the Integrated Project instrument presentation
http://www.cordis.lu/ist/so/interfaces/home.html
* NIMM was a working group on Natural Interaction and MultiModality,
which ran under the IST-ISLE project
(http://isle.nis.sdu.dk/). In 2001, NIMM compiled a survey of existing
multimodal resources
(more than 60 corpora are described in the survey), coding schemes and
annotation tools.
The ISLE project was developed both in Europe and in the USA
(http://www.ldc.upenn.edu/sb/isle.html)
* ELRA (European Language Resources Association) launched in
November 2001 a survey about multimodal corpora, including marketing
aspects (http://www.icp.inpg.fr/ELRA/).
* A Working Group at the Dagstuhl Seminar on Multimodality collected,
in November 2001, 28 questionnaires from researchers on multimodality,
21 of whom announced their intention to record further multimodal
corpora in the future.
(http://www.dfki.de/~wahlster/Dagstuhl_Multi_Modality/)
* Other surveys of multimodal annotation coding schemes and tools have
recently been carried out (COCOSDA, LDC, MITRE).
Yet, until now, annotations of multimodal corpora have mostly been
made on an individual basis, with each researcher or team focusing on
its own needs and knowledge of modality-specific coding schemes or
application examples.
There is thus a lack of real common knowledge and understanding of how
to proceed from annotations to usable models of human multimodal
behaviour, and of how to use such knowledge for the design and
evaluation of multimodal input and embodied conversational agent
interfaces.
Furthermore, the evaluation of multimodal interaction poses different
(and very complex) problems from the evaluation of monomodal speech
interfaces or WYSIWYG direct interaction interfaces.
A number of recently finished and ongoing projects in the field of
multimodal interaction have attempted to evaluate the quality of their
interfaces in all the senses that can be attached to the term 'quality'.
There is a widely felt need in the field to exchange information on
multimodal interaction evaluation with researchers in other projects.
One of the major outcomes of this workshop should be a better
understanding of the extent to which evaluation procedures developed
in one project generalise to other, somewhat related projects.
IMPORTANT DATES
-------------------------
* 31 January 2004: Deadline for paper submission
* 29 February 2004: Acceptance notifications and preliminary program
* 21 March 2004: Deadline for final versions of accepted papers
* 25 May 2004: Workshop
SUBMISSIONS
---------------
The workshop will consist primarily of paper presentations and
discussion/working sessions.
Submissions should be 4 pages long, must be in English, and follow the
submission guidelines at http://lubitsch.lili.uni-bielefeld.de/MMCORPORA
Demonstrations of multimodal corpora and related tools are encouraged as
well (a demonstration outline of 2 pages can be submitted).
Authors are encouraged to send, as soon as possible, a brief email to
lrec at limsi.u-psud.fr
indicating their intention to participate, including their contact
information and the topic they intend to address in their submission.
Proceedings of the workshop will be printed by the LREC Local Organising
Committee.
The organisers may consider a special issue of a suitable journal for
selected publications from the workshop.
TIME SCHEDULE AND REGISTRATION FEE
--------------------------------------------------
The workshop will consist of a morning session and an afternoon
session, with a focus on the use of multimodal corpora for building
models of human behaviour and for specifying and evaluating multimodal
input and output human-computer interfaces.
There will also be time slots for collective discussion, and a coffee
break in both the morning and the afternoon.
For this full-day Workshop, the registration fee is 100 EURO for LREC
Conference participants
and 170 EURO for other participants. These fees will include coffee
breaks and the Proceedings of the Workshop.
ORGANISING COMMITTEE
--------------------------------------------------
Jean-Claude MARTIN, LIMSI-CNRS, martin at limsi.u-psud.fr
Elisabeth Den OS, MPI, Els.denOs at mpi.nl
Peter KÜHNLEIN, Univ. Bielefeld, p at uni-bielefeld.de
Lou BOVES, L.Boves at let.kun.nl
Patrizia PAGGIO, CST, patrizia at cst.dk
Roberta CATIZONE, Sheffield, roberta at dcs.shef.ac.uk
PRELIMINARY PROGRAM COMMITTEE
--------------------------------------------------
Elisabeth AHLSÉN
Jens ALLWOOD
Elisabeth ANDRE
Niels Ole BERNSEN
Lou BOVES
Stéphanie BUISINE
Roberta CATIZONE
Loredana CERRATO
Piero COSI
Elisabeth Den OS
Jan Peter DE RUITER
Laila DYBKJÆR
David HOROWITZ
Bart JONGEJAN
Alfred KRANSTEDT
Steven KRAUWER
Peter KÜHNLEIN
Knut KVALE
Myriam LAMOLLE
Jean-Claude MARTIN
Joseph MARIANI
Jan-Torsten MILDE
Sharon OVIATT
Patrizia PAGGIO
Catherine PELACHAUD
Janienke STURM
**********************************************