Conf: Workshop "Multimodal Corpora", LREC 2008
Thierry Hamon
thierry.hamon at LIPN.UNIV-PARIS13.FR
Mon May 5 10:47:01 UTC 2008
Date: Thu, 24 Apr 2008 10:18:21 +0200
From: Jean-Claude MARTIN <martin at limsi.fr>
Message-ID: <4810424D.3000202 at limsi.fr>
X-url: http://www.lrec-conf.org/lrec2008/
**************************************************************
Call For Participation
International Workshop on
MULTIMODAL CORPORA:
From Models of Natural Interaction to Systems and Applications
Tuesday, 27 May 2008
Full day workshop
Marrakech (Morocco)
http://www.lrec-conf.org/lrec2008/
***************************************************************
In Association with LREC2008
(the 6th International Conference on Language Resources and Evaluation)
http://www.lrec-conf.org/lrec2008/
Main conference: 28-29-30 May 2008
Palais des Congrès Mansour Eddahbi
Marrakech (Morocco)
-------------------------
DESCRIPTION
-------------------------
A 'multimodal corpus' involves the recording and annotation of several
communication modalities such as speech, hand gesture, facial
expression, body posture, etc. The workshop also addresses theoretical
issues, given their importance to the design of multimodal corpora.
This workshop continues the successful series of similar workshops held
at LREC 2000, 2002, 2004 and 2006, also documented in a special issue
of the Journal of Language Resources and Evaluation due to appear in
spring 2008. Interest in multimodal communication and multimodal
corpora is growing, as evidenced by European Networks of Excellence and
integrated projects such as HUMAINE, SIMILAR, CHIL, AMI and
CALLAS. Furthermore, the success of recent conferences and workshops
dedicated to multimodal communication (ICMI, IVA, Gesture, PIT, the
Nordic Symposia on Multimodal Communication, Embodied Language
Processing) and the creation of the Journal of Multimodal User
Interfaces also testify to the growing interest in this area and to the
general need for data on multimodal behaviours.
The focus of this LREC'2008 workshop on multimodal corpora will be on
models of natural interaction and their contribution to the design of
multimodal systems and applications.
Topics to be addressed include, but are not limited to:
- Multimodal corpus collection activities (e.g. direction-giving
dialogues, emotional behaviour, human-avatar interaction,
human-robot interaction, etc.)
- Relations between modalities in natural (human) interaction and in
human-computer interaction
- Application of multimodal corpora to the design of multimodal and
multimedia systems
- Fully or semi-automatic multimodal annotation, using e.g. motion
capture and image processing, and its integration with manual
annotations
- Corpus-based design of systems that involve human-like modalities in
  input (Virtual Reality, motion capture, etc.) and/or output
  (virtual characters)
- Multimodal interaction in specific scenarios, e.g. group interaction
in meetings
- Coding schemes for the annotation of multimodal corpora
- Evaluation and validation of multimodal annotations
- Methods, tools, and best practices for the acquisition, creation,
management, access, distribution, and use of multimedia and
multimodal corpora
- Interoperability between multimodal annotation tools (exchange
formats, conversion tools, standardization)
- Metadata descriptions of multimodal corpora
- Automated multimodal fusion and/or generation (e.g., coordinated
speech, gaze, gesture, facial expressions)
- Analysis methods tailored to multimodal corpora using
e.g. statistical measures or data mining.
We expect the outcomes of this workshop to be:
1) a deeper understanding of the theoretical issues and research
questions related to verbal and non-verbal communication that
multimodal corpora should address,
2) a broader consensus on how such corpora should be built in order to
provide useful and usable answers to research questions,
3) shared knowledge of how corpora contribute to multimodal and
multimedia system design, and
4) an updated view of state-of-the-art research on multimodal corpora.
--------------------------------------------------
TIME SCHEDULE AND REGISTRATION FEE
--------------------------------------------------
The workshop will consist of a morning session and an afternoon
session. There will be time for collective discussions.
For this full-day workshop, registration is possible on site or via
the LREC web site: http://www.lrec-conf.org/lrec2008/
--------------------------------------------------
ORGANISING COMMITTEE
--------------------------------------------------
MARTIN Jean-Claude, LIMSI-CNRS, France
PAGGIO Patrizia, Univ. of Copenhagen, Denmark
KIPP Michael, DFKI, Saarbrücken, Germany
HEYLEN Dirk, Univ. Twente, The Netherlands
--------------------------------------------------
PROGRAMME COMMITTEE
--------------------------------------------------
Jan Alexandersson, D
Jens Allwood, SE
Elisabeth Ahlsén, SE
Elisabeth André, D
Gerard Bailly, F
Stéphanie Buisine, F
Susanne Burger, USA
Loredana Cerrato, SE
Piero Cosi, I
Morena Danieli, I
Nicolas Ech Chafai, F
John Glauert, UK
Kostas Karpouzis, GR
Alfred Kranstedt, D
Peter Kuehnlein, NL
Daniel Loehr, USA
Maurizio Mancini, F
Costanza Navarretta, DK
Catherine Pelachaud, F
Fabio Pianesi, I
Isabella Poggi, I
Laurent Romary, D
Ielka van der Sluis, UK
Rainer Stiefelhagen, D
Peter Wittenburg, NL
Massimo Zancanaro, I
--------------------------------------------------
WORKSHOP PROGRAMME
--------------------------------------------------
9.00 Welcome
SESSION "MULTIMODAL EXPRESSION OF EMOTION"
9.15
Annotation of Cooperation and Emotions in Map Task Dialogues
(Federica Cavicchio and Massimo Poesio)
9.45
Double Level Analysis of the Multimodal Expressions of Emotions in
Human-machine Interaction
(Jean-Marc Colletta, Ramona Kunene, Aurélie Venouil and Anna
Tcherkassof)
10.15 - 10.45 coffee break
SESSION "MULTIMODALITY AND CONVERSATION"
10.45
Multimodality in Conversation Analysis: a Case of Greek TV Interviews
(Maria Koutsombogera, Lida Touribaba and Harris Papageorgiou)
11.15
The MUSCLE Movie Database: A Multimodal Corpus with Rich Annotation
for Dialogue and Saliency Detection
(Dimitrios Spachos, Athanasia Zlatintsi, Vassiliki Moschou,
Panagiotis Antonopoulos, Emmanouil Benetos, Margarita Kotti, Katerina
Tzimouli, Constantine Kotropoulos, Nikos Nikolaidis, Petros Maragos
and Ioannis Pitas)
SESSION "MULTIMODAL ANALYSIS OF ACTIVITIES"
11.45
A Multimodal Data Collection of Daily Activities in a Real
Instrumented Apartment
(Alessandro Cappelletti, Bruno Lepri, Nadia Mana, Fabio Pianesi and
Massimo Zancanaro)
12.15
Unsupervised Clustering in Multimodal Multiparty Meeting Analysis
(Yosuke Matsusaka, Yasuhiro Katagiri, Masato Ishizaki and Mika
Enomoto)
12.45 Discussion
13.00 - 14.30 LUNCH
SESSION "INDIVIDUAL DIFFERENCES IN MULTIMODAL BEHAVIORS"
14.30
Multimodal Intercultural Information and Communication Technology:
A Conceptual Framework for Designing and Evaluating Multimodal
Intercultural ICT
(Jens Allwood and Elisabeth Ahlsén)
15.00
Multitrack Annotation of Child Language and Gestures
(Jean-Marc Colletta, Aurélie Venouil, Ramona Kunene, Virginie Kaufmann
and Jean-Pascal Simon)
15.30
The Persuasive Impact of Gesture and Gaze
(Isabella Poggi and Laura Vincze)
16.00-16.30 coffee break
16.30
On the Contextual Analysis of Agreement Scores
(Dennis Reidsma, Dirk Heylen and Rieks Op den Akker)
SESSION "PROCESSING AND INDEXING OF MULTIMODAL CORPORA"
17.00
Dutch Multimodal Corpus for Speech Recognition
(Alin G. Chitu and Leon J.M. Rothkrantz)
17.30
Multimodal Data Collection in The AMASS++ project
(Scott Martens, Jan Hendrik Becker, Tinne Tuytelaars and
Marie-Francine Moens)
18.00
The Nottingham Multi-modal Corpus: A Demonstration
(Dawn Knight, Svenja Adolphs, Paul Tennent and Ronald Carter)
18.15
Analysing Interaction: A Comparison of 2D and 3D techniques
(Stuart A. Battersby, Mary Lavelle, Patrick G.T. Healey and Rosemarie
McCabe)
18.30
Discussion
19.00 END OF WORKSHOP
(Followed by an informal dinner)
-------------------------------------------------------------------------
Message distributed via the Langage Naturel list <LN at cines.fr>
Information, subscription: http://www.atala.org/article.php3?id_article=48
Archives: http://listserv.linguistlist.org/archives/ln.html
          http://liste.cines.fr/info/ln
The LN list is sponsored by ATALA (Association pour le Traitement
Automatique des Langues)
Information and membership: http://www.atala.org/
-------------------------------------------------------------------------