<html>
<head>
<meta http-equiv="content-type" content="text/html; charset=UTF-8">
</head>
<body bgcolor="#FFFFFF" text="#000000">
[Apologies for cross-postings]<br>
<br>
<b>LREC 2012 Workshop <br>
Multimodal Corpora: How should multimodal corpora deal with the
situation?</b><br>
<br>
1st Call for Papers <br>
22 May 2012, Istanbul, Turkey <br>
<br>
<a class="moz-txt-link-freetext" href="http://www.multimodal-corpora.org/">http://www.multimodal-corpora.org/</a><br>
<br>
Currently, the creation of a multimodal corpus involves the
recording, annotation and analysis of a selection of many possible
communication modalities, such as speech, hand gesture, facial
expression, and body posture. At the same time, an increasing number
of research areas are moving from focused single-modality research
to full-fledged multimodality research. Multimodal corpora are
becoming a core research asset, and they provide an opportunity
for interdisciplinary exchange of ideas, concepts and data. The
growing interest in multimodal communication and multimodal corpora
is evidenced by European Networks of Excellence and integrated
projects such as HUMAINE, SIMILAR, CHIL, AMI, CALLAS and SSPNet; by
the success of recent conferences and workshops dedicated to
multimodal communication (ICMI-MLMI, IVA, Gesture, PIT, Nordic
Symposium on Multimodal Communication, Embodied Language
Processing); and by the creation of the Journal of Multimodal User
Interfaces. All of this testifies to the growing interest in this
area and to the general need for data on multimodal behaviours.<br>
In 2012, the 8th Workshop on Multimodal Corpora will again be
co-located with LREC. This year, LREC has selected Speech and
Multimodal Resources as its special topic, which underlines the
significance of the workshop's general scope. Since the main
conference's special topic largely covers the workshop's broad
scope, we have a unique opportunity to step outside these boundaries
and look further into the future.<br>
The workshop follows similar events held at LREC 00, 02, 04, 06, 08,
10, and ICMI 11. All workshops are documented at
<a href="http://www.multimodal-corpora.org">www.multimodal-corpora.org</a> and complemented by a special issue of
the Journal of Language Resources and Evaluation published in 2008
and a state-of-the-art book published by Springer in 2009.<br>
<br>
<b>Aims</b><br>
As always, we aim for a wide cross-section of the field, with
contributions ranging from collection efforts, coding, validation
and analysis methods, to tools and applications of multimodal
corpora. This year, however, we also want to look ahead and
emphasize the fact that a growing segment of research takes a view
of spoken language as situated action, where linguistic and
non-linguistic actions are intertwined with the dynamic conditions
given by the situation and the place in which the actions occur. In
spite of this, most corpora capture little more than the linguistic
and meta-linguistic actions per se, and contain little or no
information about the situation in which they take place. For this
reason, we encourage contributions that address what should be added
to future multimodal corpora, with possibilities ranging from simple
dynamic information such as background noise, room temperature,
light conditions and room dimensions to more complex models of room
contents, external events, scents, or cognitive load modelling
including physiological data such as breathing or pulse. We hope
that with your help, the
workshop will serve to examine the way language is conceived in
corpus creation and to spark a discussion of its boundaries and how
these should be accounted for in annotations and in interpretation.<br>
<br>
<b>Time schedule</b><br>
The workshop will consist of a morning session and an afternoon
session. There will be time for collective discussions.<br>
<br>
<b>Topics</b><br>
The LREC 2012 workshop on multimodal corpora will feature a special
session on the collection, annotation and analysis of corpora of
situated interaction.<br>
<br>
Other topics to be addressed include, but are not limited to:<br>
<ul>
<li>Multimodal corpus collection activities (e.g. direction-giving
dialogues, emotional behaviour, human-avatar interaction,
human-robot interaction, etc.) and descriptions of existing
multimodal resources</li>
<li>Relations between modalities in natural (human) interaction
and in human-computer interaction</li>
<li>Multimodal interaction in specific scenarios, e.g. group
interaction in meetings</li>
<li>Coding schemes for the annotation of multimodal corpora</li>
<li>Evaluation and validation of multimodal annotations</li>
<li>Methods, tools, and best practices for the acquisition,
creation, management, access, distribution, and use of
multimedia and multimodal corpora</li>
<li>Interoperability between multimodal annotation tools (exchange
formats, conversion tools, standardization)</li>
<li>Collaborative coding</li>
<li>Metadata descriptions of multimodal corpora</li>
<li>Automatic annotation, based e.g. on motion capture or image
processing, and the integration with manual annotations</li>
<li>Corpus-based design of multimodal and multimedia systems, in
particular systems that involve human-like modalities either in
input (Virtual Reality, motion capture, etc.) or in output
(virtual characters)</li>
<li>Automated multimodal fusion and/or generation (e.g.,
coordinated speech, gaze, gesture, facial expressions)</li>
<li>Machine learning applied to multimodal data</li>
<li>Multimodal dialogue modelling</li>
</ul>
<b>Important dates</b><br>
<ul>
<li>Deadline for paper submission (complete paper): 12 February
2012</li>
<li>Notification of acceptance: 10 March</li>
<li>Final version of accepted paper: 26 March</li>
<li>Final program and proceedings: 20 April</li>
<li>Workshop: 22 May</li>
</ul>
<br>
<b>Submissions</b><br>
The workshop will consist primarily of paper presentations and
discussion/working sessions. Submissions should be 4 pages long,
must be written in English, and must follow the submission
guidelines at<br>
<a class="moz-txt-link-freetext" href="http://www.lrec-conf.org/lrec2012/">http://www.lrec-conf.org/lrec2012/</a><br>
Submissions should be made at:
<a class="moz-txt-link-freetext" href="https://www.softconf.com/lrec2012/MMCorpora2012/">https://www.softconf.com/lrec2012/MMCorpora2012/</a><br>
Demonstrations of multimodal corpora and related tools are
encouraged as well (a demonstration outline of 2 pages can be
submitted).<br>
<br>
<b>LREC Map of Language Resources, Technologies and Evaluation</b><br>
When submitting a paper, authors will be asked on the START page to
provide essential information about resources (in a broad sense,
i.e. also technologies, standards, evaluation kits, etc.) that
either have been used for the work described in the paper or are a
new result of their research (as a contribution to building the
LREC2012 Map).<br>
<br>
<b>Organizing committee</b><br>
Jens Edlund, KTH Royal Institute of Technology, Sweden<br>
Dirk Heylen, University of Twente, The Netherlands<br>
Patrizia Paggio, University of Copenhagen, Denmark/University of
Malta, Malta
</body>
</html>