Call: ICMI-MLMI 2009

Thierry Hamon thierry.hamon at LIPN.UNIV-PARIS13.FR
Fri Feb 6 20:52:16 UTC 2009


Date:         Tue, 3 Feb 2009 18:47:01 -0500
From:         Yang Liu <yangl at HLT.UTDALLAS.EDU>
Message-ID:  <LISTSERV%200902031847013590.F169 at LISTSERV.ACM.ORG>
X-url: http://listserv.acm.org/scripts/wa.exe?LIST=ICMI-MULTIMODAL-ANNOUNCE


***** First Call for Papers *****
***** Submission deadline: May 22, 2009 *****

ICMI-MLMI 2009

Cambridge, MA, USA
November 2-6, 2009
sponsored by ACM SIGCHI

The Eleventh International Conference on Multimodal Interfaces and the
Sixth Workshop on Machine Learning for Multimodal Interaction will
jointly take place in the Boston area from November 2-6, 2009. The
main aim of ICMI-MLMI 2009 is to further scientific research in the
broad field of multimodal interaction, methods, and systems. The
joint conference will focus on major trends and challenges in this
area and work to identify a roadmap for future research and
commercial success. ICMI-MLMI 2009 will feature a single-track main
conference with keynote speakers, panel discussions, technical paper
presentations, poster sessions, and demonstrations of state-of-the-art
multimodal systems and concepts. The main conference will be followed
by workshops.


Venue:
The conference will take place at the MIT Media Lab, widely known for
its innovative spirit. Held in Cambridge, Massachusetts, USA,
ICMI-MLMI 2009 provides an excellent setting for brainstorming and
sharing the latest advances in multimodal interaction, systems, and
methods in a city recognized as one of the leading historical,
technological, and scientific centers of the US.


Important dates:

 Workshop proposals:        March 1, 2009
 Special session proposals: March 1, 2009
 Paper submission:          May 22, 2009
 Author notification:       July 20, 2009
 Camera-ready due:          August 20, 2009
 Conference:                November 2-4, 2009
 Workshops:                 November 5-6, 2009

Topics of interest:

Multimodal and multimedia processing:
 Algorithms for multimodal fusion and multimedia fission
 Multimodal output generation and presentation planning
 Multimodal discourse and dialogue modeling
 Generating non-verbal behaviors for embodied conversational agents
 Machine learning methods for multimodal processing

Multimodal input and output interfaces:
 Gaze and vision-based interfaces
 Speech and conversational interfaces
 Pen-based interfaces
 Haptic interfaces
 Interfaces to virtual environments or augmented reality
 Biometric interfaces combining multiple modalities
 Adaptive multimodal interfaces

Multimodal applications:
 Mobile interfaces
 Meeting analysis and intelligent meeting spaces
 Interfaces to media content and entertainment
 Human-robot interfaces and human-robot interaction
 Vehicular applications and navigational aids
 Computer-mediated human-to-human communication
 Interfaces for intelligent environments and smart living spaces
 Universal access and assistive computing
 Multimodal indexing, structuring and summarization

Human interaction analysis and modeling:
 Modeling and analysis of multimodal human-human communication
 Audio-visual perception of human interaction
 Analysis and modeling of verbal and non-verbal interaction
 Cognitive modeling of users of interactive systems

Multimodal data, evaluation, and standards:
 Evaluation techniques and methodologies for multimodal interfaces
 Authoring techniques for multimodal interfaces
 Annotation and browsing of multimodal data
 Architectures and standards for multimodal interfaces


Paper Submission:

There are two submission categories: regular papers and short
papers. The page limit is eight pages for regular papers and four
pages for short papers. The presentation style (oral or poster) will
be decided by the committee based on suitability and schedule.

Demo Submission:

Proposals for demonstrations should be submitted to the demo chairs
electronically. A two-page description with photographs of the
demonstration is required.

Doctoral Spotlight:

Funding is expected from NSF to support the participation of doctoral
candidates at ICMI-MLMI 2009, and a spotlight session is planned to
showcase ongoing thesis work. Students interested in travel support
may submit a regular or short paper as specified above.

Organizing Committee

General Co-Chairs:
 James L. Crowley, INRIA, Grenoble, France
 Yuri A. Ivanov, MERL, Cambridge, USA
 Christopher R. Wren, Google, Cambridge, USA

Program Co-Chairs:
 Daniel Gatica-Perez, Idiap Research Institute, Martigny, Switzerland
 Michael Johnston, AT&T Labs Research, Florham Park, USA
 Rainer Stiefelhagen, University of Karlsruhe, Germany

Treasurer:
 Janet McAndlees, MERL, Cambridge, USA

Sponsorship:
 Herve Bourlard, Idiap Research Institute, Martigny, Switzerland

Student Chair:
 Rana el Kaliouby, MIT Media Lab, Cambridge, USA

Local Arrangements:
 Cole Krumbholz, MITRE, Bedford, USA
 Deb Roy, MIT Media Lab, Cambridge, USA

Publicity:
 Sonya Allin, University of Toronto, Canada
 Yang Liu, University of Texas at Dallas, USA

Publications:
 Louis-Philippe Morency, University of Southern California, USA

Workshops:
 Xilin Chen, Chinese Academy of Sciences, China
 Steve Renals, University of Edinburgh, Scotland

Demos:
 Denis Lalanne, University of Fribourg, Switzerland
 Enrique Vidal, Polytechnic University of Valencia, Spain

Posters:
 Kenji Mase, Nagoya University, Japan

-------------------------------------------------------------------------
Message distributed via the Langage Naturel list <LN at cines.fr>
Information, subscription: http://www.atala.org/article.php3?id_article=48
English version:
Archives: http://listserv.linguistlist.org/archives/ln.html
          http://liste.cines.fr/info/ln

The LN list is sponsored by ATALA (Association pour le Traitement
Automatique des Langues)
Information and membership: http://www.atala.org/
-------------------------------------------------------------------------


