36.1135, Confs: The Second Workshop on Multimodal Semantic Representations (Germany)

The LINGUIST List linguist at listserv.linguistlist.org
Wed Apr 2 12:05:16 UTC 2025


LINGUIST List: Vol-36-1135. Wed Apr 02 2025. ISSN: 1069 - 4875.

Subject: 36.1135, Confs: The Second Workshop on Multimodal Semantic Representations (Germany)

Moderator: Steven Moran (linguist at linguistlist.org)
Managing Editor: Justin Fuller
Team: Helen Aristar-Dry, Steven Franks, Joel Jenkins, Daniel Swanson, Erin Steitz
Jobs: jobs at linguistlist.org | Conferences: callconf at linguistlist.org | Pubs: pubs at linguistlist.org

Homepage: http://linguistlist.org

Editor for this issue: Erin Steitz <ensteitz at linguistlist.org>

================================================================


Date: 02-Apr-2025
From: Richard Brutti [mmsr.workshop at gmail.com]
Subject: The Second Workshop on Multimodal Semantic Representations


The Second Workshop on Multimodal Semantic Representations
Short Title: MMSR II (Co-located with IWCS 2025)
Theme: Multimodal Semantic Representations

Date: 24-Sep-2025 - 24-Sep-2025
Location: Düsseldorf, Germany
Meeting URL: https://mmsr-workshop.github.io/

Linguistic Field(s): Computational Linguistics; Semantics; Text/Corpus Linguistics

Submission Deadline: 27-Jun-2025

Multimodal Semantic Representations (MMSR II)
Co-located with IWCS 2025 (https://iwcs2025.github.io/)
22-24 September 2025, Düsseldorf, Germany
(workshop on 24 September)
Workshop website: https://mmsr-workshop.github.io/
Description:
The demand for more sophisticated, natural human-computer and human-robot interaction is increasing rapidly as users become accustomed to conversation-like exchanges with AI and NLP systems. Such interactions require not only robust recognition and generation of expressions across multiple modalities (language, gesture, vision, action, etc.), but also the encoding of situated meaning.
When communication becomes multimodal, each modality in operation provides an orthogonal angle through which to probe the computational model of the other modalities, including the behaviors and communicative capabilities afforded by each. Multimodal interactions thus require a unified framework and control language through which systems interpret inputs and behaviors and generate informative outputs. This is vital for intelligent, often embodied, systems to understand the situation and context they inhabit, whether in the real world or in a mixed-reality environment shared with humans.
Furthermore, multimodal large language models appear to offer the possibility of more dynamic and contextually rich interactions across various modalities, including facial expressions, gestures, actions, and language. We invite discussion of how representations and pipelines can integrate such state-of-the-art language models.
We solicit papers on multimodal semantic representation, including but not limited to the following topics:
 - Semantic frameworks for individual linguistic co-modalities (e.g., gaze, facial expression);
 - Formal representation of situated conversation and embodiment, including knowledge graphs designed to represent epistemic state;
 - Design, annotation, and corpora of multimodal interaction and meaning representation;
 - Challenges (including cross-lingual and cross-cultural) in multimodal representation and/or processing;
 - Criteria or frameworks for the evaluation of multimodal semantics;
 - Challenges in aligning co-modalities in formal representation and/or NLP tasks;
 - Design and implementation of neurosymbolic or fusion models for multimodal processing (with a representational component);
 - Methods for probing the knowledge of multimodal (language and vision) models;
 - Virtual and situated agents that embody multimodal representations of common ground.
Submission Information:
Two types of submissions are solicited: long papers and short papers. Long papers should describe original research and must not exceed 8 pages, excluding references. Short papers (typically system or project descriptions, or reports on ongoing research) must not exceed 4 pages, excluding references. Accepted papers will receive an extra page in the camera-ready version.
We strongly encourage students to submit to the workshop.
Papers should be formatted using the IWCS style files, available at:
https://iwcs2025.github.io/call_for_papers
Submission Link:
https://openreview.net/group?id=IWCS/2025/Workshop/MMSR
Please do not hesitate to reach out with any questions.
Richard Brutti, Lucia Donatelli, Nikhil Krishnaswamy, Kenneth Lai, &
James Pustejovsky (MMSR II organizers)



------------------------------------------------------------------------------

********************** LINGUIST List Support ***********************
Please consider donating to the Linguist List to support the student editors:

https://www.paypal.com/donate/?hosted_button_id=87C2AXTVC4PP8

LINGUIST List is supported by the following publishers:

Bloomsbury Publishing http://www.bloomsbury.com/uk/

Cambridge University Press http://www.cambridge.org/linguistics

Cascadilla Press http://www.cascadilla.com/

De Gruyter Mouton https://cloud.newsletter.degruyter.com/mouton

Edinburgh University Press http://www.edinburghuniversitypress.com

Elsevier Ltd http://www.elsevier.com/linguistics

John Benjamins http://www.benjamins.com/

Language Science Press http://langsci-press.org

Lincom GmbH https://lincom-shop.eu/

Multilingual Matters http://www.multilingual-matters.com/

Netherlands Graduate School of Linguistics / Landelijke (LOT) http://www.lotpublications.nl/

Oxford University Press http://www.oup.com/us

Wiley http://www.wiley.com


----------------------------------------------------------
LINGUIST List: Vol-36-1135
----------------------------------------------------------


