LINGUIST List: Vol-35-972. Mon Mar 18 2024. ISSN: 1069-4875.

Subject: 35.972, Calls: Multimodal Semantic Representations

Moderators: Malgorzata E. Cavar, Francis Tyers (linguist at linguistlist.org)
Managing Editor: Justin Fuller
Team: Helen Aristar-Dry, Steven Franks, Everett Green, Daniel Swanson, Maria Lucero Guillen Puon, Zackary Leech, Lynzie Coburn, Natasha Singh, Erin Steitz
Jobs: jobs at linguistlist.org | Conferences: callconf at linguistlist.org | Pubs: pubs at linguistlist.org

Homepage: http://linguistlist.org

Please support the LL editors and operation with a donation at:
           https://funddrive.linguistlist.org/donate/

Editor for this issue: Zackary Leech <zleech at linguistlist.org>

LINGUIST List is hosted by Indiana University College of Arts and Sciences.
================================================================


Date: 15-Mar-2024
From: Lucia Donatelli [lucia.donatelli at gmail.com]
Subject: Multimodal Semantic Representations


Full Title: Multimodal Semantic Representations
Short Title: MMSR II

Date: 19-Oct-2024 - 20-Oct-2024
Location: Santiago de Compostela, Spain
Contact Person: Lucia Donatelli
Meeting Email: lucia.donatelli at gmail.com
Web Site: https://mmsr-workshop.github.io/

Linguistic Field(s): Computational Linguistics

Call Deadline: 15-May-2024

Meeting Description:

Multimodal Semantic Representations (MMSR II)
Co-located with ECAI 2024 (https://www.ecai2024.eu/)
19-24 October, Santiago de Compostela, Spain
(workshop on 19 or 20 October)

Workshop website: https://mmsr-workshop.github.io/

Description
The demand for more sophisticated natural human-computer and
human-robot interactions is rapidly increasing as users become more
accustomed to conversation-like interactions with AI and NLP systems.
Such interactions require not only the robust recognition and
generation of expressions through multiple modalities (language,
gesture, vision, action, etc.), but also the encoding of situated
meaning.

When communication becomes multimodal, each modality in operation
provides an orthogonal angle through which to probe the computational
models of the other modalities, including the behaviors and
communicative capabilities afforded by each. Multimodal interactions
thus require a unified framework and control language through which
systems interpret inputs and behaviors and generate informative
outputs. Such a framework is vital for intelligent, often embodied,
systems to understand the situation and context they inhabit, whether
in the real world or in a mixed-reality environment shared with
humans.

Furthermore, multimodal large language models appear to offer the
possibility of more dynamic and contextually rich interactions across
various modalities, including facial expressions, gestures, actions,
and language. We invite discussion of how representations and
pipelines can integrate such state-of-the-art language models.

Call for Papers:

We solicit papers on multimodal semantic representation, including but
not limited to the following topics:
- Semantic frameworks for individual linguistic co-modalities (e.g.,
gaze, facial expression);
- Formal representation of situated conversation and embodiment,
including knowledge graphs designed to represent epistemic state;
- Design, annotation, and corpora of multimodal interaction and
meaning representation;
- Challenges (including cross-lingual and cross-cultural) in
multimodal representation and/or processing;
- Criteria or frameworks for evaluation of multimodal semantics;
- Challenges in aligning co-modalities in formal representation and/or
NLP tasks;
- Design and implementation of neurosymbolic or fusion models for
multimodal processing (with a representational component);
- Methods for probing knowledge of multimodal (language and vision)
models;
- Virtual and situated agents that embody multimodal representations
of common ground.

Submission Information
Two types of submissions are solicited: long papers and short papers.
Long papers should describe original research and must not exceed 8
pages, excluding references. Short papers (typically system or project
descriptions, or ongoing research) must not exceed 4 pages, excluding
references. Accepted papers will receive one additional page in the
camera-ready version.

We strongly encourage students to submit to the workshop.

Important Dates
May 15, 2024: Submissions due
June 1, 2024: Notification of acceptance decisions
June 21, 2024: Camera-ready papers due

Papers should be formatted using the ECAI style files, available at:
https://www.ecai2024.eu/calls/main-track

Papers should be submitted in PDF format via the Chairing Tool site
(https://chairingtool.com/); a workshop-specific submission link will
be announced soon.

Please do not hesitate to reach out with any questions.

Richard Brutti, Lucia Donatelli, Nikhil Krishnaswamy, Kenneth Lai, &
James Pustejovsky (MMSR II organizers)
https://mmsr-workshop.github.io/



------------------------------------------------------------------------------

Please consider donating to the LINGUIST List: https://give.myiu.org/iu-bloomington/I320011968.html


LINGUIST List is supported by the following publishers:

Cambridge University Press http://www.cambridge.org/linguistics

De Gruyter Mouton https://cloud.newsletter.degruyter.com/mouton

Equinox Publishing Ltd http://www.equinoxpub.com/

John Benjamins http://www.benjamins.com/

Lincom GmbH https://lincom-shop.eu/

Multilingual Matters http://www.multilingual-matters.com/

Narr Francke Attempto Verlag GmbH + Co. KG http://www.narr.de/

Wiley http://www.wiley.com


----------------------------------------------------------
LINGUIST List: Vol-35-972
----------------------------------------------------------


