LINGUIST List: Vol-36-409. Fri Jan 31 2025. ISSN: 1069 - 4875.
Subject: 36.409, Confs: Applied Linguistics; Computational Linguistics; Semantics; Text/Corpus Linguistics / Czech Republic
Moderator: Steven Moran (linguist at linguistlist.org)
Managing Editor: Justin Fuller
Team: Helen Aristar-Dry, Steven Franks, Joel Jenkins, Daniel Swanson, Erin Steitz
Jobs: jobs at linguistlist.org | Conferences: callconf at linguistlist.org | Pubs: pubs at linguistlist.org
Homepage: http://linguistlist.org
Editor for this issue: Erin Steitz <ensteitz at linguistlist.org>
================================================================
Date: 31-Jan-2025
From: Shira Wein [swein at amherst.edu]
Subject: 6th International Workshop on Designing Meaning Representations
DMR 2025
Date: 04-Aug-2025 - 05-Aug-2025
Location: Prague, Czech Republic
Meeting URL: https://dmr2025.github.io
Linguistic Field(s): Applied Linguistics; Computational Linguistics;
Semantics; Text/Corpus Linguistics
DMR 2025
The 6th International Workshop on Designing Meaning Representations
To be held in beautiful Prague, Czechia, August 4-5, 2025, following
ACL 2025 in Vienna, Austria.
DMR 2025 invites submissions of long and short papers describing
original work on the design, processing, and use of meaning
representations. While deep learning methods have led to many
breakthroughs in practical natural language applications, there is
still a sense among many NLP researchers that we have a long way to go
before we can develop systems that can actually “understand” human
language and explain the decisions they make. Indeed, “understanding”
natural language entails many different human-like capabilities,
including but not limited to the ability to track entities in a
text, understand the relations between these entities, track events
and their participants described in a text, understand how events
unfold in time, and distinguish events that have actually happened
from events that are planned or intended, are uncertain, or did not
happen at all. We believe a critical step toward natural language
understanding is to design meaning representations for text that
provide the meaning “ingredients” needed to achieve these
capabilities. Such meaning representations can also potentially
be used to evaluate the compositional generalization capacity of deep
learning models.
There has been a growing body of research devoted to the design,
annotation, and parsing of meaning representations in recent years. In
particular, formal meaning representation frameworks such as Minimal
Recursion Semantics (MRS) and Discourse Representation Theory (DRT)
were developed with the goal of supporting logical inference in
reasoning-based AI systems and are therefore easily translatable into
first-order logic, while other frameworks, such as Abstract Meaning
Representation (AMR), Uniform Meaning Representation (UMR), the
tectogrammatical representation (TR) used in the Prague Dependency
Treebanks, and Universal Conceptual Cognitive Annotation (UCCA), put
more emphasis on the representation of core predicate-argument
structure. The automatic parsing of natural
language text into these meaning representations and the generation of
natural language text from these meaning representations are also very
active areas of research, and a wide range of technical approaches and
learning methods have been applied to these problems.
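As a purely illustrative aside (not part of the call itself), the
short sketch below shows what such a predicate-argument-centred
representation can look like: an AMR graph for the sentence "The boy
wants to go" in PENMAN notation, decoded into triples with the
open-source penman Python library. The library choice and the
PropBank-style sense labels (want-01, go-01) are assumptions made only
for this example; the other frameworks mentioned above use their own
serializations and toolkits.

    # Illustrative sketch only: decode an AMR graph for "The boy wants to go"
    # with the open-source `penman` library (pip install penman). The
    # PropBank-style sense labels (want-01, go-01) are shown for illustration.
    import penman

    amr = """
    (w / want-01
       :ARG0 (b / boy)
       :ARG1 (g / go-01
                :ARG0 b))
    """

    graph = penman.decode(amr)
    print(graph.top)  # 'w' -- the variable of the root concept
    for source, role, target in graph.triples:
        # e.g. ('w', ':instance', 'want-01'), ('w', ':ARG0', 'b'), ...
        print(source, role, target)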
DMR intends to bring together researchers who are producers and
consumers of meaning representations and, through their interaction,
to gain a deeper understanding of the key elements of meaning
representations that are most valuable to the NLP community. The
workshop will provide an opportunity for meaning representation
researchers to present new frameworks and to critically examine
existing frameworks with the goal of using their findings to inform
the design of next-generation meaning representations. One particular
goal is to understand the relationship between distributed meaning
representations trained on large data sets with neural network models
and the symbolic meaning representations that are carefully designed
and annotated by NLP researchers, with the aim of gaining a deeper
understanding of the areas where each type of meaning representation
is most effective.
The workshop solicits papers that address one or more of the following
topics:
- Development and annotation of meaning representations;
- Challenges and techniques in leveraging meaning representations for
  downstream applications, including neuro-symbolic approaches;
- The relationship between symbolic meaning representations and
  distributed semantic representations;
- Issues in applying meaning representations to multilingual settings
  and lower-resourced languages;
- Challenges and techniques in automatic parsing of meaning
  representations;
- Challenges and techniques in automatically generating text from
  meaning representations;
- Meaning representation evaluation metrics;
- Cross-framework comparison of meaning representations and their
  formal properties;
- Any other topics that address the design, processing, and use of
  meaning representations.
Important dates:
Workshop papers due: April 21, 2025
Notification of acceptance: June 16, 2025
Camera-ready papers due: July 1, 2025
Workshop date: August 4-5, 2025
All deadlines are 11:59pm UTC-12 ("anywhere on Earth").
Paper Submission and Templates
We accept long papers (describing substantial original research) of up
to eight (8) pages and short papers (making a small, focused
contribution) of up to four (4) pages. If a paper is accepted, the
authors will be given an additional page to address reviewers’
comments in the final version. The ethics statement (optional),
limitations (optional), references, and appendices do not count
against these limits. Long and short papers must be submitted directly
via OpenReview at the following link:
https://openreview.net/group?id=DMR/2025
Paper submissions must use the official ACL style templates (LaTeX and
Word). Submissions that do not conform to the required style,
including paper size, margin width, and font size restrictions, will
be rejected without review.
Dual Submission Policy
Dual submissions are allowed. Authors of papers that have been or will
be submitted to other meetings or publications must provide this
information to the workshop co-chairs (dmr.workshop.2025 at gmail.com).
In your message, please list the names and dates of the conferences,
workshops, or meetings to which you have submitted or plan to submit
your paper in addition to DMR. Authors of accepted papers must notify
the
program chairs within 5 business days of acceptance if the paper is
withdrawn for any reason.
Other Questions
If you have any questions, please feel free to contact the program
co-chairs (dmr.workshop.2025 at gmail.com) or see the workshop website
(dmr2025.github.io).