34.741, Calls: Fourth International Workshop on Designing Meaning Representation

The LINGUIST List linguist at listserv.linguistlist.org
Fri Mar 3 15:12:25 UTC 2023


LINGUIST List: Vol-34-741. Fri Mar 03 2023. ISSN: 1069 - 4875.

Subject: 34.741, Calls: Fourth International Workshop on Designing Meaning Representation

Moderator: Malgorzata E. Cavar, Francis Tyers (linguist at linguistlist.org)
Managing Editor: Lauren Perkins
Team: Helen Aristar-Dry, Steven Franks, Everett Green, Sarah Robinson,
      Joshua Sims, Jeremy Coburn, Daniel Swanson, Matthew Fort,
      Maria Lucero Guillen Puon, Billy Dickson
Jobs: jobs at linguistlist.org | Conferences: callconf at linguistlist.org | Pubs: pubs at linguistlist.org

Homepage: http://linguistlist.org

Hosted by Indiana University

Please support the LL editors and operation with a donation at:
           https://funddrive.linguistlist.org/donate/

Editor for this issue: Everett Green <everett at linguistlist.org>
================================================================


Date: Fri, 03 Mar 2023 15:11:49
From: Kristine Stenzel [Kristine.stenzel at colorado.edu]
Subject: Fourth International Workshop on Designing Meaning Representation

 
Full Title: Fourth International Workshop on Designing Meaning Representation 
Short Title: DMR 2023 

Date: 20-Jun-2023 - 20-Jun-2023
Location: Université de Lorraine, Nancy, France 
Contact Person: Kristine Stenzel
Meeting Email: Kristine.stenzel at colorado.edu
Web Site: http://dmr2023.github.io 

Linguistic Field(s): Computational Linguistics; Semantics 

Call Deadline: 03-Apr-2023 

Meeting Description:

While deep learning methods have led to many breakthroughs in practical
natural language applications, e.g., Machine Translation, Machine Reading,
Question Answering, and Recognizing Textual Entailment, many NLP researchers still
feel there is a long way to go before we can develop systems that can actually
“understand” human language and explain the decisions they make. Indeed,
“understanding” natural language entails many human-like capabilities,
including the ability to track entities in a text and understand the relations
between them, track events and their participants, understand how events
unfold in time, and distinguish between events that have happened and those
that are planned, intended, uncertain, or did not happen at all. A critical
step in achieving natural language understanding is to design meaning
representations for text that have the necessary meaning “ingredients” to help
us achieve these capabilities. Such meaning representations can also
potentially be used to evaluate the compositional generalization capacity of
deep learning models.

This workshop will bring together researchers who are producers and consumers
of meaning representations and, through their interaction, develop a deeper
understanding of the key elements of meaning representations that are the most
valuable to the NLP community. The workshop will also provide an opportunity
for meaning representation researchers to critically examine existing
frameworks with the goal of using their findings to inform the design of
next-generation meaning representations. A third goal of the workshop is to
explore opportunities and identify challenges in the design and use of meaning
representations in multilingual settings. A final goal of the workshop is to
understand the relationship between distributed meaning representations
trained on large data sets using network models and the symbolic meaning
representations that are carefully designed and annotated by NLP researchers,
and to gain a deeper understanding of the areas where each type of meaning
representation is most effective.

A growing body of recent research is devoted to the design, annotation, and
parsing of meaning representations. The meaning representations used for
semantic parsing research have been developed with different linguistic
perspectives and practical goals in mind and have different formal properties.
Formal meaning representation frameworks such as Minimal Recursion Semantics
(MRS) and Discourse Representation Theory (as exemplified in the Parallel
Meaning Bank) aim to support logical inference in reasoning-based AI systems
and are therefore easily translatable into first-order logic, requiring proper
representation of semantic components such as quantification, negation, tense,
and modality. Other frameworks, such as Abstract Meaning Representation (AMR),
Tectogrammatical Representation (TR) in the Prague Dependency Treebanks, and the
Universal Conceptual Cognitive Annotation (UCCA), emphasize the representation
of core predicate-argument structure, lexical semantic information such as
semantic roles and word senses, or named entities and relations. A more recent
effort is developing a Uniform Meaning Representation (UMR) that is based on
AMR but extends it to cross-linguistic settings and enhances it to represent
document-level semantic content. The automatic parsing of natural language
text into these meaning representations and the subsequent generation of
natural language text from them are also very active areas of research, and a
wide range of technical approaches and learning methods address these
problems.
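
To give a concrete sense of the structures these frameworks encode, the sketch
below shows the well-known AMR for "The boy wants to go" in PENMAN notation,
inspected with the open-source penman Python library. This example is purely
illustrative and is not part of the workshop call; the library and the calls
shown (penman.decode, graph.top, graph.triples, penman.encode) are an
assumption of this write-up, not something prescribed by DMR.

    # Illustrative only: the classic AMR for "The boy wants to go",
    # written in PENMAN notation and loaded with the penman library
    # (assumed installed, e.g. via `pip install penman`).
    import penman

    AMR = """(w / want-01
       :ARG0 (b / boy)
       :ARG1 (g / go-02
                :ARG0 b))"""

    graph = penman.decode(AMR)       # parse PENMAN text into a graph object
    print(graph.top)                 # 'w', the root concept (want-01)
    for source, role, target in graph.triples:
        print(source, role, target)  # e.g. ('w', ':instance', 'want-01')
    print(penman.encode(graph))      # serialize the graph back to PENMAN

Note the reentrancy: the variable b fills the :ARG0 role of both want-01 and
go-02, recording that the same boy is both the wanter and the goer, which is
exactly the kind of predicate-argument structure these frameworks foreground.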


Call for Papers:

The workshop solicits papers that address one or more of the following topics:
- Design and annotation of meaning representations;
- Cross-framework comparison of meaning representations;
- Challenges and techniques in automatic parsing of meaning representations;
- Challenges and techniques in automatically generating text from meaning
representations;
- Meaning representation evaluation metrics;
- Lexical resources, ontologies, and grounding in relation to meaning
representations;
- Real-world applications of meaning representations;
- Issues in applying meaning representations to multilingual settings and
lower-resourced languages;
- The relationship between symbolic meaning representations and distributed
semantic representations;
- Formal properties of meaning representations;
- Any other topics that address the design, processing, and use of meaning
representations.

=== SUBMISSION INFORMATION ===

Submissions should report original and unpublished research on topics of
interest to the workshop. Accepted papers are expected to be presented at the
workshop and will be published in the workshop proceedings on the ACL
Anthology. They should emphasize obtained results rather than intended work
and should clearly indicate the state of completion of the reported results. A
paper accepted for presentation at the workshop must not be or have been
presented at any other meeting with publicly available proceedings.

Submission is electronic, using the Softconf START conference management
system.
Link to the DMR submission site: https://softconf.com/iwcs2023/dmr2023/ 
Submissions must adhere to the two-column format of ACL venues. Please use our
specific style files or the Overleaf template taken from ACL 2021:
https://www.overleaf.com/latex/templates/instructions-for-iwcs-2021-proceedings/fpnsyxqqpfbw

Initial submissions should be fully anonymous to ensure double-blind
reviewing. Long papers must not exceed eight (8) pages of content. Short
papers and demonstration papers must not exceed four (4) pages of content. If
a paper is accepted, it will be given an additional page to address reviewers’
comments in the final version. References and appendices do not count against
these limits.

Reviewing of papers will be double-blind. Therefore, the paper must not
include the authors’ names and affiliations or self-references that reveal any
author’s identity, e.g., “We previously showed (Smith, 1991) …” should be
replaced with citations such as “Smith (1991) previously showed …”. Papers
that do not conform to these requirements will be rejected without review.
Authors of papers that have been or will be submitted to other meetings or
publications must provide this information to the workshop organizers
(dmr2023-chairs at googlegroups.com). Authors of accepted papers must notify the
program chairs within 10 days of acceptance if the paper is withdrawn for any
reason.

** DMR 2023 does not have an anonymity period. However, we ask you to be
reasonable and not publicly advertise your preprint during (or right before)
review.

=== IMPORTANT DATES ===
Submissions due:             April 3, 2023
Notification of acceptance:  May 1, 2023
Camera-ready deadline:       June 1, 2023
Workshop date:               June 20, 2023
IWCS conference:             June 20-23, 2023




------------------------------------------------------------------------------



----------------------------------------------------------
LINGUIST List: Vol-34-741	
----------------------------------------------------------




