37.982, Confs: MediaEval 2026 Shared Task: Missing Pieces and Misinformation: Identifying Social Media Posts with Implicit Messages (Netherlands)
The LINGUIST List
linguist at listserv.linguistlist.org
Tue Mar 10 17:05:02 UTC 2026
LINGUIST List: Vol-37-982. Tue Mar 10 2026. ISSN: 1069 - 4875.
Subject: 37.982, Confs: MediaEval 2026 Shared Task: Missing Pieces and Misinformation: Identifying Social Media Posts with Implicit Messages (Netherlands)
Moderator: Steven Moran (linguist at linguistlist.org)
Managing Editor: Valeriia Vyshnevetska
Team: Helen Aristar-Dry, Mara Baccaro, Daniel Swanson
Jobs: jobs at linguistlist.org | Conferences: callconf at linguistlist.org | Pubs: pubs at linguistlist.org
Homepage: http://linguistlist.org
Editor for this issue: Valeriia Vyshnevetska <valeriia at linguistlist.org>
================================================================
Date: 09-Mar-2026
From: Martial Pastor [martial.pastor at ru.nl]
Subject: MediaEval 2026 Shared Task: Missing Pieces and Misinformation: Identifying Social Media Posts with Implicit Messages
MediaEval 2026 Shared Task: Missing Pieces and Misinformation:
Identifying Social Media Posts with Implicit Messages
Short Title: MediaEval 2026
Theme: Missing Pieces and Misinformation: Identifying Social Media
Posts with Implicit Messages
Date: 15-Jun-2026 - 16-Jun-2026
Location: Amsterdam, Netherlands
Contact: Martial Pastor
Contact Email: martial.pastor at ru.nl
Meeting URL: https://multimediaeval.github.io/editions/2026/
Linguistic Field(s): Computational Linguistics; Discourse Analysis;
Pragmatics
Submission Deadline: 01-May-2026
We are pleased to announce the 1st Call for Participation in our
MediaEval 2026 shared task: Missing Pieces and Misinformation:
Identifying Social Media Posts with Implicit Messages.
Task Description:
Given a tweet, determine whether it contains an implicit premise, an
implicit conclusion, or neither. This is a three-class classification
task.
Input: The raw text of a tweet.
Output: One label: implicit_premise, implicit_conclusion, or none.
An implicit premise is a supporting assumption left unstated that the
argument relies on. An implicit conclusion is a claim that follows
from the stated premises but is never explicitly made. When neither
component is missing, the label is none.
Tweets in the train and dev sets are each annotated by five
independent annotators; those in the test set by three. Individual
annotator labels — prior to any majority vote — are provided alongside
the data, making it possible to treat disagreement as signal rather
than noise.
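Assuming the per-annotator labels arrive as a simple list of label strings per tweet (the exact release format is not specified in this call), a majority vote and a basic agreement measure can be sketched as follows; the function names are illustrative, not part of the task:

```python
from collections import Counter

LABELS = ("implicit_premise", "implicit_conclusion", "none")

def majority_label(annotations):
    """Most frequent label among annotator votes, e.g. five votes for a
    train/dev tweet or three for a test tweet. Ties fall back on
    Counter's ordering, so a real system should define its own policy."""
    return Counter(annotations).most_common(1)[0][0]

def agreement_ratio(annotations):
    """Fraction of annotators who chose the majority label
    (1.0 means full agreement)."""
    counts = Counter(annotations)
    return counts.most_common(1)[0][1] / len(annotations)

votes = ["implicit_premise", "implicit_premise", "none",
         "implicit_premise", "implicit_conclusion"]
print(majority_label(votes))   # implicit_premise
print(agreement_ratio(votes))  # 0.6
```

A low agreement ratio flags exactly the borderline cases where disagreement may carry signal.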
Participants are invited to complete two tasks. Task 1 may be
completed on its own, but Task 2 requires prior completion of Task 1.
Task 1: “Enthymeme Detection” — Detecting the absence or presence of
enthymemes in tweets (three-class classification)
Constrained Run 1: Predict the label from the tweet text alone. No
external data or additional annotation information is permitted.
Constrained Run 2: In addition to the tweet text, use the raw
labels provided by three independent annotators. The goal is to
investigate whether modelling annotator disagreement improves
performance, especially on borderline cases. The output label is the
same three-class prediction.
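One way to exploit the three raw annotator labels in Constrained Run 2 (a sketch of a common approach, not part of the official task definition) is to train against soft label distributions instead of collapsing votes into a single hard label:

```python
from collections import Counter

LABELS = ("implicit_premise", "implicit_conclusion", "none")

def soft_target(annotations):
    """Turn raw annotator labels into a probability distribution over
    the three classes, preserving disagreement rather than discarding
    it with a majority vote."""
    counts = Counter(annotations)
    total = len(annotations)
    return [counts[label] / total for label in LABELS]

# Two annotators saw an implicit premise, one saw none:
print(soft_target(["implicit_premise", "implicit_premise", "none"]))
# [0.6666666666666666, 0.0, 0.3333333333333333]
```

Such a distribution can serve directly as the target of a cross-entropy loss, so uncertain tweets contribute softer gradients than unanimous ones.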
Open Run: Any external data sources, pre-trained models, or
additional resources may be used. Participants must document all
external resources in their working-notes paper.
Task 2: “Proposition Generation” — For each tweet classified as
containing an implicit argument, generate the text of the missing
proposition. Task 2 requires prior completion of Task 1, as the
predicted label is part of the input.
Input: Tweet text + Task 1 label (implicit_premise or
implicit_conclusion).
Output: A single natural-language sentence expressing the missing
proposition.
The generated sentence should be concise and declarative — it should
make the unstated assumption or conclusion fully explicit, as if
completing the argument.
Example:
If the tweet contains the following text: “Deterring the plans of
illegal people smugglers is essential to controlled immigration. We
should support all plans to stop them.”
The full argument can be reconstructed as:
Premise 1 (implicit — to generate): Controlled immigration is
desirable.
Premise 2 (explicit): Deterring the plans of illegal people
smugglers is essential to controlled immigration.
Conclusion (explicit): We should support all plans to stop them.
In this example, the system should output: “Controlled immigration is
desirable.”
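A rough sanity check on Task 2 output (a hypothetical helper, not an official evaluation script) could verify that the generated proposition is a single declarative sentence:

```python
def is_single_declarative(sentence: str) -> bool:
    """Crude heuristic: one sentence, ends with a period, not a
    question. Rejects abbreviations like "U.S." as a side effect, so
    real validation would need something more careful."""
    text = sentence.strip()
    return (
        text.endswith(".")
        and text.count(".") == 1
        and "?" not in text
    )

print(is_single_declarative("Controlled immigration is desirable."))  # True
print(is_single_declarative("Is immigration desirable?"))             # False
```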
Participating teams may optionally write short working-notes papers,
which are published in the workshop proceedings. We welcome two types
of papers: first, conventional benchmarking papers, which describe the
methods that teams use to address the task (enthymeme detection and
implicit proposition generation) and analyze the results across the
constrained and open runs; and second, “Quest for Insight” papers,
which address a research question aimed at gaining deeper
understanding of implicit argumentation, but do not necessarily
present complete task results. Example questions for “Quest for
Insight” papers include: How do different annotators interpret
implicit premises? What linguistic features best signal the presence
of enthymemes?
Participants are invited to build NLP models — or any other approach
(rule-based methods and explicit structural modeling are highly
encouraged!) — to:
1. Detect the presence of implicit content in short texts
2. Generate a single natural-language sentence expressing the missing
proposition, when implicit content is present
The data has been annotated by multiple independent annotators (five
for the train and dev sets, three for the test set) in order to
capture and study variation in semantic interpretation. This resource
is unprecedented, and we believe it will enable new insights in the
fields of rhetoric and argumentation theory, as well as open new
avenues for understanding how misinformation spreads through unstated
claims and implicit stances in online communities.
Task info & registration:
https://multimediaeval.github.io/editions/2026/tasks/enthymeme/
Explore the dataset, annotation guidelines & argumentation framework:
https://turfutoday.com/enthymemes/
Make sure you register to get access to the data. It's simple and
fast.
Important Dates:
- Registration: Open now!
- First sample data release: March 2026
- Run submission deadline: 1 May 2026
- Workshop: 15–16 June 2026 — Amsterdam + Online
We welcome interdisciplinary teams from NLP, computational
linguistics, argumentation theory, philosophy, rhetoric, communication
studies, and political science.
------------------------------------------------------------------------------
********************** LINGUIST List Support ***********************
Please consider donating to the Linguist List, a U.S. 501(c)(3) not for profit organization:
https://www.paypal.com/donate/?hosted_button_id=87C2AXTVC4PP8
LINGUIST List is supported by the following publishers:
Bloomsbury Publishing http://www.bloomsbury.com/uk/
Cambridge University Press http://www.cambridge.org/linguistics
Cascadilla Press http://www.cascadilla.com/
De Gruyter Brill https://www.degruyterbrill.com/?changeLang=en
Edinburgh University Press http://www.edinburghuniversitypress.com
European Language Resources Association (ELRA) http://www.elra.info
John Benjamins http://www.benjamins.com/
Language Science Press http://langsci-press.org
Lincom GmbH https://lincom-shop.eu/
MIT Press http://mitpress.mit.edu/
Multilingual Matters http://www.multilingual-matters.com/
Narr Francke Attempto Verlag GmbH + Co. KG http://www.narr.de/
Netherlands Graduate School of Linguistics / Landelijke (LOT) http://www.lotpublications.nl/
Peter Lang AG http://www.peterlang.com
SIL International Publications http://www.sil.org/resources/publications