35.1549, Calls: GermEval2024 Shared Task: GerMS-Detect -- Sexism Detection and Annotator Disagreement Prediction in German Online News Fora
LINGUIST List: Vol-35-1549. Sat May 18 2024. ISSN: 1069-4875.
Subject: 35.1549, Calls: GermEval2024 Shared Task: GerMS-Detect -- Sexism Detection and Annotator Disagreement Prediction in German Online News Fora
Moderators: Malgorzata E. Cavar, Francis Tyers (linguist at linguistlist.org)
Managing Editor: Justin Fuller
Team: Helen Aristar-Dry, Steven Franks, Everett Green, Daniel Swanson, Maria Lucero Guillen Puon, Zackary Leech, Lynzie Coburn, Natasha Singh, Erin Steitz
Jobs: jobs at linguistlist.org | Conferences: callconf at linguistlist.org | Pubs: pubs at linguistlist.org
Homepage: http://linguistlist.org
Please support the LL editors and operation with a donation at:
https://funddrive.linguistlist.org/donate/
Editor for this issue: Helen Aristar-Dry <hdry at linguistlist.org>
LINGUIST List is hosted by Indiana University College of Arts and Sciences.
================================================================
Date: 16-May-2024
From: Brigitte Krenn [brigitte.krenn at ofai.at]
Subject: GermEval2024 Shared Task: GerMS-Detect -- Sexism Detection and Annotator Disagreement Prediction in German Online News Fora
Full Title: GermEval2024 Shared Task: GerMS-Detect -- Sexism Detection
in German Online News Fora
Short Title: GerMS-Detect
Date: 10-Sep-2024 - 10-Sep-2024
Location: Vienna, Austria
Contact Person: Brigitte Krenn
Meeting Email: brigitte.krenn at ofai.at
Web Site: https://ofai.github.io/GermEval2024-GerMS/
Linguistic Field(s): Computational Linguistics
Subject Language(s): German (deu)
Call Deadline: 25-Jun-2024
Meeting Description:
GerMS-Detect -- Sexism Detection in German Online News Fora, co-located
with KONVENS 2024 in Vienna, is a GermEval2024 shared task that focuses
on sexism/misogyny detection in German online news fora. Since
sexism/misogyny in news forum comments often appears in a subtle form
that avoids outright offensiveness or curse words, there are many texts
on which annotators disagree about whether the text should be regarded
as sexist, or about the degree of sexism that should be assigned to it.
The shared task therefore provides an opportunity to learn how to deal
with diverging opinions among annotators and how to train models on
such a corpus, so that the models can potentially also indicate how
divergent opinions on a new text are likely to be.
The shared task is divided into two subtasks: in Subtask 1,
participants predict, in different ways, a binary label indicating the
presence or absence of sexism; in Subtask 2, they predict binary soft
labels derived from the annotators' differing opinions about a text,
as well as the distribution of the original gradings assigned by the
annotators. Each subtask is organized into two tracks: a closed track,
in which models may only be trained on the provided training set, and
an open track, in which participants may use their own training data,
pretrained language models, and any other approach of interest.
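As a rough illustration of these prediction targets, the following
Python sketch derives a binary soft label and a grading distribution
from per-text annotator gradings. The grading scale, the threshold for
"sexist", and the function names are illustrative assumptions, not the
official GerMS-Detect data format or evaluation protocol (see the
competition website for the actual specification).

from collections import Counter

# Assumed ordinal grading scale: 0 = not sexist, 4 = strongly sexist (illustrative only).
GRADES = [0, 1, 2, 3, 4]

def soft_binary_label(gradings):
    # Fraction of annotators who considered the text sexist (assumed: any grade > 0).
    return sum(g > 0 for g in gradings) / len(gradings)

def grading_distribution(gradings):
    # Empirical distribution over the assumed grading scale.
    counts = Counter(gradings)
    return {g: counts.get(g, 0) / len(gradings) for g in GRADES}

votes = [0, 0, 1, 2, 1]                 # five annotators disagree on a subtle comment
print(soft_binary_label(votes))         # 0.6 -> a majority, but not all, judge it sexist
print(grading_distribution(votes))      # {0: 0.4, 1: 0.4, 2: 0.2, 3: 0.0, 4: 0.0}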
GermEval2024 Shared Task: GerMS-Detect -- Sexism Detection and
Annotator Disagreement Prediction in German Online News Fora
2nd CALL FOR PARTICIPATION
We would like to invite you to the GermEval shared task GerMS-Detect
on Sexism Detection in German Online News Fora, co-located with
KONVENS 2024.
Competition website: https://ofai.github.io/GermEval2024-GerMS/
Important Dates:
Development phase: May 1 - June 5, 2024 (ongoing)
Competition phase: June 7 - June 25, 2024
Paper submission due: July 1, 2024
Camera ready due: July 20, 2024
Shared Task @KONVENS: 10 September, 2024
Task description
This shared task is not just about detecting sexism/misogyny in
comments posted (mostly in German) to the comment section of an
Austrian online newspaper: many of the texts to be classified contain
ambiguous language, express misogyny or sexism in very subtle ways, or
lack important context. For these reasons, there can be considerable
disagreement among annotators about the appropriate label. In many
cases there is no single correct label. For this reason, the shared
task is not only about correctly predicting a single label chosen from
those assigned by human annotators, but also about models that can
predict the level of disagreement, the range of labels assigned by
annotators, or the distribution of labels to expect from a specific
group of annotators.
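One simple way to quantify such a "level of disagreement" is the
normalized entropy of the observed label distribution for a text. The
sketch below uses this as an assumed, illustrative measure; it is not
the official evaluation metric of the task.

import math
from collections import Counter

def disagreement(labels):
    # Normalized entropy of the observed label distribution, in [0, 1]:
    # 0 = all annotators agree, 1 = annotators are split as evenly as
    # possible across the labels they used. Illustrative measure only.
    counts = Counter(labels)
    if len(counts) < 2:
        return 0.0
    total = len(labels)
    probs = [c / total for c in counts.values()]
    entropy = -sum(p * math.log2(p) for p in probs)
    return entropy / math.log2(len(counts))

print(disagreement(["sexist", "sexist", "sexist"]))          # 0.0
print(disagreement(["sexist", "not_sexist", "not_sexist"]))  # ~0.92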
For details, see the competition website.
Organizers
The task is organized by the Austrian Research Institute for
Artificial Intelligence (OFAI).
Organizing team:
Brigitte Krenn (brigitte.krenn (AT) ofai.at)
Johann Petrak (johann.petrak (AT) ofai.at)
Stephanie Gross (stephanie.gross (AT) ofai.at)
------------------------------------------------------------------------------
Please consider donating to the Linguist List https://give.myiu.org/iu-bloomington/I320011968.html
LINGUIST List is supported by the following publishers:
Cambridge University Press http://www.cambridge.org/linguistics
De Gruyter Mouton https://cloud.newsletter.degruyter.com/mouton
Equinox Publishing Ltd http://www.equinoxpub.com/
John Benjamins http://www.benjamins.com/
Lincom GmbH https://lincom-shop.eu/
Multilingual Matters http://www.multilingual-matters.com/
Narr Francke Attempto Verlag GmbH + Co. KG http://www.narr.de/
Wiley http://www.wiley.com
----------------------------------------------------------
LINGUIST List: Vol-35-1549
----------------------------------------------------------