LINGUIST List: Vol-36-2091. Tue Jul 08 2025. ISSN: 1069-4875.
Subject: 36.2091, Confs: More Than Just Noise: Detecting Patterns in Acceptability Judgment Data (DGfS 2026 Workshop) (Germany)
Moderator: Steven Moran (linguist at linguistlist.org)
Managing Editor: Valeriia Vyshnevetska
Team: Helen Aristar-Dry, Mara Baccaro, Daniel Swanson
Jobs: jobs at linguistlist.org | Conferences: callconf at linguistlist.org | Pubs: pubs at linguistlist.org
Homepage: http://linguistlist.org
Editor for this issue: Valeriia Vyshnevetska <valeriia at linguistlist.org>
================================================================
Date: 07-Jul-2025
From: Sarah Zobel [sarah.zobel at germanistik.uni-hannover.de]
Subject: More Than Just Noise: Detecting Patterns in Acceptability Judgment Data (DGfS 2026 Workshop)
More Than Just Noise: Detecting Patterns in Acceptability Judgment
Data (DGfS 2026 Workshop)
Date: 25-Feb-2026 - 27-Feb-2026
Location: Trier, Germany
Contact Email: dgfs26.ag7.ajt at gmail.com
Linguistic Field(s): General Linguistics; Neurolinguistics;
Psycholinguistics
Submission Deadline: 31-Aug-2025
AG7 of the Annual Meeting of the German Linguistic Society:
https://www.uni-trier.de/universitaet/fachbereiche-faecher/fachbereich-ii/forschung-und-zentren/dgfs2026
Organized by: Jana Häussler (Uni Bielefeld), Thomas Weskott (Uni
Göttingen), Sarah Zobel (Uni Hannover / HU Berlin)
Linguistic acceptability is one of the major tools to detect patterns
in language: our intuitions about whether a sentence is "good" or
"bad" are a source of evidence that is readily accessible and easy to
communicate. Since the advent of more rigorous measurement of
linguistic acceptability in the 1990s (Cowart 1997, Schütze 1996), the
acceptability judgment task (AJT) has been used in controlled
experiments employing factorial designs to collect judgments from
samples of multiple participants for samples of multiple items;
statistical regression methods are used to separate information
about the underlying linguistic patterns from the "noise", i.e., the
variance generated by repeated measurements. Subdisciplines that are
traditionally more theoretically inclined, like syntax, semantics, and
pragmatics, have hugely profited from the empirical progress that this
experimental approach has engendered, as witnessed by the publication
of handbooks such as Cummins and Katsos (eds., 2019) and Goodall (ed.,
2021).
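For concreteness, here is a minimal sketch of this standard practice
(Python with simulated data; the design, effect sizes, and variable
names below are invented for illustration, and a full analysis would
also include crossed by-item random effects, as is standard with,
e.g., lme4 in R):

  import numpy as np
  import pandas as pd
  import statsmodels.formula.api as smf

  rng = np.random.default_rng(1)
  n_subj, n_item = 40, 16
  subj_shift = rng.normal(0, 0.5, n_subj)   # by-participant "noise"
  item_shift = rng.normal(0, 0.3, n_item)   # by-item "noise"

  rows = []
  for s in range(n_subj):
      for i in range(n_item):
          a, b = i % 2, (i // 2) % 2        # 2x2 factorial conditions
          rating = (4 + 1.0*a - 0.8*b - 0.6*a*b      # the "pattern"
                    + subj_shift[s] + item_shift[i]
                    + rng.normal(0, 1))              # residual noise
          rows.append(dict(subj=s, item=i, A=a, B=b, rating=rating))
  df = pd.DataFrame(rows)

  # Mixed-effects regression: the fixed effects estimate the
  # linguistic pattern; the random-effect variances absorb the "noise".
  m = smf.mixedlm("rating ~ A * B", df, groups="subj").fit()
  print(m.summary())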
While the usefulness of AJT data to inform us about linguistic
patterns is undebatable, we think that the true potential of the
method as a source of insights about language patterns is
underestimated. This, we argue, is due to an erroneous assumption
about the AJT that underlies current experimental practice: the variance
generated by interindividual differences between participants (think,
e.g., of dialect, literacy, proficiency (including L2), or age), as
well as the variance that comes with testing multiple items, and the
possible interactions of these two sources of variance, are treated as
"random"; and any possible information these variances might
contain---beyond parameter estimation---is discarded by the
statistical procedures usually employed. This practice discards
information about the complex way in
which humans react to linguistic stimuli---what Barr (2018) has called
"encounters", and which he proposes to consider as the unit of
generalization, rather than populations of speakers and/or items. This
loss of information is mostly due to the way in which studies
employing the AJT are designed: they are usually focused on the effect
of interest, disregarding how it might be related to other systematic
properties buried in the "random units". A further loss of information
is due to the schematic use of statistical procedures like mixed model
regression, which usually focuses on the difference between (a set of)
means, while disregarding the potentially informative properties of
the underlying distributions (cf. Kneib et al. 2023).
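To see what going beyond mere mean differences can buy, consider two
hypothetical conditions with identical mean ratings whose judgments
are nonetheless distributed very differently (a minimal sketch with
invented numbers, not an analysis from the literature):

  import numpy as np
  from scipy import stats

  rng = np.random.default_rng(2)
  # Condition A: judgments cluster around the midpoint of the scale.
  a = rng.normal(4.0, 0.5, 200)
  # Condition B: same mean, but participants split into two camps.
  b = np.concatenate([rng.normal(3.0, 0.5, 100),
                      rng.normal(5.0, 0.5, 100)])

  print(stats.ttest_ind(a, b))  # mean-focused test: typically no effect
  print(stats.levene(a, b))     # dispersion clearly differs

A distributional regression in the sense of Kneib et al. (2023) would
model such scale (and shape) parameters as functions of the
predictors, rather than the mean alone.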
This workshop aims at addressing these two problems by discussing
contributions that propose to go beyond the current methodological
standard practice. We invite:
- contributions attempting to address the systematic effects of
participant-level properties in acceptability judgments; examples are
working memory, dialect/sociolect, age, proficiency, literacy; more
generally, contributions investigating any property that
potentially affects the AJT in a systematic fashion are welcome
- contributions addressing item-level properties like complexity,
register, markedness/frequency, context sensitivity, etc.
- contributions that investigate the effect of item-level properties
on the behavior of acceptability measures, like different types of
benchmarking techniques, satiation effects, "squishing" and
"stretching" of scales, etc.
- contributions that employ novel statistical methods---on actual or
modelled data, and not necessarily limited to AJT data---that seek to
establish effects over and above mere differences between means, such
as distributional regression (Bayesian or "classical");
- contributions that relate data from the AJT systematically to other
dependent variables, like truth value/felicity judgments, reading
time/eye tracking, or EEG data.
By bringing together researchers working on these different aspects of
the problem, we hope to initiate a discussion within the workshop, and
possibly beyond that, of how our use of acceptability measures in the
detection of linguistic patterns can be improved.
Submission Details:
Abstract length: a single-page abstract of max. 350 words, plus up to
two pages containing details of experimental design, materials, and
statistical analyses (incl. graphs); note that the 350-word abstract
will be published in the conference booklet and thus should be
comprehensible without the additional materials; the abstracts with
the additional two pages will be published on OSF. Please note that
each participant may appear as the first author and as presenter on
only one submission.
Deadline for submission: Sunday, 31 August 2025, 23:59
Please send your abstract as an anonymous PDF to
dgfs26.ag7.ajt at gmail.com
Please add the names and affiliations of all authors in the body of
the email.
Important Dates:
- Call for papers opens: 7 July 2025
- Submission deadline: 31 August 2025
- Notification of acceptance: 15 September 2025
- Workshop: 25-27 February 2026
References:
Barr, Dale J. (2018). Generalizing over encounters: Statistical and
theoretical considerations. In: Rueschemeyer, Shirley-Ann & M. Gareth
Gaskell (eds.), The Oxford Handbook of Psycholinguistics. Oxford, UK:
OUP.
Cowart, Wayne (1997). Experimental Syntax: Applying Objective Methods
to Sentence Judgments. Thousand Oaks: SAGE.
Cummins, Chris, & Napoleon Katsos (eds., 2019). The Oxford Handbook
of Experimental Semantics and Pragmatics. Oxford, UK: OUP.
Goodall, Grant (ed., 2021). The Cambridge Handbook of Experimental
Syntax. Cambridge, UK: CUP.
Kneib, Thomas, Alexander Silbersdorff, & Benjamin Säfken (2023). Rage
Against the Mean – A Review of Distributional Regression Approaches.
Econometrics and Statistics 26, 99-123.
Schütze, Carson (2016). The empirical base of linguistics:
Grammaticality judgments and linguistic methodology. Berlin: Language
Science Press. (First published 1996, Chicago: University of Chicago
Press.)
------------------------------------------------------------------------------
********************** LINGUIST List Support ***********************
Please consider donating to the Linguist List to support the student editors:
https://www.paypal.com/donate/?hosted_button_id=87C2AXTVC4PP8
LINGUIST List is supported by the following publishers:
Cascadilla Press http://www.cascadilla.com/
Language Science Press http://langsci-press.org
MIT Press http://mitpress.mit.edu/
Multilingual Matters http://www.multilingual-matters.com/
----------------------------------------------------------
LINGUIST List: Vol-36-2091
----------------------------------------------------------