LINGUIST List: Vol-36-3161. Mon Oct 20 2025. ISSN: 1069-4875.
Subject: 36.3161, Calls: Workshop at SLE 2026: Large Language Models for Linguistics: Applications and Implications (Germany)
Moderator: Steven Moran (linguist at linguistlist.org)
Managing Editor: Valeriia Vyshnevetska
Team: Helen Aristar-Dry, Mara Baccaro, Daniel Swanson
Jobs: jobs at linguistlist.org | Conferences: callconf at linguistlist.org | Pubs: pubs at linguistlist.org
Homepage: http://linguistlist.org
Editor for this issue: Valeriia Vyshnevetska <valeriia at linguistlist.org>
================================================================
Date: 20-Oct-2025
From: Nicole Katzir [nicole.katzir at gmail.com]
Subject: Workshop at SLE 2026: Large Language Models for Linguistics: Applications and Implications
Full Title: Workshop at SLE 2026: Large Language Models for
Linguistics: Applications and Implications
Short Title: SLE 2026
Date: 26-Aug-2026 - 28-Aug-2026
Location: Osnabrück, Germany
Contact Person: Nicole Katzir
Meeting Email: nicole.katzir at gmail.com
Web Site: https://societaslinguistica.eu/sle2026/list-of-workshops/
Linguistic Field(s): General Linguistics; Linguistic Theories
Call Deadline: 10-Nov-2025
2nd Call for Papers:
Abstracts (max. 300 words, excluding references) should be sent to
Natalia Levshina (natalia.levshina at ru.nl) and Nicole Katzir
(nicole.katzir at gmail.com) by November 10th.
Large Language Models (LLMs) are models with billions of parameters,
trained on vast amounts of text data to learn statistical patterns in
language, and able to generate, process, and predict human(-like)
text. As discussions at the recent SLE meeting and other venues
demonstrate, the rise of LLMs has major consequences for our field.
The apparent success of LLMs in producing output that can be
difficult to distinguish from human language, as well as their
performance on different linguistic tasks, has sparked intense debate
about what, if anything, can be inferred from this for linguistic
theory. Others have focused on the potential of LLMs as tools for
data annotation, or on describing LLM-generated texts as a special
“lect”. Alongside these scientific debates and studies, the ethical
implications of using LLMs in research remain unresolved.
The goal of this workshop is to bring together linguists, cognitive
scientists, computational scientists, and other experts to discuss how
LLMs intersect with linguistics. The primary aims of this theme
session are as follows:
- Contribute to a better understanding of the relevance of LLMs for
the development of linguistic theories and methods;
- Detect the linguistic “fingerprints” of different LLMs;
- Appraise the impact of human-LLM interaction on human language and
communication;
- Formulate good practices of using (some) LLMs for linguistic
purposes;
- Analyse the linguistic framing of LLMs and other AI technologies
and give recommendations for speaking about them in scientific
discourse and media.
We invite contributions from any theoretical tradition; they may
focus on specific subfields or take a broader perspective on large
theoretical questions. Below are some of the questions we would like
to address, but contributions are not limited to them.
- What are the consequences of LLMs for linguistic theory? The
impressive linguistic performance of LLMs has led some scholars to use
it as an argument against Chomskyan generative grammar (e.g.,
Piantadosi 2024) and in favour of usage-based connectionist models
(Goldberg 2023), whereas others (Chomsky et al. 2023; Kodner et al.
2023) have claimed that the fact that LLMs can approximate human
language does not tell us anything valuable about human language
itself. At the same time, it is difficult to deny that LLMs are able
to “acquire” nontrivial syntactic generalizations, such as filler-gap
dependencies (Suijkerbuijk et al. 2023) and recursive embedding
(Futrell et al. 2019), which cannot be explained by simple heuristics
or co-occurrence patterns in the input data (Futrell & Mahowald
2025). This raises the question: what are the consequences of these
successes for our understanding of how human language is acquired,
represented, and processed (cf. Contreras Kallens et al. 2023)?
- How should we speak about LLMs? The linguistic framing of AI – for
example, as a tool or a companion – guides social attitudes and
behaviours towards these technologies (Petricini 2025). One often
hears that LLMs “understand”, “learn”, “think”, “reason” or
“hallucinate”. Not only is such anthropomorphic language erroneous,
but it can also lead to the exploitation of users’ emotional
dependence on AI, misplaced trust, reduced accountability for Big
Tech, and other negative consequences (DeVrio et al. 2025; Placani
2024). It falls to
linguists to analyze and challenge such language use, especially in
scientific communication.
- Can we use LLMs to facilitate linguistic research, and how? While
some academics dismiss the use of AI technologies entirely as
ethically unacceptable (due to copyright violations, algorithmic
biases, environmental impact, exploitation, and other valid concerns;
cf. Guest et al. 2025), how can we employ at least some types of LLMs
as annotation tools or sources of data in a reliable and responsible
way? The potential of some models has been explored in
psycholinguistics (e.g., Wilcox et al. 2023), pragmatics and discourse
studies (Chen et al. 2024; Yu et al. 2024), syntax (Ambridge &
Blything 2024; Dunn & Eida 2025), corpus-based language comparison
(Koplenig et al. 2025), diachronic semantics (Levshina et al. 2024)
and other subfields, but a more systematic and critical discussion of
such uses is needed.
- What are the distinctive features of LLM output? State-of-the-art
LLMs have essentially passed the Turing test, their output being
indistinguishable from human language in settings such as textual
conversations (Jones & Bergen 2025) and essay writing (Herbold et al.
2023). Users’ flawed heuristics about human language can even be
exploited to make LLMs sound more human than humans (Jakesch et al.
2022). However, some corpus-based studies have managed to identify
“fingerprints” of several models, especially instruction-tuned ones
(Reinhart et al. 2024). ChatGPT-generated texts also show more limited
register variation (Dentella et al. 2025).
- What is the role of LLMs as a driving factor of language change?
Although LLMs have been known to the general public for a relatively
short time, there are studies showing that they already have some
impact on human language. For example, words like “delve” and
“comprehend” have been on the rise (Yakura et al. 2024). How can we
measure and
evaluate this impact, and what should we do about it?
- How to solve the data bottleneck? LLMs require huge amounts of
training data, which is only available for relatively few major
languages. As a result, while LLMs tend to excel in English, they
struggle in low-resource languages (Li et al. 2024;
Rahman et al. 2024). The same applies to non-standard linguistic
varieties, such as dialects and sociolects (Smith et al. 2025).
Consequently, the resources available to the speakers of these
varieties, as well as for researchers working on them, are limited.
Thus, it has been argued that LLMs reflect standard language ideology,
which posits hierarchies according to which some language varieties
are “better” and more “correct” than others (Smith et al. 2025). How
can this language representation bias be addressed?
Please send your provisional abstract (max. 300 words, excluding
references) by November 10 to Natalia Levshina
(natalia.levshina at ru.nl) and Nicole Katzir (nicole.katzir at gmail.com).
If the theme session is accepted, you will be asked to submit a full
abstract by January 15. See more details about the procedure on the
conference webpage:
https://societaslinguistica.eu/sle2026/first-call-for-papers/
Full CfP with bibliography: https://tinyurl.com/LLMsForLinguistics
------------------------------------------------------------------------------
********************** LINGUIST List Support ***********************
Please consider donating to the Linguist List, a U.S. 501(c)(3) not for profit organization:
https://www.paypal.com/donate/?hosted_button_id=87C2AXTVC4PP8
LINGUIST List is supported by the following publishers:
Bloomsbury Publishing http://www.bloomsbury.com/uk/
Cambridge University Press http://www.cambridge.org/linguistics
Cascadilla Press http://www.cascadilla.com/
De Gruyter Brill https://www.degruyterbrill.com/?changeLang=en
Edinburgh University Press http://www.edinburghuniversitypress.com
John Benjamins http://www.benjamins.com/
Language Science Press http://langsci-press.org
Lincom GmbH https://lincom-shop.eu/
MIT Press http://mitpress.mit.edu/
Multilingual Matters http://www.multilingual-matters.com/
Narr Francke Attempto Verlag GmbH + Co. KG http://www.narr.de/
Netherlands Graduate School of Linguistics / Landelijke (LOT) http://www.lotpublications.nl/
Peter Lang AG http://www.peterlang.com
----------------------------------------------------------
LINGUIST List: Vol-36-3161
----------------------------------------------------------