

LINGUIST List: Vol-36-3834. Fri Dec 12 2025. ISSN: 1069 - 4875.

Subject: 36.3834, Confs: Pre-conference Workshop at ICAME47: Corpus and Computational Linguistics Meet Fake News, Mis- and Disinformation and Large Language Models (Germany)

Moderator: Steven Moran (linguist at linguistlist.org)
Managing Editor: Valeriia Vyshnevetska
Team: Helen Aristar-Dry, Mara Baccaro, Daniel Swanson
Jobs: jobs at linguistlist.org | Conferences: callconf at linguistlist.org | Pubs: pubs at linguistlist.org

Homepage: http://linguistlist.org

Editor for this issue: Valeriia Vyshnevetska <valeriia at linguistlist.org>

================================================================


Date: 12-Dec-2025
From: Dr. Silje Susanne Alvestad [s.s.alvestad at ilos.uio.no]
Subject: Pre-conference Workshop at ICAME47: Corpus and Computational Linguistics Meet Fake News, Mis- and Disinformation and Large Language Models


Pre-conference Workshop at ICAME47: Corpus and Computational
Linguistics Meet Fake News, Mis- and Disinformation and Large Language
Models

Date: 26-May-2026 - 26-May-2026
Location: Koblenz, Germany
Contact: Silje Susanne Alvestad
Contact Email: s.s.alvestad at ilos.uio.no

Linguistic Field(s): Applied Linguistics; Computational Linguistics;
Discourse Analysis; Forensic Linguistics; Text/Corpus Linguistics

Submission Deadline: 30-Dec-2025

This workshop will take a corpus- and computational-linguistics
perspective on fake news and related phenomena, where fake news is
defined along the axes of veracity and honesty, giving rise to three
types: 1) false but honest news, such as errors, which corresponds to
misinformation; 2) false and dishonest news, such as lies; and 3) true
but dishonest news, in which crucial pieces of information may be
omitted (so as to fit a certain narrative, as seen, arguably, in
propaganda), or in which true information may be taken out of context.
Fake news types 2) and 3) involve an intention to deceive and so
overlap with typical definitions of disinformation (see Grieve &
Woodfield, 2023).
Fake news and related information disorders can be harmful to our
societies. Specifically, when we change our beliefs and subsequent
behaviour based on false or misleading information, it can harm our
health and lives, sow distrust (Funk et al., 2023), and disrupt
election processes (Jamieson, 2018). Today, the societal challenge
posed by information disorders is amplified by rapid developments in
generative AI, exemplified by Large Language Models (LLMs), with the
launch of OpenAI’s ChatGPT in November 2022 as a significant
milestone. The output of LLMs depends on their training data, which
can contain inaccuracies and biases. As a result, these models may
unintentionally spread mis- or disinformation (Brandtzæg et al.,
2023). They can also produce “hallucinations”—convincing but false
statements (Spitale et al., 2023)—or partly incorrect content due to
unreliable sources (Chen et al., 2023). This blend of fabricated and
biased information makes it difficult to ensure the accuracy of online
content (Buchanan et al., 2021). Moreover, LLMs hold the potential to
generate misleading or false information at scale and at a quality
that makes it indistinguishable from similar content authored by
humans. Controlled experiments show that LLM-generated messages can
change policy attitudes, at times matching or surpassing human levels
of persuasiveness (Bai et al., 2025; Salvi et al., 2025). Research has
shown that people find it more difficult to identify disinformation
produced by AI than similar content produced by humans (Zhou et al.,
2023). In simulated news recommendation systems, researchers have
identified a phenomenon referred to as “truth decay”, by which genuine
news increasingly falls behind LLM-generated mis- and disinformation
in visibility and ranking. This shift happens because LLM-generated
content typically shows lower perplexity, making it appear more fluent
and familiar. As a result, such content often receives higher
recommendation scores and greater visibility (Hu et al., 2025). This
dynamic has serious implications for the spread of mis- and
disinformation, since increased exposure can boost perceived
credibility through the illusory truth effect. All of this highlights
the need for effective identification and verification systems. We
believe that corpus and computational linguists in particular should
recognize the urgency of the moment, and we hereby invite them to act.
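
For readers who would like a concrete handle on the perplexity measure
invoked above, the following minimal Python sketch scores a text with
an off-the-shelf causal language model; lower scores mean the model
finds the text more fluent and familiar. The Hugging Face transformers
library, the gpt2 checkpoint, and the example sentences are
assumptions chosen purely for illustration.

import math

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Illustrative model choice; any causal LM checkpoint would do.
tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

def perplexity(text: str) -> float:
    """Exp of the mean negative log-likelihood assigned to the text."""
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        # Passing the input ids as labels makes the model return the
        # mean cross-entropy loss over the sequence.
        loss = model(ids, labels=ids).loss
    return math.exp(loss.item())

# Fluent text typically scores lower than scrambled or disfluent text.
print(perplexity("The central bank raised interest rates today."))
print(perplexity("Bank central the today rates interest raised."))
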
Against this background, our workshop will shed light on the rising
societal challenge posed by information disorders from a corpus- and
computational-linguistics perspective. We welcome abstracts from both
branches of linguistics that examine LLM-generated as well as
human-authored fake news and other types of misleading or false
information in English, in comparison or separately, and similarly for
various types of LLMs. The questions we ask include, but are not
limited to, the following:
- What are the linguistic features of such information disorders?
- Can these disorders be identified on the basis of such features?
- Have these features changed over time, and are they still changing?
- What are the capabilities and limitations of various LLMs when it
  comes to producing and disseminating misleading information?
- Do LLMs have any fingerprint in the context of mis- and
  disinformation?
- How can best practice be developed for linguistic investigations of
  LLM output?
Abstracts can address theoretical as well as methodological
questions, take a comparative or case-focused approach, and examine
human-authored or LLM-generated text, or both.
Abstracts of no more than 400 words (excluding references), following
the template available at https://wp.uni-koblenz.de/icame47/cfp/,
should be sent to the workshop organisers (Silje Susanne Alvestad,
s.s.alvestad at ilos.uio.no, and Nele Poldvere,
nele.poldvere at ilos.uio.no) by 30 December 2025, with notification
of outcome on 7 January 2026. We kindly ask authors who intend to
submit an abstract to notify the workshop organisers as soon as
possible, for planning purposes. Authors of accepted abstracts will be
invited to present their research at the pre-conference workshop at
ICAME47 on Tuesday, 26 May 2026.
References:
Bai, H., Voelkel, J., Muldowney, S., Eichstaedt, J., & Willer, R.
(2025). LLM-generated messages can persuade humans on policy issues.
Nature Communications, 16, Article 61345.
https://doi.org/10.1038/s41467-025-61345-5
Brandtzaeg, P. B. (2023). “Good” and “Bad” Machine Agency in the
Context of Human-AI Communication: The Case of ChatGPT. In
International Conference on Human-Computer Interaction (pp. 3–23).
Springer Nature Switzerland.
Buchanan, B., Lohn, A., Musser, M., & Sedova, K. (2021). Truth, lies
and automation: How language models can change disinformation. Center
for Security and Emerging Technology, May 2021.
https://doi.org/10.51593/2021CA003
Chen, C. & Shu, K. (2023). Combating misinformation in the age of
LLMs: Opportunities and challenges.
https://doi.org/10.48550/arXiv.2311.05656
Funk, A., Shahbaz, A., & Vesteinsson, K. (2023). Freedom on the Net
2023. The repressive power of artificial intelligence. Freedom House
report.
https://freedomhouse.org/sites/default/files/2024-10/FOTN2023Final24.pdf
Grieve, J., & Woodfield, H. (2023). The Language of Fake News.
Cambridge Elements in Forensic Linguistics. Cambridge University
Press.
Hu, B., Sheng, Q., Cao, J., Li, Y., & Wang, D. (2025). LLM-generated
fake news induces truth decay in news ecosystem: A case study on
neural news recommendation. In Proceedings of the 48th International
ACM SIGIR Conference on Research and Development in Information
Retrieval (pp. 435–445). Association for Computing Machinery.
https://doi.org/10.1145/3726302.3730027
Jamieson, K. H. (2018). Cyberwar: How Russian hackers and trolls
helped elect a president. What we don’t, can’t, and do know. Oxford
University Press.
Salvi, F., Horta Ribeiro, M., Gallotti, R., & West, R. (2025). On the
conversational persuasiveness of GPT-4. Nature Human Behaviour, 9(8),
1645–1653. https://doi.org/10.1038/s41562-025-02194-6
Spitale, G., Biller-Andorno, N., & Germani, F. (2023). AI model GPT-3
(dis)informs us better than humans. Science Advances, 9(26), eadh1850.
https://doi.org/10.1126/sciadv.adh1850
Zhou, J., Zhang, Y., Luo, Q., Parker, A. G., & Choudhury, M. D.
(2023). Synthetic Lies: Understanding AI-Generated Misinformation and
Evaluating Algorithmic and Human Solutions. In CHI ’23: Proceedings of
the 2023 CHI Conference on Human Factors in Computing Systems.
https://doi.org/10.1145/3544548.3581318



------------------------------------------------------------------------------

********************** LINGUIST List Support ***********************
Please consider donating to the Linguist List, a U.S. 501(c)(3) not for profit organization:

https://www.paypal.com/donate/?hosted_button_id=87C2AXTVC4PP8

LINGUIST List is supported by the following publishers:

Bloomsbury Publishing http://www.bloomsbury.com/uk/

Cambridge University Press http://www.cambridge.org/linguistics

Cascadilla Press http://www.cascadilla.com/

De Gruyter Brill https://www.degruyterbrill.com/?changeLang=en

Edinburgh University Press http://www.edinburghuniversitypress.com

John Benjamins http://www.benjamins.com/

Language Science Press http://langsci-press.org

Lincom GmbH https://lincom-shop.eu/

MIT Press http://mitpress.mit.edu/

Multilingual Matters http://www.multilingual-matters.com/

Narr Francke Attempto Verlag GmbH + Co. KG http://www.narr.de/

Netherlands Graduate School of Linguistics / Landelijke (LOT) http://www.lotpublications.nl/

Peter Lang AG http://www.peterlang.com


----------------------------------------------------------
LINGUIST List: Vol-36-3834
----------------------------------------------------------


