LINGUIST List: Vol-36-1371. Sat Apr 26 2025. ISSN: 1069 - 4875.

Subject: 36.1371, Calls: AI-Linguistica - Special Issue: “The Notion of Authenticity in Human/AI Hybrid Productions.” (Jrnl)

Moderator: Steven Moran (linguist at linguistlist.org)
Managing Editor: Justin Fuller
Team: Helen Aristar-Dry, Steven Franks, Joel Jenkins, Daniel Swanson, Erin Steitz
Jobs: jobs at linguistlist.org | Conferences: callconf at linguistlist.org | Pubs: pubs at linguistlist.org

Homepage: http://linguistlist.org

Editor for this issue: Erin Steitz <ensteitz at linguistlist.org>

================================================================


Date: 24-Apr-2025
From: Sophia Burnett [sophia.burnett at univ-lorraine.fr]
Subject: AI-Linguistica - Special Issue: “The Notion of Authenticity in Human/AI Hybrid Productions.”


Journal: AI-Linguistica
Issue: Special Issue: “The Notion of Authenticity in Human/AI Hybrid
Productions.”
Call Deadline: 31-May-2025

Call for Papers:
Editors: Sophia Burnett (Université de Lorraine) and Sílvia Lima
Gonçalves Araújo (Universidade do Minho)
The notion of authenticity is linked to those of identity and truth.
Today, it is being reconfigured in the context of hybrid productions
between humans and artificial intelligence. While the general public
tends to perceive artificial intelligence as a whole greater than the
sum of its parts, it is important to recall that this
gestalt—referring merely to the generative capacity of large language
models (LLMs)—does not produce language grounded in embodied
experience (Burnett, 2024), but rather draws on billions of tokens
from disparate sources (Zhao et al., 2023), often collected without
authorization (Baack et al., 2025). In other words, these productions
are amalgams of signs, symbols, or images, resulting from statistical
calculations and not from lived experiences or embodied reflections.
The authenticity of hybrid productions is an issue that brings
together cognitivists and generativists. Lakoff (1986) contrasted
computational production and human production, rejecting the
computer-brain model, and according to Chomsky et al. (2023), “we
know from the science of linguistics and the philosophy of knowledge
that they differ profoundly from how humans reason and use language”.
In order to apply any analysis of authenticity to the examination of
hybrid productions, we must first critically interrogate the very
meaning of authenticity. To do so, we draw on an epistemic framework
that predates the emergence of LLMs. For a more comprehensive
introduction to the notion of authenticity, we suggest Lindholm
(2013). Trilling (1974) frames the evolution of authenticity as a
derivation from sincerity. Handler (1986) argues that authenticity is
not an innate property, but a discursive construction mobilized, for
example, by nationalists to assuage anxieties around continuity and
legitimation. Linnekin (1991), drawing on a Maori case study, shows
that so-called “authentic” traditions are in fact dynamic,
interpreted, and politically invested: authenticity becomes a
narrative rather than a reproduction of empirical reality. Lindholm
(2013), addressing authenticity in the digital era—prior to the
emergence of LLMs—in the context of early online banking and other
official forms of digital authentication, writes: “Anxiety about the
validity of experience and about the maintenance of personal identity
is at the core of this computerized definition.” Drawing on linguistic
and semiotic frameworks, van Leeuwen (2001) offers several responses
to the question, “What is authenticity?” He observes that media tend
to reproduce and reinforce the idea that authenticity is concealed
behind masks, only to be revealed in order to produce an effect of
realism within a saturated media landscape. This highlights the
paradox of authenticity in late modernity: it must appear spontaneous,
even though it is often carefully assembled.
This special issue of AI-Linguistica examines how the notion of
authenticity is maintained, transformed or redefined in the practice
of human/AI hybrid productions. The term “hybrid” here refers to the
dynamic interaction between human agents and artificial intelligence
systems, in which the production, interpretation, or mediation of
language results from distributed co-agency between human cognition
and algorithmic computation. We invite contributions that question and
analyze the notion through approaches that mobilize the language
sciences (Beguš et al., 2023; De Cesare, 2023; Dynel, 2023; Meier, 2024;
Weissweiler, 2024), translation studies (Li et al., 2025; Xu et al.,
2025), literary/narratology studies (Beguš, 2024; Chakrabarty et al.,
2024; Koivisto & Grassini, 2023), discursivity/discourse analysis
(Merton, 1968; Liu et al., 2025; Lehner, 2025; Yoo et al., 2025),
NLP/computational linguistics (Liu et al., 2024), didactics (Alrahabi
et al., 2022; Ifelebuegu, 2023; Werdiningsih et al., 2024) and
cognition (Carrasco-Farre, 2024; Grindrod, 2024; Wang et al., 2025).
This special issue will address questions such as the following (the
list is by no means exhaustive): How does
authenticity manifest itself in the linguistic objects produced within
hybrid productions? How do students negotiate the reappropriation of
authorial borrowing in their academic writing? Does a fiction
conceived by a human being—creative unreality—differ in value from
that of a proposition generated by a computational model, and can we
qualify this difference? Where does the notion of authenticity fit
into a hybrid translation process, understood as transfer but also as
re-enunciation? What form(s) does authenticity take in public,
political or institutional discourse when it incorporates hybrid
productions? (How) can hybrid corpus annotation be a site of
epistemological scrutiny? We invite you to submit your contributions
under one of the following axes:
I. Ideologies, intentionality, reproductions and circulations
This axis examines the ideological, political, social and cultural
dimensions of the notion of authenticity in hybrid productions and
discourses, taking into account power dynamics as well as the societal
expectations and effects related to these productions. It also invites
critical perspectives on linguistic diversity, performativity, and
speaker positionality.
II. Cognition, Social Interaction, and the Co-construction of Meaning
This axis explores how authenticity can be approached from a human
cognitive perspective within hybrid, pragmatic collaboration, through
phenomena such as social interaction, intentionality, and the
co-construction of meaning, in both private and public settings.
III. Natural Language Processing and algorithms
This axis addresses how processes of qualification, identification,
and simulation of authenticity are operationalized within
algorithmically generated texts. Areas of interest include annotation
practices for
authenticity-related labels, treatment of non-standard linguistic
features, contrastive studies of artefact retention across training
paradigms, and challenges in maintaining version authenticity in
multilingual alignments. Stylometric and forensic approaches to AI
discourse, as well as corpus analyses of the perceptions of
authenticity across model iterations, are also relevant.
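By way of illustration, the following is a minimal, purely hypothetical
sketch (in Python) of the kind of surface-level stylometric comparison
such approaches might build on or critique; the sample texts, the
function-word list, and the distance measure are arbitrary placeholders
rather than a prescribed method.

    # Toy stylometric comparison: relative frequencies of a few English
    # function words in two short texts, compared by mean absolute
    # difference. All inputs below are invented placeholders.
    import re
    from collections import Counter

    FUNCTION_WORDS = ["the", "of", "and", "to", "a", "in", "that", "is", "it", "as"]

    def function_word_profile(text):
        """Relative frequency of each chosen function word in the text."""
        tokens = re.findall(r"[a-z']+", text.lower())
        counts = Counter(tokens)
        total = max(len(tokens), 1)
        return [counts[w] / total for w in FUNCTION_WORDS]

    def profile_distance(p, q):
        """Mean absolute difference between two frequency profiles."""
        return sum(abs(x - y) for x, y in zip(p, q)) / len(p)

    human_sample = "It is the lived experience that grounds the meaning of a text."
    hybrid_sample = "The meaning of a text is grounded in the statistics of its tokens."
    d = profile_distance(function_word_profile(human_sample),
                         function_word_profile(hybrid_sample))
    print(f"Toy stylometric distance: {d:.4f}")

A substantive study would of course rely on balanced corpora and
established stylometric measures rather than this toy distance.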
Submission
Please send your article proposals in English or French: a maximum of
500 words plus up to 5 keywords, including your name, email address,
and affiliation. Send them in PDF format to both
sophia.burnett at univ-lorraine.fr and saraujo at elach.uminho.pt by May
31st, 11h59 CET. If pertinent to your approach, please include your
data sources, tools, and any expected (provisional) results. Proposals
may be for either of two article types: short articles (3,000 to 6,000
words) or full-length articles (8,000 to 15,000 words).
Please state in your initial proposal whether you intend to submit a
short or a full-length article. Any language of study is welcome, with
a particular focus on Romance and Germanic languages. Contributions may
be empirical or theoretical, provided they engage substantively with
authenticity in language and discourse in the context of AI mediation.
Each proposal will be reviewed by two external reviewers, and the
ensuing papers will undergo double-anonymized peer review. Based on the
reviewers' reports, the editors of this special issue will decide
whether to accept the submission (with minor or major revisions), to
request revision and resubmission for further review, or to reject it.
Full information is available on the journal website:
https://ai-ling.publia.org/ai_ling/about
Timeline
Call for Papers launched: April 24, 2025
Article proposals due (500 words): May 31, 2025
Notification of acceptance of proposals: June 10, 2025 (acceptance of a
proposal does not imply acceptance of the full paper)
Full articles due: August 30, 2025
Peer review period: September 1 – October 20, 2025
Reviewer reports returned to authors (Accepted/Rejected): October 25,
2025
Final revised versions due: November 20, 2025
Publication: Mid-December 2025
AI-Linguistica. Linguistic Studies on AI-Generated Texts and
Discourses is a Diamond Open Access journal. All content is published
under a Creative Commons License (CC-BY-NC-SA 4.0), at no cost to the
authors.
References
Alrahabi, Motasem, Roe, Glenn, Bordry, Marguerite, et al. 2022. Des
étudiants en lettres face aux humanités numériques: une expérience
pédagogique. Humanités numériques, no 5.
https://dx.doi.org/10.4000/revuehn.2775
Baack, Stefan, Biderman, Stella, Odrozek, Kasia, et al. 2025. “Towards
Best Practices for Open Datasets for LLM Training”. arXiv preprint
arXiv:2501.08365. https://doi.org/10.48550/arXiv.2501.08365
Baugh, Bruce. 1988. Authenticity revisited. The Journal of Aesthetics
and Art Criticism, 46(4), 477-487. https://doi.org/10.2307/431285
Beguš, Gašper, Maksymilian Dąbkowski, and Ryan Rhodes. 2023. "Large
linguistic models: Analyzing theoretical linguistic abilities of
LLMs." arXiv preprint arXiv:2305.00948
https://doi.org/10.48550/arXiv.2305.00948
Beguš, Nina. 2024. “Experimental narratives: A comparison of human
crowdsourced storytelling and AI storytelling.” Humanities and Social
Sciences Communications 11.1: 1-22.
https://doi.org/10.1057/s41599-024-03868-8
Burnett, Sophia. 2024. The embodied non-standard 1SG as a potential
marker for reflective function impairment in Anorexia Nervosa
sufferers. https://doi.org/10.31219/osf.io/gz72t
Carrasco-Farre, Carlos. 2024. Large language models are as persuasive
as humans, but how? About the cognitive effort and moral-emotional
language of LLM arguments. arXiv preprint arXiv:2404.09329.
https://doi.org/10.48550/arXiv.2404.09329
Chakrabarty, Tuhin, Laban, Philippe, Agarwal, Divyansh, et al. 2024.
Art or artifice? Large language models and the false promise of
creativity. In: Proceedings of the 2024 CHI Conference on Human
Factors in Computing Systems, pp. 1-34.
https://doi.org/10.48550/arXiv.2309.14556
Chomsky, Noam, Ian Roberts, and Jeffrey Watumull. 2023. Noam Chomsky:
The false promise of ChatGPT. The New York Times, March 8, 2023.
Damasio, Antonio. 1999. The feeling of what happens: Body and emotion
in the making of consciousness. Harcourt College Publishers.
De Cesare, Anna-Maria. 2023. Assessing the quality of ChatGPT’s
generated output in light of human-written texts: A corpus study based
on textual parameters. CHIMERA: Revista de Corpus de Lenguas Romances
y Estudios Lingüísticos 10: 179-210.
https://revistas.uam.es/chimera/article/view/17979
Dennett, Daniel. 1991. Consciousness Explained. Little, Brown and Co.
Boston
Dynel, Marta. 2023. Lessons in linguistics with ChatGPT:
Metapragmatics, metacommunication, metadiscourse and metalanguage in
human-AI interactions. Language & Communication 93: 107-124.
https://doi.org/10.1016/j.langcom.2023.09.002
Giddens, Anthony. 1991. Modernity and Self-Identity. Stanford
University Press, CA.
Grindrod, Jumbly. 2024. Large language models and linguistic
intentionality. Synthese 204, 71 (2024).
https://doi.org/10.1007/s11229-024-04723-8
Handler, Richard. 1986. Authenticity. Anthropology Today, 2(1), 2–4.
https://doi.org/10.2307/3032899
Ifelebuegu, Augustine. 2023. Rethinking online assessment strategies:
Authenticity versus AI chatbot intervention. Journal of Applied
Learning and Teaching, 6(2), 385–392.
https://doi.org/10.37074/jalt.2023.6.2.2
Koivisto, Mika, & Simone Grassini. 2023. Best humans still outperform
artificial intelligence in a creative divergent thinking task.
Scientific Reports, 13, Article 13601.
https://doi.org/10.1038/s41598-023-40858-3
Latour, Bruno. 2010. Reassembling the social: An introduction to
actor-network-theory. Oxford University Press.
Lakoff, George. 2008. Women, fire, and dangerous things: What
categories reveal about the mind. University of Chicago Press, Chicago.
Lehner, Sabine. 2025. The design and discursive construction of a
‘speaking’ vacuum cleaning robot for assistive purposes: Findings on
communication ideologies from a current research and development
project. AI-Linguistica. Linguistic Studies on AI-Generated Texts and
Discourses, 2(1). https://doi.org/10.62408/ai-ling.v2i1.16
Lindholm, Charles. 2013. The rise of expressive authenticity.
Anthropological Quarterly, 86(2), 361–395.
https://www.jstor.org/stable/41857330
Linnekin, Jocelyn. 1991. Cultural invention and the dilemma of
authenticity. American Anthropologist, 93(2), 446–449.
https://doi.org/10.1525/aa.1991.93.2.02a00120
Li, Yafu, Zhang, Ronghao, Wang, Zhilin, et al. 2025. Lost in
Literalism: How Supervised Training Shapes Translationese in LLMs.
arXiv preprint arXiv:2503.04369, https://arxiv.org/abs/2503.04369
Liu, Yiheng, He, Hao, Han, Tianle, et al. 2024. Understanding LLMs: A
comprehensive overview from training to inference. Neurocomputing,
129190. https://doi.org/10.1016/j.neucom.2024.129190
Liu, Yifei, Yuang Panwang, and Chao Gu. 2025. “’Turning right?’ An
experimental study on the political value shift in large language
models.” Humanities and Social Sciences Communications 12.1: 1-10.
https://doi.org/10.1057/s41599-025-04465-z
Meier, Franz. 2024. Dealing with common ground in Human Translation
and Neural Machine Translation: A case study on Italian equivalents of
German modal particles. AI-Linguistica. Linguistic Studies on
AI-Generated Texts and Discourses, 1(1).
Merton, Robert K. 1968. The Matthew effect in science: The reward and
communication systems of science are considered. Science 159.3810:
56-63. https://doi.org/10.1126/science.159.3810.56
Pontille, David. 2020. La signature scientifique: Une sociologie
pragmatique de l’attribution. CNRS Éditions via OpenEdition.
https://doi.org/10.4000/books.editionscnrs.31558
Taylor, Charles. 1992. The ethics of authenticity. Harvard University
Press. Cambridge.
Trilling, Lionel. 1974. Sincerity and Authenticity. Harvard University
Press. Cambridge.
Van Leeuwen, Theo. 2001. What is authenticity? Discourse Studies,
3(4), 392–397. https://doi.org/10.1177/1461445601003004003
Wang, Yifei, Eshghi, Ashkan, Ding, Yi, et al. 2025. Echoes of
authenticity: Reclaiming human sentiment in the large language model
era. PNAS Nexus, 4(2), pgaf034.
https://doi.org/10.1093/pnasnexus/pgaf034
Weissweiler, Leonie, Abdullatif Köksal, and Hinrich Schütze. 2024.
“Hybrid Human-LLM corpus construction and LLM evaluation for rare
linguistic phenomena.” arXiv preprint arXiv:2403.06965.
https://arxiv.org/abs/2403.06965
Werdiningsih, Indah, Marzuki, & Diyenti Rusdin. 2024. Balancing AI and
authenticity: EFL students’ experiences with ChatGPT in academic
writing. Cogent Arts & Humanities 11.1: 2392388.
https://doi.org/10.1080/23311983.2024.2392388
Xu, Yuemei, Hu, Ling, Zhao, Jiayi, et al. 2025. A survey on
multilingual large language models: Corpora, alignment, and bias.
Frontiers of Computer Science, 19(11), 1911362.
https://doi.org/10.1007/s11704-024-40579-4
Yoo, Dahey, Kang, Hyunmin, & Oh, Changhoon. 2025. Deciphering
deception: How different rhetoric of AI language impacts users’ sense
of truth in LLMs. International Journal of Human–Computer Interaction,
41(4), 2163-2183.
https://doi.org/10.1080/10447318.2024.2316370
Zhao, Wayne Xin, Zhou, Kun, Li, Junyi, et al. 2023. A survey of large
language models. arXiv preprint arXiv:2303.18223.
https://doi.org/10.48550/arXiv.2303.18223
All links visited 18.04.2025

Linguistic Field(s): Cognitive Science
                     Computational Linguistics
                     Discourse Analysis
                     General Linguistics
                     Translation

Subject Language(s): English (eng)
                     French (fra)
                     German (deu)
                     Italian (ita)
                     Spanish (spa)




------------------------------------------------------------------------------

********************** LINGUIST List Support ***********************
Please consider donating to the Linguist List to support the student editors:

https://www.paypal.com/donate/?hosted_button_id=87C2AXTVC4PP8

LINGUIST List is supported by the following publishers:

Bloomsbury Publishing http://www.bloomsbury.com/uk/

Cambridge University Press http://www.cambridge.org/linguistics

Cascadilla Press http://www.cascadilla.com/

De Gruyter Mouton https://cloud.newsletter.degruyter.com/mouton

Edinburgh University Press http://www.edinburghuniversitypress.com

Elsevier Ltd http://www.elsevier.com/linguistics

John Benjamins http://www.benjamins.com/

Language Science Press http://langsci-press.org

Lincom GmbH https://lincom-shop.eu/

Multilingual Matters http://www.multilingual-matters.com/

Netherlands Graduate School of Linguistics / Landelijke (LOT) http://www.lotpublications.nl/

Oxford University Press http://www.oup.com/us

Wiley http://www.wiley.com


----------------------------------------------------------
LINGUIST List: Vol-36-1371
----------------------------------------------------------


