[Linganth] Correct abstract: LLMs and Ling Anth Reading Group - April 17 - Maria Erofeeva and Nils Klowait
Language Machines
languagemachinesnetwork at gmail.com
Wed Mar 5 10:30:26 UTC 2025
Here is the correct abstract for Nils and Maria's paper, with my apologies
for the error:
*Title: Nonhuman Situational Enmeshment – How Participants Build Temporal
Infrastructures for ChatGPT*
*Abstract: Contemporary interactions with large language models articulate
an often implicit sense of what language-based interaction is and does. By
chunking interactional contributions into discrete diachronic ‘messages’,
practitioners might lose track of key emergent and processual
characteristics of social interaction, such as turn-taking dynamics, local
sensemaking, and the co-constructive nature of talk. Through the form in
which conversational contributions are piped through the system, the
default interface of ChatGPT erases much of what we have come to understand
about multimodal interaction, while also bracketing as irrelevant any
phenomena not explicitly circumscribed by the interface
setup. Building upon a corpus of real-life user interactions with a
ChatGPT-like LLM interface, we investigate how these limited definitions of
language are resisted, transmuted, and expanded by the participants
themselves – both within the purely dialogical domain of the LLM
interaction and through the embodied sensemaking that contextualizes it
from outside the LLM’s view. Drawing on Charles
Goodwin’s concept of co-operative action, we examine how participants
recruit, integrate, and structure the temporal participation of LLMs across
different modes of engagement. We argue that an LLM’s situational
participation is not an essential property but an emergent feature of
social coordination. Our analysis shows how participants construct distinct
temporal architectures for the LLM to inhabit. The findings suggest that as
AI systems become more multimodal, users will increasingly face new
challenges in organizing co-temporality with adaptive, malleable AI agents.*
Best,
Anna
On Wed, Mar 5, 2025 at 10:15 AM Language Machines <
languagemachinesnetwork at gmail.com> wrote:
> Dear Language Machines Network,
>
>
> We'd like to invite you to our next reading group meeting. We'll discuss
> Maria Erofeeva (Free University of Brussels) and Nils Klowait’s (Paderborn)
> draft article "Nonhuman Situational Enmeshment – How Participants Build
> Temporal Infrastructures for ChatGPT" on *Monday, March 17th, 2025 from
> 18:00-20:00 CET*. See below for the schedule of future meetings.
>
>
> *Abstract: The ability of fine-tuned large language models (LLMs) to
> generate fluent text challenges our understandings of language-in-use. How
> can LLMs produce meaningful text without living in the world as humans do?
> This paper argues that (1) textually mediated human-machine interaction
> requires no shared context beyond what is already indexed by the
> collaboratively produced (co-)text, and (2) it is sufficient for the human
> to appreciate and advance the resolution of misunderstandings. To
> substantiate these claims, the paper contrasts a pragmatic understanding of
> meaning with criticism from linguistic anthropology and empirical findings
> from conversation analysis. Participants who interact through chat are not
> physically co-present and often share only a limited personal history.
> References to physical presence or to unshared experiences thus need to be
> made explicit in the (co-)text. Conversation analysis further shows that
> many of the potential misinterpretations imagined by semantic analysis in
> scenarios of co-presence are not relevant in practice, because of the
> co-text in which utterances are embedded. Participants project understandings through
> statements and responses, and the absence of evidence of misunderstanding
> is sufficient for the interaction to progress. When there is evidence of a
> misunderstanding among humans, i.e., a failure to interpret intentions, this
> is dealt with procedurally and step-by-step through the repair
> organization. Accordingly, failures by LLMs to fully participate in repair
> can be compensated by the human interactant procedurally. In short,
> repairing a misguided interaction is an occasional and procedural
> collaborative activity where interpretation can remain limited to the human
> side of a human-machine interaction.*
>
>
> RSVP here: languagemachinesnetwork at gmail.com (e.g. reply to this email)
> to receive the Zoom link and the PDF of the article when it is available
> (not to be circulated, please).
>
>
> We look forward to seeing you there!
>
>
> Anna, Siri, Michael
>
>
> April 21 - Zachary Sheldon (Pittsburgh)
>
> May 19 - Janet Connor (Leiden)
>
> June 16 - Raffaele Buono (UCL)
>