[Linganth] Language Machines Reading Group: Spring Schedule + Special Issue News
Language Machines
languagemachinesnetwork at gmail.com
Wed Mar 4 17:52:42 UTC 2026
Dear colleagues,
A quick note to share some updates — and to flag that our March 16 session is cancelled. We'll be back in April with what promises to be a rich spring program.
The schedule for the coming months brings together a set of conversations that feel very much in the spirit of the network: on April 20, Justine Zhang will present "LLMs and the logistics imaginary"; on May 18, Lisa Messeri joins us with "AI Surrogates and illusions of generalizability in cognitive science"; and on June 15, Zion Mengesha and Sharese King will present some of their most recent collaborative work. We're very much looking forward to all three. Here’s the link to the schedule <https://docs.google.com/document/d/1N-EKG2KeDi5ZbtZCdk97Vthgbg-srhtrOJnQC-a-gJU/edit?usp=sharing>.
The spring sessions arrive alongside another milestone: the special issue on Language Machines in the Journal of Linguistic Anthropology is now out and available here: https://anthrosource.onlinelibrary.wiley.com/doi/toc/10.1111/(ISSN)1548-1395.language-machines. On April 17 — just days before our first spring session — we'll be holding an online launch event with the Society for Linguistic Anthropology, so do stay tuned for details. And as a reminder, the special issue continues to accept submissions on a rolling basis; different formats are welcome, so please be in touch if you'd like to contribute.
Finally, we're pleased to share that our open panel "Computational Code-Switching: Language, Labor, and Knowledge in the Age of LLMs" has been accepted for the 4S Annual Meeting in Toronto. The panel takes as its starting point something easy to overlook: that LLMs are trained on corpora that make no distinction between natural and programming languages, with wide-ranging implications for how we think about labor, expertise, and the representation of knowledge. If any of this resonates with your own work, please do consider submitting an abstract <https://www.xcdsystem.com/4sonline/member/member_home.cfm>. The full panel description is below.
Looking forward to seeing you in April!
Best,
Anna Weichselbraun and Siri Lamoureaux
---
Panel 101: Computational Code-Switching: Language, Labor, and Knowledge in the Age of LLMs
Among fears about AI automating labor, the potential obsolescence of software engineers themselves presents an unanticipated irony. Today's coding assistants awe even the most hardened "code-as-craft" engineers. This capability stems from an under-appreciated fact in the social sciences and humanities: namely, that large language models (LLMs) are trained on massive corpora that typically do not distinguish between natural and programming languages. They can thus generate, translate, and blend both within the same interaction, raising questions about computational code-switching, register shifts across representational modes, and the transformation of human capacities to move across natural and programming languages.
The success of language models in automating programming invites investigations into at least three domains critical to STS: (1) labor and automation (Light 1999, Suchman 2007, Forsythe 2001, Higgins 2007, Beltran 2023, Bialski 2024, Evans and Johns 2023), (2) language and code (Bowker 1993, Mackenzie 2006, Benjamin 2019, Gambetta 2009, Geoghegan 2023, Cohn 2019, Kockelman 2017, 2024) and (3) the representation of knowledge (Bowker & Star 1999, Lampland & Star 2009).
We invite papers that investigate these domains in relation to LLMs or other communication technologies. (1) What types of engineering work and gendered/racial divisions of linguistic labor do LLMs necessitate in their development or application? How are expertise and professionalism changing in response? (2) How might contemporary AI's confluence of language and code require rethinking 20th-century theories of information and language (e.g., cybernetics, information theory, structuralism, generative linguistics, and computational theory)? What do LLMs' multilingual and multimodal capabilities mean for these relationships? (3) How might LLMs' gradient modeling of language and knowledge in high-dimensional vector space, rather than in fixed taxonomies, challenge existing theorizations of information systems as classificatory infrastructure? What does "codification" mean under conditions of vectorization?