<html aria-label="message body"><head><meta http-equiv="content-type" content="text/html; charset=utf-8"></head><body style="overflow-wrap: break-word; -webkit-nbsp-mode: space; line-break: after-white-space;"><div><div>Dear colleagues,</div><div><font color="#5856d6"><span style="caret-color: rgb(88, 86, 214);"><br></span></font></div><div>A quick note to share some updates — and to flag that our <b>March 16 session is cancelled</b>. We'll be back in April with what promises to be a rich spring program.</div><div><font color="#5856d6"><span style="caret-color: rgb(88, 86, 214);"><br></span></font></div><div>The schedule for the coming months brings together a set of conversations that feel very much in the spirit of the network: on <b>April 20, Justine Zhang</b> will present "LLMs and the logistics imaginary"; on <b>May 18, Lisa Messeri</b> joins us with "AI Surrogates and illusions of generalizability in cognitive science"; and on <b>June 15, Zion Mengesha and Sharese King</b> will present some of their most recent collaborative work. We're very much looking forward to all three. Here’s the <a href="https://docs.google.com/document/d/1N-EKG2KeDi5ZbtZCdk97Vthgbg-srhtrOJnQC-a-gJU/edit?usp=sharing" target="_blank">link to the schedule</a>.</div><div><font color="#5856d6"><span style="caret-color: rgb(88, 86, 214);"><br></span></font></div><div>The spring sessions arrive alongside another milestone: the <b>special issue on Language Machines</b><b> in the <i>Journal of Linguistic Anthropology</i></b><b> is now out</b> and available here: <a href="https://anthrosource.onlinelibrary.wiley.com/doi/toc/10.1111/(ISSN)1548-1395.language-machines" target="_blank">https://anthrosource.onlinelibrary.wiley.com/doi/toc/10.1111/(ISSN)1548-1395.language-machines</a>. On <b>April 17</b> — just days before our first spring session — we'll be holding an <b>online launch event with the Society for Linguistic Anthropology</b>, so do stay tuned for details. 
And as a reminder, the special issue continues to accept submissions on a rolling basis; different formats are welcome, so please be in touch if you'd like to contribute.</div><div><font color="#5856d6"><span style="caret-color: rgb(88, 86, 214);"><br></span></font></div><div>Finally, we're pleased to share that our <b>open panel "Computational Code-Switching: Language, Labor, and Knowledge in the Age of LLMs" has been accepted for the 4S Annual Meeting in Toronto</b>. The panel takes as its starting point something easy to overlook: that LLMs are trained on corpora that make no distinction between natural and programming languages, with wide-ranging implications for how we think about labor, expertise, and the representation of knowledge. If any of this resonates with your own work, please do <a href="https://www.xcdsystem.com/4sonline/member/member_home.cfm" target="_blank">consider submitting an abstract</a>. The full panel description is below.</div><div><font color="#5856d6"><span style="caret-color: rgb(88, 86, 214);"><br></span></font></div><div>Looking forward to seeing you in April!</div><div><font color="#5856d6"><span style="caret-color: rgb(88, 86, 214);"><br></span></font></div><div>Best,</div><div><font color="#5856d6"><span style="caret-color: rgb(88, 86, 214);"><br></span></font></div><div>Anna Weichselbraun and Siri Lamoureaux</div><div><font color="#5856d6"><span style="caret-color: rgb(88, 86, 214);"><br></span></font></div><div>---</div><div><font color="#5856d6"><span style="caret-color: rgb(88, 86, 214);"><br></span></font></div><div><b>Panel 101: Computational Code-Switching: Language, Labor, and Knowledge in the Age of LLMs</b></div><div><font color="#5856d6"><span style="caret-color: rgb(88, 86, 214);"><br></span></font></div><div>Among fears about AI automating labor, the potential obsolescence of software engineers themselves presents an unanticipated irony. Today's coding assistants awe even the most hardened "code-as-craft" engineers. 
This capability stems from a fact under-appreciated in the social sciences and humanities: that large language models (LLMs) are trained on massive corpora that typically do not distinguish between natural and programming languages. They can thus generate, translate, and blend both within the same interaction, raising questions about computational code-switching, register shifts across representational modes, and the transformation of human capacities to move across natural and programming languages.</div><div><font color="#5856d6"><span style="caret-color: rgb(88, 86, 214);"><br></span></font></div><div>The success of language models in automating programming invites investigations into at least three domains critical to STS: (1) labor and automation (Light 1999, Suchman 2007, Forsythe 2001, Higgins 2007, Beltran 2023, Bialski 2024, Evans and Johns 2023), (2) language and code (Bowker 1993, Mackenzie 2006, Benjamin 2019, Gambetta 2009, Geoghegan 2023, Cohn 2019, Kockelman 2017, 2024), and (3) the representation of knowledge (Bowker & Star 1999, Lampland & Star 2009).</div><div><font color="#5856d6"><span style="caret-color: rgb(88, 86, 214);"><br></span></font></div><div dir="ltr"><div dir="auto" style="line-break:after-white-space"><div>We invite papers that investigate these domains in relation to LLMs or other communication technologies. (1) What types of engineering work and gendered/racial divisions of linguistic labor do LLMs necessitate in their development or application? How are expertise and professionalism changing in response? (2) How might contemporary AI's confluence of language and code require rethinking 20th-century theories of information and language (e.g. cybernetics, information theory, structuralism, generative linguistics, and computational theory)? What do LLMs' multilingual and multimodal capabilities mean for these relationships?
(3) How might LLMs' gradient modeling of language and knowledge in high-dimensional vector space — rather than fixed taxonomies — challenge existing theorizations of information systems as classificatory infrastructure? What does "codification" mean under conditions of vectorization?</div></div></div>
</div><br></body></html>