[Lingtyp] "AI" and linguistics problem sets
Ka Yau Lai
KaYau.Lai at uga.edu
Mon Nov 10 04:49:22 UTC 2025
Dear all,
I read this discussion with great interest, and thought I would mention my perspective as well. I would like to echo many of the points raised here, such as Dr LaPolla's suggestion that abductive reasoning take a bigger role in assignments, as well as Dr Spronck's suggestion of issuing assignments where students critique how LLMs are doing.
Nevertheless, I am hesitant to take the route of in-class writing, as I still believe that such assessments can test students' test-taking ability as much as their familiarity with the skills and knowledge being tested, and encourage cramming before assessments over more sustained effort. Moreover, sadly, the latest AI glasses (which have become increasingly easy to use undetected) have also begun to show up in examination contexts, so proctoring against cheating in a larger classroom may become more and more difficult in the years to come. As linguistic analysis is not typically completed under the kind of time pressure imposed in an examination context, and is often carried out not by linguists working in isolation but in community with others, I am still in favour of take-home assignments as the main assessment method, allowing students to take their time, think their analyses through, and consult with their peers appropriately.
I believe that — at least for now — a good path forward could involve reworking assignments to focus on naturally occurring, spontaneous spoken/signed discourse data (including detailed transcription of linguistic details like 'disfluencies' and repair), as well as incorporating linguistic annotation software. LLMs' ability to handle such natural, spontaneous data still lags behind their ability to handle standard written/elicited data, and while it may be possible in certain scenarios to produce linguistic annotations with LLMs that can then be loaded into annotation software, I suspect students who use LLMs to cheat will not typically have the technical skills to do so. The use of naturalistic discourse data has further benefits, such as allowing students to see how grammatical phenomena play out in real-life settings, and helping students make abductive inferences about how a language user came to choose a particular form or construction at some point in the discourse. The use of linguistic annotation allows students to 'externalise' their analysis in a clear and unambiguous format (even if their writing may not otherwise be clear), and trains them in important skills that can make them attractive in the tech sector.
In our discourse and conversation analysis classes at UC Santa Barbara a few years back (with Jack DuBois and later Cedar Brown), we took precisely such an approach: students were required to work on conversational transcripts using the Rezonator software. I did not see clear evidence of LLM use, though admittedly this was during the days when ChatGPT had just become popular with the general public and still lagged heavily behind in linguistic analysis. (I find that for certain types of abductive reasoning, LLMs have actually become quite good, though this is heavily dependent on the topic: when analysing repair, for example, they get things right often enough to produce something that might be B- to C-level work, but they remain shockingly incapable of identifying adjacency pairs or sequence expansions in general.) I plan to teach in a similar way in my course next spring, with clearer guidance on disallowed AI use, and will see how well this holds up.
Sincerely,
Ryan
---
Ryan Ka Yau Lai
Assistant Professor, University of Georgia