[Lingtyp] "AI" and linguistics problem sets
JOO Ian
joo at res.otaru-uc.ac.jp
Wed Nov 5 06:57:04 UTC 2025
Dear Mark,
I simply don’t give any take-home assignments (in fact, I don’t think I ever have). All evaluations in my courses are either closed-book exams or chalk-talks (presentations without slides). Not only is this an easy solution to the LLM issue, but it also lessens the burden on university students, who usually already have a lot going on in their lives.
My rule of thumb is that using an LLM for any purpose is fine as long as you know what you’re doing and can take full responsibility for the end result. So in general I don’t see the point of employing LLMs in courses whose goal is precisely to help students understand what they’re doing.
From Otaru, Japan,
Ian
- - - - - - - - - - - - - - - - - - - - - - -
朱 易安
JOO, IAN
准教授
Associate Professor
小樽商科大学
Otaru University of Commerce
🌐 ianjoo.github.io
- - - - - - - - - - - - - - - - - - - - - - -
From: Lingtyp <lingtyp-bounces at listserv.linguistlist.org> on behalf of Mark Post via Lingtyp <lingtyp at listserv.linguistlist.org>
Date: Wednesday, November 5, 2025, 08:29
To: typology list <lingtyp at listserv.linguistlist.org>
Subject: [Lingtyp] "AI" and linguistics problem sets
Dear Listmembers,
I trust that most lingtyp subscribers will have engaged with “problem sets” of the type found in Language Files, Describing Morphosyntax, and my personal favourite oldie-but-goodie, the Source Book for Linguistics. Since the advent of ChatGPT, I’ve been migrating away from these (and even edited/obscured versions of them) for assessments, and relying more and more on private/unpublished data sets, mostly from languages with lots of complex morphology and less familiar category types, which LLMs seemed to have a much harder time with. This was not an ideal situation for many reasons, not least of which was that these were not the only types of languages students should get practice working with. But the problem really came to a head this year, when I found that perhaps most off-the-shelf LLMs were now able to solve almost all of my go-to problem sets at least reasonably well, even after I obscured much of the data.
Leaving aside issues around how LLMs work, what role(s) they can or should (not) play in linguistic research, etc., I’d like to ask if any listmembers would be willing to share their experiences, advice, etc., specifically in the area of student assessment in the teaching of linguistic data analysis, and in particular morphosyntax, in the unfolding AI-saturated environment. Is the “problem set” method of teaching distributional analysis irretrievably lost? Can it still be employed, and if so how? Are there different/better ways of teaching more or less the same skills?
Note that I would really like to avoid doomsdayisms here (“the skills traditionally taught to linguists have already been made obsolete by AIs, such that there’s no point in teaching them anymore” - an argument with which I am all-too-familiar). I would rather focus, if possible, on how to assess/evaluate students’ performance under two assumptions: first, that there is at least some value in teaching at least some human beings how to do a distributional analysis “by hand” - such that they are actually able, for example, to evaluate a machine’s performance in analysing a new/unfamiliar data set - and second, that assessment/evaluation of student performance in at least many institutions will continue to follow existing models.
Many thanks in advance!
Mark