[Lingtyp] "AI" and linguistics problem sets
Maxime Fily
maxime.fily at gmail.com
Fri Nov 7 22:20:05 UTC 2025
Dear Mark,
I completely second everything that Randy said about AI: it's a machine
trained to make statistical predictions based on cost functions. By lowering
the cost of the next predicted word, these models output contextually
plausible answers that sit in a sort of local minimum of their representation
space.
Sorry for the tedious intro, but it's important for what I'm about to say:
sentences output by LLMs are, by their very nature, plain and boring. You can
often tell that a text is AI-generated because all the words in a sentence
have a very high probability of co-occurrence, which is rarely so systematic
in human writing.
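As a toy illustration of this statistical signature (not a real detector; the
mini-corpus and sentences below are invented for the example, and real
detectors use an LLM's own token probabilities rather than bigram counts), one
can score a sentence by its average bigram log-probability: uniformly
high-probability word sequences are exactly what gives machine text away.

```python
import math
from collections import Counter

def bigram_logprob(text, corpus):
    """Average log P(w_i | w_{i-1}) of `text` under add-one-smoothed
    bigram counts estimated from `corpus`."""
    def toks(s):
        return s.lower().split()
    c_tokens = toks(corpus)
    bigrams = Counter(zip(c_tokens, c_tokens[1:]))
    unigrams = Counter(c_tokens)
    vocab = len(set(c_tokens)) + 1  # +1 for unseen words
    t = toks(text)
    logps = [
        math.log((bigrams[(a, b)] + 1) / (unigrams[a] + vocab))
        for a, b in zip(t, t[1:])
    ]
    return sum(logps) / len(logps)

corpus = "the cat sat on the mat the cat ate the rat"
# Predictable word order scores higher than a scrambled version:
print(bigram_logprob("the cat sat on the mat", corpus))  # higher
print(bigram_logprob("mat rat the on cat sat", corpus))  # lower
```

A human writer occasionally produces low-probability transitions; a model
trained to minimize next-word cost does so far less often, which is why the
average score separates the two.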
So, on top of telling your students about all the cognitive benefits of not
using AI all the time when learning new material, you can also tell them that
if they use AI too much it will show (it's actually quite easy to detect) and
it will reflect poorly on their work.
I'm not saying that AI should never be used. For example, it's very helpful
for coding when you already know how to code and want to build more complex
programs. Likewise, if you ask an AI for a broad topic overview, such as "give
me an outline of the history of China from the Han Dynasty to the Tang
Dynasty", it will most likely produce a very readable memo that is helpful if
you're looking for general information. It won't be perfect, but it will
definitely save time.
Lastly, a word on the reply by Stela: AIs do not "learn" or "remember" things.
Stela, if you use words like that, even as a figure of speech, you propagate
false ideas about LLMs. I would also be very careful about doing experiments
with ChatGPT: prompting the recent ChatGPT versions and discussing the results
is methodologically unsound. You can use it as a tool, but never as a research
object. First, even if you write down the version you're using, OpenAI now
updates the weights on the fly, so you will never be able to reproduce your
results. Second, and perhaps more importantly, evaluating LLMs requires asking
how prompting can actually tell us anything about the model itself, for
example how it handles long-distance dependencies. So you need careful
prompts, frozen models (yes, plural), and an understanding of the inner
workings of the models in order to ask the right questions. Otherwise you are
just toying with a model, which is fine, but it is not science. We are at the
beginning of the AI era, so it is understandable to get carried away by the
promise of AI, but let's not get carried away too much. It's just ones and
zeros.
Best,
Maxime
On Fri, Nov 7, 2025 at 09:48, Stela MANOVA via Lingtyp <
lingtyp at listserv.linguistlist.org> wrote:
> Apologies for the oversight. The previous version contained mark-ups and
> may have looked incomprehensible in plain-text format.
>
> Here is a more reader-friendly version:
>
> Dear colleagues,
>
> The answer is more complex than a linguist may suppose, since many things
> matter:
> In which script the examples are given.
> Different scripts have different representations: in UTF-8, e.g., a Latin
> letter is one byte, a Cyrillic or Arabic letter is two bytes, a CJK
> character is three bytes, etc. Consequently, languages are tokenized
> differently, which affects the correctness of the “linguistic” analysis.
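The byte-width point above is easy to check directly in Python (note that in
UTF-8 an Arabic letter is in fact two bytes, while CJK characters take three;
the characters below are just one sample letter per script):

```python
# Byte widths of single letters from different scripts under UTF-8.
# Samples: Latin "a", Cyrillic "б", Arabic "ع", CJK "中".
for ch in ["a", "б", "ع", "中"]:
    print(ch, len(ch.encode("utf-8")))
```

Byte-level tokenizers therefore see the "same-length" word very differently
depending on the script it is written in.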
>
> The way one approaches the task.
> If students have been given examples in class or additional materials with
> data, they can give these examples to ChatGPT to introduce it to the task,
> and the result will be different, too. One can even ask ChatGPT to write a
> short Python program to improve the performance — and it will.
>
> The exact formulation of the prompt.
> Roughly, if you ask directly, you will get one result; if you start from
> afar, you will get a different one.
>
> Shared representational space.
> ChatGPT uses a single high-dimensional space to represent all languages
> and can “analogize,” as mentioned in the email by Liberty.
>
> The available literature on the topic in the training data.
> This has already been discussed.
>
> The linguistic fine-tuning of the model.
> How diligent the human linguist working with the model was (this is why it
> seems that Gemini is more linguistically competent than ChatGPT, but Gemini
> has a memory issue: it quickly forgets things learned earlier, and it is
> therefore often not so good for testing what I explain in 2).
>
> Etc.
>
> I will discuss all these issues in my Linguistics Meets ChatGPT workshop
> series. A simple rule of thumb: ChatGPT does not have linguistic units, so
> asking it to count phonemes, morphemes, or words quickly starts to give
> wrong results — even for English. This can be used as a straightforward
> testing strategy.
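One way to operationalize this testing strategy (a sketch; the sentence and
the word-matching regex below are my own, and phoneme or morpheme counts would
need a linguist-made answer key rather than a regex) is to compute ground-truth
counts programmatically and compare them with the model's answer:

```python
import re

def word_count(text):
    """Ground-truth word count: runs of letters, apostrophes allowed."""
    return len(re.findall(r"[A-Za-z']+", text))

sentence = "The farmer's ducks swam across the pond."
print(word_count(sentence))  # 7 -- the number to check an LLM's answer against
```

Any systematic divergence between the model's count and the programmatic count
is then easy to document.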
>
> Best,
>
> Stela
>
> On 07.11.2025, at 07:24, Liberty Lidz via Lingtyp <
> lingtyp at listserv.linguistlist.org> wrote:
>
> Hi all,
>
> Thank you for this very helpful discussion; it is heartening to see the
> hard work that different scholars are putting into improving pedagogical
> methods given the wrench that LLMs have thrown into things. A few small
> thoughts: although Nepali is certainly a less commonly taught language,
> many of the large tech companies handle Indic languages by bootstrapping
> from Hindi, for which there is much, much more data, to other members of
> the family, including varieties like Nepali and Assamese, for which there
> is far less data. The same is to some degree true for other language
> bit dependent upon the perceived size and socioeconomic value of the
> speaker populations and the complexity of adding the language varieties to
> the models for the given company. Another thing to consider is that
> companies developing LLMs have essentially scraped the entire internet of
> scrapable data (incredible as this may seem), so if there are books,
> dissertations, or journal articles on a language that are available on the
> open internet, they have almost certainly been scraped to train the LLMs.
> Linguists have worked really hard in the last almost two decades to make
> publications open access or otherwise freely available so that members of
> native language communities and other scholars can have access to them, so
> there is a huge amount of language data out there.
>
> Best,
>
> Liberty
>
> On Thursday, November 6, 2025 at 09:12:01 PM PST, Spike Gildea via Lingtyp
> <lingtyp at listserv.linguistlist.org> wrote:
>
>
> Hi all,
>
> This last summer, a team here at the University of Oregon tested a number
> of assignments from across the Humanities for susceptibility to AI. I
> offered them the take-home midterm from my advanced syntax class, a complex
> problem set using examples I had created from my personal knowledge of the
> Nepali language. The data featured SOV order, postpositions, case-marking
> suffixes, optionality of core arguments, tense-based split ergativity,
> dative subjects, and differential object marking. I was confident that
> AI would have little chance of finding and describing all these patterns.
>
> On 5/28/2025, they gave the assignment to Gemini 2.5 Pro Preview, and
> it not only identified most of the relevant patterns and successfully
> answered my descriptive questions, it also generated a strong essay about
> the relevance of morphological vs. syntactic subject properties in the
> data. This essay correctly synthesized the relevant patterns in the
> assigned data, but it would have been suspicious to me because it also drew
> on theoretical perspectives (presumably from scraping the internet) that I
> deliberately did not include in the class. So even though it was quite
> high-level work, I would certainly have called in the student to ask where
> they had picked up these out-of-class ideas, after which I suspect the
> truth would have come out, since the student would have been unlikely to
> have the capacity to discuss the theoretical literature and why they had
> chosen to use these concepts instead of the ones I taught in class.
>
> When I expressed my surprise at the success of AI at solving this problem,
> the testers told me that over the last two years, AI has taken massive
> leaps forward in sophistication. They added: "These models are typically
> good at following instructions, and they are trained in a large variety of
> languages and linguistics-related texts. As these models have considerable
> data in their training sets (or access to it on the internet) and [are]
> entirely language-centric, they’re likely to do a reasonable if not very
> competent job." and "Language is the specialty of LLMs. There is a lot of
> text out there likely scraped online and fed to these models. They can
> speak tens of languages, and they know a lot about linguistics. As
> non-experts, we cannot be certain that these answers are correct, although
> they seem so at first glance. We don’t doubt the model’s capabilities in
> this field."
>
> Cheers!
> Spike
>
> *From: *Lingtyp <lingtyp-bounces at listserv.linguistlist.org> on behalf of
> Alexander Coupe via Lingtyp <lingtyp at listserv.linguistlist.org>
> *Date: *Thursday, November 6, 2025 at 7:27 PM
> *To: *Juergen Bohnemeyer <jb77 at buffalo.edu>, Mark Post <
> mark.post at sydney.edu.au>, typology list <lingtyp at listserv.linguistlist.org
> >
> *Subject: *Re: [Lingtyp] "AI" and linguistics problem sets
>
>
> Dear Mark and Juergen,
>
>
> A while ago when I was teaching an undergraduate morphology & syntax
> course I had the same concerns about students relying on AI to solve
> problem sets, so I tested ChatGPT (probably v. 3.5) on some fairly obscure
> data prior to setting assignments. The first task was a grammatical sketch
> based on ~two dozen sentences in Nagamese with English translations. While
> it did quite well with identifying word classes, tense marking, and other
> details of morphology, it struggled to make sense of the postpositional
> case markers (I had included example sentences of differential marking of P
> arguments in the data set). Nevertheless, it would have gotten through with
> a pass. I then tested it on some Dyirbal data with sentences demonstrating
> the split alignment system in the case marking/pronominals. This time it
> did extremely poorly and would have earned an F for its attempt. Naturally
> I shared the findings with my students 😊
>
>
> This suggests that if there is language data available that an LLM can
> access for training, then it is risky to use a data set from that or a
> typologically similar language for assessment. At the stage of ChatGPT 3.5
> it seemed that it hadn’t had much exposure to head-final languages, and
> that may explain its inability to identify postpositional case markers. But
> this may change in the future, and its performance might have already
> improved vastly.
>
>
> Alec
> --
> Assoc. Prof. Alexander R. Coupe, Ph.D. | Associate Chair (Research) | School
> of Humanities | Nanyang Technological University
> 48 Nanyang Avenue, SHHK-03-84D, Singapore 639818
> Tel: +65 6904 2072 GMT+8h | Email: arcoupe at ntu.edu.sg
> Academia.edu: https://nanyang.academia.edu/AlexanderCoupe
> ORCID ID: https://orcid.org/0000-0003-1979-2370
> Webpage: https://blogs.ntu.edu.sg/arcoupe/
>
>
>
>
>
>
> *From: *Lingtyp <lingtyp-bounces at listserv.linguistlist.org> on behalf of "
> lingtyp at listserv.linguistlist.org" <lingtyp at listserv.linguistlist.org>
> *Reply to: *Juergen Bohnemeyer <jb77 at buffalo.edu>
> *Date: *Friday, 7 November 2025 at 1:09 AM
> *To: *Mark Post <mark.post at sydney.edu.au>, "
> lingtyp at listserv.linguistlist.org" <lingtyp at listserv.linguistlist.org>
> *Subject: *Re: [Lingtyp] "AI" and linguistics problem sets
>
>
>
> Dear Mark — I’m actually surprised to hear that an AI bot is able to
> adequately solve your problem sets. My assumption, based on my own very
> limited experience with ChatGPT, has been that LLMs would perform so poorly
> at linguistic analysis that the results would dissuade students from trying
> again in the future. Would it be possible at all to share more details with
> us?
>
>
> (One recommendation I have, which I however haven’t actually tried out, is
> to put a watermark of sorts in your assignments, in the form of a factual
> detail about some lesser-studied language. Even though such engines are of
> course quite capable of information retrieval, their very nature seems to
> predispose them toward predicting the answer rather than looking it up,
> with the results likely being straightforwardly false.)
>
>
> Best — Juergen
>
>
>
>
> Juergen Bohnemeyer (He/Him)
> Professor, Department of Linguistics
> University at Buffalo
>
> Office: 642 Baldy Hall, UB North Campus
> Mailing address: 609 Baldy Hall, Buffalo, NY 14260
> Phone: (716) 645 0127
> Fax: (716) 645 3825
> Email: *jb77 at buffalo.edu <jb77 at buffalo.edu>*
> Web: http://www.acsu.buffalo.edu/~jb77/
>
>
> Office hours Tu/Th 3:30-4:30pm in 642 Baldy or via Zoom (Meeting ID 585
> 520 2411; Passcode Hoorheh)
>
> There’s A Crack In Everything - That’s How The Light Gets In
> (Leonard Cohen)
> --
>
>
>
> *From: *Lingtyp <lingtyp-bounces at listserv.linguistlist.org> on behalf of
> Mark Post via Lingtyp <lingtyp at listserv.linguistlist.org>
> *Date: *Tuesday, November 4, 2025 at 18:27
> *To: *typology list <lingtyp at listserv.linguistlist.org>
> *Subject: *[Lingtyp] "AI" and linguistics problem sets
> Dear Listmembers,
>
>
> I trust that most lingtyp subscribers will have engaged with “problem
> sets” of the type found in Language Files, Describing Morphosyntax, and my
> personal favourite oldie-but-goodie the Source Book for Linguistics. Since
> the advent of ChatGPT, I’ve been migrating away from these (and even
> edited/obscured versions of them) for assessments, and relying more and
> more on private/unpublished data sets, mostly from languages with lots of
> complex morphology and less familiar category types, that LLMs seemed to
> have a much harder time with. This was not an ideal situation for many
> reasons, not least of which being that these were not the only types of
> languages students should get practice working with. But the problem really
> came to a head this year, when I found that perhaps most off-the-shelf LLMs
> were now able to solve almost all of my go-to problem sets to an at least
> reasonable degree, even after I obscured much of the data.
>
>
> Leaving aside issues around how LLMs work, what role(s) they can or should
> (not) play in linguistic research, etc., I’d like to ask if any listmembers
> would be willing to share their experiences, advice, etc., specifically in
> the area of student assessment in the teaching of linguistic data analysis,
> and in particular morphosyntax, in the unfolding AI-saturated environment.
> Is the “problem set” method of teaching distributional analysis
> irretrievably lost? Can it still be employed, and if so how? Are there
> different/better ways of teaching more or less the same skills?
>
>
> Note that I would really like to avoid doomsdayisms if possible here (“the
> skills traditionally taught to linguists have already been made obsolete by
> AIs, such that there’s no point in teaching them anymore” - an argument
> with which I am all-too-familiar), and focus, if possible, on *how* it is
> possible to assess/evaluate students’ performance *under the assumption* that
> there is at least some value in teaching at least some human beings how to
> do a distributional analysis “by hand” - such that they are actually able,
> for example, to evaluate a machine’s performance in analysing a
> new/unfamiliar data set, and under the further assumption that
> assessment/evaluation of student performance in at least many institutions
> will continue to follow existing models.
>
>
> Many thanks in advance!
> Mark
>
>
> ------------------------------
>
> CONFIDENTIALITY: This email is intended solely for the person(s) named and
> may be confidential and/or privileged. If you are not the intended
> recipient, please delete it, notify us and do not copy, use, or disclose
> its contents.
> Towards a sustainable earth: Print only when necessary. Thank you.
> _______________________________________________
> Lingtyp mailing list
> Lingtyp at listserv.linguistlist.org
> https://listserv.linguistlist.org/cgi-bin/mailman/listinfo/lingtyp
>
>
>
>
>
>