[Lingtyp] "AI" and linguistics problem sets (Mark Post)

JOO Ian joo at res.otaru-uc.ac.jp
Tue Nov 11 23:45:47 UTC 2025


Dear Hannah,

It’s becoming common for students to record whole lectures on their phones so that an AI can learn what was done in class and complete the assignments accordingly. So there’s already a way around that…

I would just like to point out that rampant cheating on take-home assignments is nothing new. Besides simply copy-pasting whatever is on the internet, students could always hire someone to do the work for them, which used to be a bigger business than most people think. (Not to mention that the existence of such services favored students who could afford them.) The only difference now is that cheating has become much easier and more visible, so it feels like something vastly new, but it is less about the advent of new technologies and more about the inherent problem of assigning take-home work and hoping everyone does it genuinely on their own. Many people mention the pedagogical ineffectiveness of closed-book exams, but I don’t see how take-home assignments are a better alternative unless we live in an ideal world of honesty.

On a more philosophical level, I personally think we need to place less importance on student evaluation itself, which in my opinion is just an indicator of student participation. The way I grade exams and presentations is that if students showed up in class and paid attention, they will get a good grade. I don’t consider my courses a zero-sum survival of the fittest where the best performer wins the gold medal and the losers fail, nor do I think universities are a place for that. So my small suggestion is that if we saw evaluations less as a competition and more as an attendance certificate, we would be less burdened with having to judge who did better than whom and could focus on transmitting and sharing knowledge, which is the point of education in my opinion. But of course, I understand that everyone’s pedagogical philosophy differs.

From Otaru,
Ian

- - - - - - - - - - - - - - - - - - - - - - -
朱 易安
JOO, IAN
准教授
Associate Professor
小樽商科大学
Otaru University of Commerce

🌐 http://ianjoo.github.io/
- - - - - - - - - - - - - - - - - - - - - - -

From: Lingtyp <lingtyp-bounces at listserv.linguistlist.org> on behalf of Hannah Sarvasy via Lingtyp <lingtyp at listserv.linguistlist.org>
Date: Wednesday, November 12, 2025, 05:33
To: lingtyp at listserv.linguistlist.org <lingtyp at listserv.linguistlist.org>
Subject: Re: [Lingtyp] "AI" and linguistics problem sets (Mark Post)
Dear Mark and all,

One thing I have tried, but which still needs honing, is to create hybrid problem sets/assignments, with one component that draws vaguely on ‘what we did in class’ and then new written data (‘Analyze the attached dataset in the same way we did in class’; ‘Use the conceptual methods from class to explain…’). This way, at least, the students have to attend, remember, and formulate what was done in class themselves before they can feed the whole thing to their AI ‘helpers’.

For this to be useful, you can’t have a set of slides with all key terms, definitions, answers, methods, etc. spelled out; I often leave blanks in my slides that students need to fill in themselves (a sort of notes template).

If they attended class but neither understood nor did anything, then (ideally) they won’t be able to do the assignment well when they go home and feed it to an AI.

Best,
Hannah

Senior Researcher
The MARCS Institute for Brain, Behaviour and Development
Western Sydney University

https://www.westernsydney.edu.au/marcs/about/our_people/researchers/dr_hannah_sarvasy

http://dkb.research.pdx.edu

