[Lingtyp] On AI, Language, and (In)Human Thinking
Stela MANOVA
manova.stela at gmail.com
Sun Nov 9 21:13:32 UTC 2025
Dear colleagues,
It turns out that the topic of LLMs is exactly like the topic of language: everybody feels competent, irrespective of their qualifications. In what follows, I would like to address some misinformation about AI that appeared in recent messages on this list: that LLMs are designed to generate costs and therefore end virtually every answer with a question, and that they are “dangerous” because they operate in an entirely inhuman way, based only on form.
As many of you know, my work focuses on ChatGPT, so I will use it in my examples:
ChatGPT knows many things but cannot start a conversation. It needs prompts, i.e., contextual anchors, to select the next token. This is why it often ends answers with a question: not because it “thinks of money,” but because it seeks additional input. The larger the input, the better the answer. Note that because AI is reactive, not proactive, the human remains in control of the machine.
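For those who would like to see what “selecting the next token from contextual anchors” amounts to in practice, here is a minimal sketch in Python. It uses a toy bigram model rather than a real transformer, and the tiny corpus and all names in it are my own illustrative assumptions, not anything taken from ChatGPT; the only point it makes is that nothing can be generated until a prompt supplies the first anchor.

    # Toy illustration of prompt-conditioned next-token selection.
    # This is a bigram model over a tiny made-up corpus, not a real LLM;
    # the corpus and all names here are illustrative assumptions.
    import random
    from collections import defaultdict

    corpus = "the cat sat on the mat . the dog sat on the rug .".split()

    # Count which token follows which. In this toy model the "contextual
    # anchor" is just the previous token; a real LLM conditions on the
    # whole prompt plus everything generated so far.
    follow_counts = defaultdict(lambda: defaultdict(int))
    for prev, nxt in zip(corpus, corpus[1:]):
        follow_counts[prev][nxt] += 1

    def next_token(context):
        """Pick the next token given the last token of the context."""
        anchor = context[-1]
        candidates = follow_counts.get(anchor)
        if not candidates:
            return None  # no usable anchor: generation stalls
        tokens, counts = zip(*candidates.items())
        return random.choices(tokens, weights=counts)[0]

    prompt = ["the"]  # without a prompt, generation cannot start
    for _ in range(6):
        tok = next_token(prompt)
        if tok is None:
            break
        prompt.append(tok)
    print(" ".join(prompt))

In a real LLM the anchor is the entire prompt plus everything generated so far, and the probabilities come from a trained neural network rather than from counts, but the reactive logic is the same: no prompt, no output.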
LLMs are not mere text collections but a triumph of human intelligence. Surely no one believes that if you had the whole Internet in text format, that huge amount of text would start speaking like a human by itself. Behind LLMs lies immense conceptual and mathematical work, all done by mathematically gifted humans. I describe the mindset of such people in my paper https://ling.auf.net/lingbuzz/008998, including how the creators of ChatGPT arrived at the idea of representing language as a linear sequence of tokens. The idea of putting all languages in a shared representational space is equally remarkable: that way they get everything (grammar, semantics, typology, etc.) for free, so to speak; the data classify themselves, and the model can work even with pieces of data and in many languages simultaneously. Compare this with the linguistic approach: each language is described separately; we compare data only when (complete) language descriptions are ready; and the transfer of classificatory features from language to language is not always obvious or smooth.
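To give a rough picture of the “shared representational space,” here is another small Python sketch. The vectors below are hand-assigned toy values that I invented for illustration (real models learn such representations from enormous amounts of text), and the words are my own examples; the point is only that items from different languages sit in one and the same space, so their similarity can be measured directly, without a separate description of each language.

    # Toy shared vector space holding words from several languages.
    # The vectors are invented for illustration only; real LLMs learn
    # such representations from data rather than having them assigned.
    import math

    space = {
        ("en", "dog"):   [0.90, 0.10, 0.00],
        ("de", "Hund"):  [0.85, 0.15, 0.05],
        ("en", "cat"):   [0.10, 0.90, 0.00],
        ("bg", "kotka"): [0.12, 0.88, 0.03],
    }

    def cosine(u, v):
        """Cosine similarity: how closely two vectors point in the same direction."""
        dot = sum(a * b for a, b in zip(u, v))
        norm_u = math.sqrt(sum(a * a for a in u))
        norm_v = math.sqrt(sum(b * b for b in v))
        return dot / (norm_u * norm_v)

    # Similarity is computed in one space, regardless of language labels.
    items = list(space.items())
    for i, ((lang1, w1), v1) in enumerate(items):
        for (lang2, w2), v2 in items[i + 1:]:
            print(f"{lang1}:{w1} ~ {lang2}:{w2} -> {cosine(v1, v2):.2f}")

Run on these toy values, "dog" and "Hund" come out far more similar than "dog" and "kotka", even though no English or German grammar was described anywhere. In a real model the geometry emerges from the data themselves, which is what I mean by the data classifying themselves.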
As those of you who have read the paper mentioned above know, I was educated as a math-gifted student for ten years. What I do not mention in the paper is that my brain was trained for mathematical thinking at least five hours a day, every day except Sundays — yes, for ten years — to form the necessary neural connections. And yes, I learned university mathematics in my teens. A mathematically gifted child often hears three things:
i) All problems are already solved in the real world; you only need to find the right analogy.
ii) It is the belly that knows, not the head. Listen to your belly!
iii) All problems have more than one solution.
Allow me now to propose a short experiment related to the alleged “inhuman” way LLMs treat language — namely, by separating meaning and form. (By “separation” I mean only that the two aspects can be processed or represented independently for a while; through the mediation of the human brain, both then appear meaningful again.) The goal of the experiment is to make you experience the way of thinking of a math-gifted person and to demonstrate that there is an analogy to the ChatGPT approach to language in the real world.
So, the task: Can you find an activity in the real world where form and meaning are separated, yet each can be reconstructed from the other? If you can, the separation of meaning and form is not alien to the human brain.
Following the spirit of mathematical training — where there is always a time limit, including a period for full concentration (with a “no restroom” rule) — I propose the following:
It is now November 9, 22:00 CET, and I am giving you two days (until November 11, 22:00 CET) to find the activity described in the task (it could be from any sphere of human life). During this period, I kindly ask that no messages be posted in this thread, so that we can all focus on the task (the “no restroom” rule does not apply).
After the time limit elapses, we will collect our findings and discuss them. I have already solved the task (and mentioned the solution in an exchange with a well-known neurolinguist — I hope this does not spoil the experiment). On the third day, I will share my answer and look forward to hearing yours.
I hope this demonstration will bring you closer to the thinking behind ChatGPT and show that there is nothing “dangerous” in the model — only a new, fascinating way of representing language grounded in the real world. (Think of the shared representational space as the Earth where all humans live.)
Best wishes,
Stela / Gauss:AI Global