[Lingtyp] Future of AI, Simulating Reality...

randylapolla randylapolla at protonmail.com
Fri Aug 8 14:13:20 UTC 2025


Dear Stela,
I’m disappointed you didn’t understand anything in the slides I sent, and so missed my point entirely.
Have you not heard of the Tagalog language? Or do you just call anything you don’t understand, or that seems odd to you, “Chinese”?
The point of the Tagalog analysis example wasn’t that the analysis was wrong, even though it was, but that the algorithm changed its answers to try to appease the person interacting with it. The algorithm is trained to do this so that the person will keep using it, but the responses got stranger and stranger as the exchange went on. So the algorithm is not a reliable source of information or analysis.

Randy

On Fri, Aug 8, 2025 at 9:26 PM, Stela MANOVA <manova.stela at gmail.com> wrote:

> Dear Randy,
>
> Hassabis is trying to crack the code of the universe. You’re countering that with two Chinese sentences — one in active voice and one in passive voice. That’s not just a different scale, it’s a different galaxy.
>
> And if linguists know ChatGPT works without semantics, why keep testing its semantic competence? Hassabis talks about simulating reality; you talk about communication. Again — two entirely different things.
>
> Sometimes our field slips into the “nobody can tell me something significant about language” stance. But if we want to think like scientists, the first step is simple: watch the interview. Then decide whether there’s something worth learning — or not.
>
> Best,
>
> Stela
>
>> On 08.08.2025, at 11:58, randylapolla via Lingtyp <lingtyp at listserv.linguistlist.org> wrote:
>>
>> Hi All,
>> The problem with AI is exactly that the people doing it do not understand communication. LLMs are simply statistical probability machines, so-called Artificial Narrow Intelligence. They work on inductive inference, but human cognition and communication rely mainly on abductive inference, and that is what computers will need to do for Artificial General Intelligence to be achieved. As yet they cannot do that, and most people in the field don’t even know that it is necessary. I’ll attach some slides from a talk I gave last November that contrasts ANI and AGI and discusses the relevance of different types of inference. At the same time, both ANI and AGI are dangerous, and I discuss that aspect as well.
>>
>> Randy
>>
>>> On 7 Aug 2025, at 4:11 PM, Stela MANOVA via Lingtyp <lingtyp at listserv.linguistlist.org> wrote:
>>>
>>> Dear Colleagues,
>>>
>>> I would like to draw your attention to an interview with Demis Hassabis, CEO of Google DeepMind and recipient of the 2024 Nobel Prize in Chemistry. The interview is featured on Lex Fridman’s podcast and is titled “Future of AI, Simulating Reality, Physics and Video Games”, available here: https://www.youtube.com/watch?v=-HzgcbRXUK8&t=860s.
>>>
>>> Although the conversation does not focus specifically on language, it offers valuable insights into large language models (LLMs), particularly how they can model language without possessing linguistic competence—relying instead on patterns derived from observation.
>>>
>>> Notably, many computer scientists consider LLMs to represent the first and most accessible step toward Artificial General Intelligence (AGI).
>>>
>>> Best regards,
>>> Stela Manova, PI Gauss:AI Global
>>
>> <LaPolla-ANI vs AGI.pptx>
>> _______________________________________________
>> Lingtyp mailing list
>> Lingtyp at listserv.linguistlist.org
>> https://listserv.linguistlist.org/cgi-bin/mailman/listinfo/lingtyp

