[Ads-l] Slang Sense of "Catfishing"
ADSGarson O'Toole
adsgarsonotoole at GMAIL.COM
Mon Jan 8 21:18:36 UTC 2024
Amy West said:
> Just earlier this morning I was wondering about the choice of
> "hallucinate" vs. "fabricate" for AIs making up sh*t.
Some commentators have suggested using the term "confabulate" instead
of "hallucinate" when discussing AI and LLM systems. Here is a tweet
from April of last year.
[Begin tweet info]
https://x.com/jordancurve/status/1646372327352049664
Josh Jordan @jordancurve 12:38 AM · Apr 13, 2023
Great point – in the context of describing LLM behavior, “confabulate”
is a more accurate term than “hallucinate”
[End tweet info]
[Begin Merriam-Webster excerpt]
confabulate, intransitive verb
1: to talk informally : CHAT
2: to hold a discussion : CONFER
3: to fill in gaps in memory by fabrication
[End Merriam-Webster excerpt]
An LLM (large language model) system constructs a probabilistically
plausible extrapolation of a text string. Unfortunately, the text the
LLM generates is not guaranteed to be veridical. One strategy to
reduce confabulations would require the AI system to use the generated
text to locate pertinent passages from the training corpus (and from
databases deemed reliable). The system would reread the key passages
to assess and improve the accuracy of the generated text.
Garson
On Mon, Jan 8, 2024 at 9:58 AM Ben Zimmer <bgzimmer at gmail.com> wrote:
>
> Yes, Emily Bender did make that complaint about "hallucination" in the WOTY
> voting. I quoted her along similar lines when I wrote about "hallucination"
> for my Wall Street Journal column back in April.
>
> ---
> https://www.wsj.com/articles/hallucination-when-chatbots-and-people-see-what-isnt-there-91c6c88b?st=spvdnolqjd9u5aj&reflink=desktopwebshare_permalink
> University of Washington computational linguist Emily M. Bender agrees,
> noting that the term incorrectly “suggests that the language model is
> having perceptions and experiences.” She also is concerned about “making
> light of what can be a serious mental health symptom.” While alternatives
> like “synthesized ungrounded text” might be a mouthful, they avoid the
> pitfalls of imagining that AI systems have human perceptions, hallucinatory
> or otherwise.
> ---
>
> On Mon, Jan 8, 2024 at 8:16 AM Joe Salmons <
> 000008f18d0e0c45-dmarc-request at listserv.uga.edu> wrote:
>
> > Was it Emily Bender who said at WOTY that ‘hallucinate’ makes AI seem like
> > it has consciousness or something to that effect?
> > Joe
> >
> > From: American Dialect Society <ADS-L at LISTSERV.UGA.EDU> on behalf of Amy
> > West <medievalist at W-STS.COM>
> > Date: Monday, January 8, 2024 at 08:07
> > To: ADS-L at LISTSERV.UGA.EDU <ADS-L at LISTSERV.UGA.EDU>
> > Subject: Re: Slang Sense of "Catfishing"
> > On 1/8/24 12:00 AM, ADS-L automatic digest system wrote:
> > > Date: Sun, 7 Jan 2024 12:49:27 -0500
> > > From: Jonathan Lighter <wuxxmupp2000 at GMAIL.COM>
> > > Subject: Re: Slang Sense of "Catfishing"
> > >
> > > If a machine can invent an apposite title like "Love in the Time of
> > Spam,"
> > > we may as well admit it's over.
> > >
> > > I've had even more spectacularly future-dystopic experiences with both
> > > Google Bard and Bing, but I'll spare you the details.
> > >
> > > JL
> >
> >
> > Just earlier this morning I was wondering about the choice of
> > "hallucinate" vs. "fabricate" for AIs making up sh*t.
> >
> > ---Amy West
------------------------------------------------------------
The American Dialect Society - http://www.americandialect.org