The Chinese Diplomat's "the"
Ellen F. Prince
ellen at central.cis.upenn.edu
Mon Aug 30 16:04:14 UTC 2004
R. Malouf writes:
>On Aug 30, 2004, at 7:34 AM, Salinas17 at aol.com wrote:
>> In a message dated 8/29/04 7:14:03 PM, rmalouf at mail.sdsu.edu writes:
>> << At any rate, the performance of the best models is getting close to
>> that of humans at guessing which article will be used in a given context. >>
>> There's an irony to why one sees such adherence to structuralist
>> criteria on
>> the "functional" linguistics list. In most situations, of course, a
>> model cannot possibly predict the use of "the" versus "a" unless it
>> also reads minds.
>It's hard for me to imagine anything less "structuralist" than an
>instance-based model like this one. The system produces an article for
>a sequence like "please get ___ car" by searching a reference corpus
>for similar patterns. If it finds sequences like "please get the car"
>more often than "please get a car" or "please get car", it produces a
>"the".
>The amazing thing is that this actually works! If we take a corpus,
>strip out all the articles, and use the system to try to recover them,
>it's right almost 85% of the time. This can be further improved
>somewhat by providing the system with an ontology of noun meanings (so
>it can draw generalizations about words which don't occur in the
>reference corpus but have very similar meanings to words which do).
>No, it's never going to be right 100% of the time, at least until we
>can read minds, but in most situations, very simple information about
>the context is all that's needed.
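The instance-based approach Malouf describes can be sketched in a few lines: count how often each candidate article appears in the slot, given the surrounding words, in a reference corpus, and emit the most frequent one. This is a minimal illustration, not the actual system he refers to; the toy corpus, the function name, and the exact-string matching are all assumptions made for the sketch (a real system would match over far more context and back off through an ontology of noun meanings).

```python
from collections import Counter

# Hypothetical miniature "reference corpus" -- a real system would
# search millions of sentences, not five.
CORPUS = [
    "please get the car",
    "please get the car",
    "please get a car",
    "she bought a car",
    "open the door",
]

def guess_article(before, after, corpus=CORPUS):
    """Guess which article ('the', 'a', or none) fills the slot in
    'before ___ after' by counting matching patterns in the corpus."""
    counts = Counter()
    for candidate in ("the", "a", ""):
        # Build the candidate pattern, skipping the empty article.
        pattern = " ".join(w for w in (before, candidate, after) if w)
        counts[candidate] = sum(s.count(pattern) for s in corpus)
    best, n = counts.most_common(1)[0]
    return best if n > 0 else ""

guess_article("please get", "car")  # -> 'the' (2 matches vs. 1 for 'a')
```

The point of the sketch is that nothing in it models the speaker's or hearer's beliefs: the prediction rides entirely on surface frequency, which is exactly the property at issue in the exchange below.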
This may be an attractive solution for producing software for the
market -- but it is simply hilarious as any sort of model of how
humans use language.
Imagine two company robots flying to a remote destination together. One has
the kind of software you are describing; the other has human-like
competence in the use of articles. After collecting their baggage, the
one with your (kind of) software says to the other one, 'I've just realized
that we need the car, please.' Being an obedient robot and understanding
the request as a human would, the requestee boards the next flight back
home, since the only thing s/he/it can infer from _the car_ in this
context is their company car...
The fact that people typically drive their own car, which is Hearer-known
or Inferrable and hence typically definite, more often than a rental car,
which can be Hearer-new and hence typically indefinite, is profoundly
irrelevant to human language processing/competence -- even if it'll get
the software developer safely thru a demo (almost) 85 out of 100 times...
And, by the way, to deal with linguistic reference, we only have to
'read minds' as well as the average speaker does -- i.e. not at all.
What we need is a large and relevant knowledge-base and a system of
plausible reasoning, both needed anyway for other aspects of AI, as
well as some form-function correspondences for each language. IOW,
we need what language users have.