The Chinese Diplomat's "the" (2)

Rob Malouf rmalouf at mail.sdsu.edu
Mon Aug 30 17:08:36 UTC 2004


On Mon, 2004-08-30 at 09:01, Salinas17 at aol.com wrote:
> In a message dated 8/30/04 11:22:51 AM, rmalouf at mail.sdsu.edu writes:
> << So, yeah, if he'd ever wanted to tell a valet to "please get a car", the
> system would have inserted an unwanted "the".  Fortunately, hardly anyone ever
> does that,
> so the problem doesn't come up very often. >>
>
> "...get a car."  It is what I say all the time in reference to rental cars at
> the airport.  And guys like Tony Soprano might say it with regard to the cars
> they want gotten.  You're working with a limited context.  In any case, the
> actual odds are extra-linguistic.

Why draw a distinction between linguistic and extra-linguistic factors?
I thought we were functionalists here! :-)

As I said, it's easy to construct examples that confound a system like
this.  The striking thing is that such examples are fairly rare in
actual language use.  A very simple program is able to guess the right
article for 85% of the common nouns from a sample of the Wall Street
Journal.  Of the remaining 15%, some of the articles the system
generated would work just as well as the original one in the text, so
the actual rate of "wrong" predictions is somewhat less than 15%.  And
many of the remaining errors would be resolved correctly if we simply
had a larger reference corpus.

As a linguist, I think the fact that such an obviously inadequate system
performs as well as it does is interesting.  Not because it gives us a
plausible model of human language processing, but because it gives an
empirical measure of just how rare the truly hard cases are.
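
To make "a very simple program" a bit more concrete, a toy version of
the memory-based idea might look something like the sketch below.  The
feature set (two words to the left of the article plus the head noun),
the similarity measure, and the data are placeholders I've made up for
illustration; they are not the features or the corpus the actual system
uses.

from collections import Counter

ARTICLES = {"a", "an", "the"}

def instances(tokens):
    """Extract (context, article) instances: the two words before
    the article and the head noun after it."""
    out = []
    for i, tok in enumerate(tokens):
        if tok in ARTICLES and 2 <= i < len(tokens) - 1:
            context = (tokens[i - 2], tokens[i - 1], tokens[i + 1])
            out.append((context, tok))
    return out

def overlap(c1, c2):
    """Similarity = number of feature positions that match."""
    return sum(1 for a, b in zip(c1, c2) if a == b)

def predict(memory, context):
    """Guess an article by voting among the stored instances that
    are most similar to the new context."""
    best = max(overlap(stored, context) for stored, _ in memory)
    votes = Counter(art for stored, art in memory
                    if overlap(stored, context) == best)
    return votes.most_common(1)[0][0]

# Toy "reference corpus" and held-out test sentence (invented data).
train = "please get the car from the garage and take a cab home".split()
memory = instances(train)

test = "please get the car for our guest".split()
gold = instances(test)
correct = sum(predict(memory, ctx) == art for ctx, art in gold)
print(correct, "of", len(gold), "articles guessed correctly")

The evaluation behind a figure like the 85% above is just this last
step on a much larger scale: count how often the guess matches the
article that actually appears in the text.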

> <<It's hard for me to imagine anything less "structuralist" than an
> instance-based model like this one. The system produces an article for a sequence like
> "please get ___ car"  by searching a reference corpus for similar patterns.>>
>
> It is completely structural in how it gets to output.

How so?  There's no grammar or grammaticality, no rules or categories,
no notion of contrastive or complementary distribution.  There is a
gradient measure of sequence similarity, which I guess is a bit like the
structuralist idea of an opposition, but it's not one I would expect
Saussure or Bloomfield to endorse.  True, the task the system was
evaluated on is structuralist-ish, but only because its results are easy
to measure; and since it's at least as hard as the task we really care
about (finding an article that does the right thing in a given context),
it gives us an upper bound on the error rate.  [Actually,
to be honest, if you read the fine print, some notion of category does
get smuggled in by the back door in this particular system, but that's
not a necessary feature of a memory-based model.]
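
And since "gradient measure of sequence similarity" may sound more
exotic than it is: in a memory-based setup it usually amounts to a
weighted feature overlap, something along these lines (again a sketch;
the weights and features are stand-ins, not the metric this particular
system uses):

def weighted_overlap(c1, c2, weights=(1.0, 2.0, 2.0)):
    """Graded similarity between two contexts: each matching feature
    adds its weight, so two contexts can be more or less alike rather
    than simply same/different.  Feature order here is (word two back,
    word one back, head noun); the weights are invented, though in
    practice they might be estimated from the corpus."""
    return sum(w for a, b, w in zip(c1, c2, weights) if a == b)

print(weighted_overlap(("please", "get", "car"),
                       ("kindly", "get", "car")))    # 4.0: near match
print(weighted_overlap(("please", "get", "car"),
                       ("they", "bought", "cars")))  # 0.0: no overlap

Weighting the features is what makes the measure gradient rather than
all-or-nothing.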

> But the fact that you've found
> predictability in the patterns of speech doesn't necessarily provide an
> explanation of those patterns -- other than perhaps we are in the habit of talking
> about the same things for the same reasons in the same ways from day to day.

What more explanation do you need? ;-)
--
Rob Malouf <rmalouf at mail.sdsu.edu>
Department of Linguistics and Oriental Languages
San Diego State University


