[Corpora-List] ANC, FROWN, Fuzzy Logic
Rob Freeman
lists at chaoticlanguage.com
Fri Jul 28 05:54:58 UTC 2006
On Thursday 27 July 2006 20:13, John F. Sowa wrote:
>
> Chomsky's fallacy was to take a mathematical formalism, namely
> Post production systems, and make the claim that they capture
> the fundamental nature of natural language. If he had softened
> that claim to saying they were a promising model of an important
> aspect of language, he and his colleagues could have done the
> same research, but without inciting the religious wars.
John,
I can't quite tell what you think the relationship between mathematics and science
should be. As long as it does not prevent you from accepting my conclusions, it is
probably not worth debating here.
However, I don't think Chomsky's rules ever had a chance of bearing out their
promise, and the reason was not that he got the relationship between theory and
practice wrong. He was right to insist that we reconcile the theory with the data
(in his case the "data" being that discovery methods did not give global rules;
others were ignoring this, and have continued to do so, so kudos to Chomsky). The
problem was that the particular theory he chose to believe required him to make his
rules maximally complete (in terms of coverage), and thus, by our theoretical
understanding, guaranteed they would be maximally uncertain.
It is only with reference to the correct theory that we can understand this.
Knowledge may be subjective, and it may even be ultimately random, but it is not
totally without consequences. The theoretical choice Chomsky made had consequences.
It is not the case that we should now see Universal Grammar as just one theory
among equals, neither more right nor more wrong, or that Chomsky and the others
could have got on together and avoided the "linguistics wars" in content as well as
in manner. Subjectivity does not mean every theory is right; it means we must
concentrate on the issue of discovering what is right, and when.
With the new theoretical insight that there is a trade-off involved, we can now
remedy the problem. Since what we usually want is maximal certainty, we must look
for ways to buy it by accepting maximal incompleteness (in terms of coverage), that
is, by making our generalizations maximally specific.
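To make that trade-off concrete, here is a toy sketch of my own (the little corpus,
the function names and the measures are purely illustrative assumptions, not
anyone's actual system). It conditions continuations on histories of increasing
length drawn from a tiny corpus: the longer, more specific histories predict their
continuations with more certainty (lower entropy), but each such generalization
covers fewer occurrences.

# Toy illustration of the coverage/certainty trade-off for corpus generalizations.
from collections import Counter, defaultdict
from math import log2

corpus = ("the cat sat on the mat . the dog sat on the rug . "
          "the cat lay on the mat . the dog lay on the rug .").split()

def continuation_tables(tokens, n):
    """Map each history of n tokens to a Counter of the words that follow it."""
    tables = defaultdict(Counter)
    for i in range(len(tokens) - n):
        history = tuple(tokens[i:i + n])
        tables[history][tokens[i + n]] += 1
    return tables

def entropy(counter):
    """Shannon entropy (bits) of the continuation distribution for one history."""
    total = sum(counter.values())
    return -sum((c / total) * log2(c / total) for c in counter.values())

for n in (1, 2, 3):
    tables = continuation_tables(corpus, n)
    avg_entropy = sum(entropy(t) for t in tables.values()) / len(tables)
    avg_occurrences = sum(sum(t.values()) for t in tables.values()) / len(tables)
    print(f"history length {n}: {len(tables)} distinct histories, "
          f"avg occurrences per history {avg_occurrences:.2f}, "
          f"avg continuation entropy {avg_entropy:.2f} bits")

On this toy data the one-word histories are the most general but the most ambiguous,
while the three-word histories each occur only once or twice but leave almost
nothing uncertain; that is exactly the sense in which certainty is bought with
incompleteness.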
I am sure it is the technology for fitting a theory (a grammatical generalization,
in the case of language) to its purpose that will prove most useful, not any one
generalization or another.
I'm saying this because I know you advocate a solution for the sister problem of
this grammatical incompleteness, ontological incompleteness, what you call
"knowledge soup", by simply enumerating numerous theories. I don't think this is
workable: to be useful, the theories would have to be far too specific. I'm sure
the only representation compact enough to make the necessary specificity of fit
possible will prove to be something close to the raw data, a corpus in the case of
language.
-Rob