[Corpora-List] QM analogy and grammatical incompleteness

Rob Freeman lists at chaoticlanguage.com
Tue Dec 20 00:50:53 UTC 2005


On Monday 19 December 2005 16:00, Dominic Widdows wrote:
>
> To your more general points about language. I think that the goal of
> complete, predictive knowledge of any complex language system is bound
> to lead to disappointment. But I don't think that this invalidates the
> goal of getting as much of it right as possible! We know that a
> part-of-speech tagger trained on texts in one domain might not do so
> well in other domains, but this doesn't at all mean that the system
> isn't very valuable. We have to get better at the adaptive part as
> well, and there has been plenty of recent and fruitful work to address
> this part of the language processing challenge. New fields have to find
> their balance between deduction and induction, and it is a shame if we
> spurn one another's work too readily.

You missed my point a little, Dominic. I wasn't saying that predictive 
knowledge of language is impossible; I was saying only that a complete 
description of _grammar_ is impossible, because a kind of uncertainty 
principle applies to the simultaneous characterization of text in terms of 
different grammatical abstractions, if those abstractions are defined 
distributionally.
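
To make "defined distributionally" concrete, here is a toy sketch (the 
corpus, the left-versus-right context split, and all the names are my own 
illustrative choices, nothing standard): classify words once by their left 
contexts and once by their right contexts, and see whether the two 
classifications agree.

# Toy illustration: "grammatical class" defined distributionally,
# once from left contexts and once from right contexts.
from collections import Counter, defaultdict
from math import sqrt

corpus = [
    "the dog chased the cat",
    "the cat chased the dog",
    "a dog bit a man",
    "a man fed the dog",
    "the man saw a cat",
]

left = defaultdict(Counter)   # word -> counts of words seen to its left
right = defaultdict(Counter)  # word -> counts of words seen to its right
for sentence in corpus:
    tokens = sentence.split()
    for i, w in enumerate(tokens):
        if i > 0:
            left[w][tokens[i - 1]] += 1
        if i < len(tokens) - 1:
            right[w][tokens[i + 1]] += 1

def cosine(c1, c2):
    # Cosine similarity between two context-count vectors.
    dot = sum(c1[k] * c2[k] for k in c1)
    n1 = sqrt(sum(v * v for v in c1.values()))
    n2 = sqrt(sum(v * v for v in c2.values()))
    return dot / (n1 * n2) if n1 and n2 else 0.0

words = sorted(set(w for s in corpus for w in s.split()))
for view_name, view in (("left-context", left), ("right-context", right)):
    print(view_name, "nearest neighbours:")
    for w in words:
        best = max((x for x in words if x != w),
                   key=lambda x: cosine(view[w], view[x]))
        print(f"  {w:7s} -> {best}")

Even on this toy data, some words' nearest neighbours differ between the two 
views; that divergence is the kind of incompatibility I mean: sharpening the 
classification under one distributional criterion blurs it under another.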

As you point out, QM is very useful for making predictions about physical 
systems, and it does that in spite of its cheeky claims about fundamental 
limits on knowability.

POS tags are fine, and Newton's laws are fine, but ignoring their limitations 
does them no respect.

Let us have a clear statement of their limitations, if limited they are.

In short: do you believe there is a limitation on knowledge, analogous to the 
Uncertainty Principle of QM, that applies to the simultaneous 
characterization of text in terms of grammatical qualities (defined 
distributionally)?

-Rob


