Rich, thank you for posting this pointer! I have not yet had a chance to read the paper, but on a quick scan, I notice that they do not reference<br><br><div style="margin-left: 40px;">Shyam <span class="nfakPe">Kapur</span>
, "Computational Learning of Languages", Cornell dissertation, 1991. <a href="http://portal.acm.org/citation.cfm?id=866568" target="_blank">http://portal.acm.org/citation.cfm?id=866568
</a>. Postscript and PDF available at <a href="http://ecommons.library.cornell.edu/handle/1813/7074" target="_blank">http://ecommons.library.cornell.edu/handle/1813/7074</a>.
<br></div><br>I suspect that anyone interested in the Chater and Vitanyi paper will also find this of interest, particularly the results of Chapter 5, where <span class="nfakPe">Kapur</span>
writes, "... in accordance to a suggestion due to Gold (1967), maybe we
can learn more families if we insist on convergence on most (instead of
all) texts [i.e. sequences of sentences presented as positive examples to the
learner -PSR]". His work shows that with this convergence
criterion, it is possible to obtain "a uniform learning algorithm that
works for <i>every </i>family of languages" [my emphasis], subject to
stochastic assumptions about the input that, if I understood/recall
correctly, avoid problems with the sorts of pathological texts that Gold's proof relied on.<br><br>Happy holidays,<br><br> Philip<br><br><br><div class="gmail_quote">On Dec 27, 2007 12:41 PM, Rich Cooper <<a href="mailto:Rich@englishlogickernel.com">
Rich@englishlogickernel.com</a>> wrote:<br><blockquote class="gmail_quote" style="border-left: 1px solid rgb(204, 204, 204); margin: 0pt 0pt 0pt 0.8ex; padding-left: 1ex;">Not everyone seems to believe that the "poverty of the stimulus" is a valid
<br>argument.<br><br>Here is a mathematically supported paper that provides a deep treatment of<br>learning language<br>"Ideal Learning of Natural Language: Positive Results from Learning about<br>Positive Evidence"
<br>by Nick Chater at Univ Coll London:<br><a href="http://eprints.pascal-network.org/archive/00002798/01/jmp06.pdf" target="_blank">http://eprints.pascal-network.org/archive/00002798/01/jmp06.pdf</a><br><br>His claim is that an ideal learner, using Kolmogorov complexity methods, can
<br>provide both the positive and the negative learning needed for an ideal<br>learner, even though only given positive evidence. The absence of evidence<br>for certain constructions is treated as evidence that those constructions
<br>are ungrammatical.<br><br>Comments appreciated.<br><br>Sincerely,<br>Rich Cooper<br><a href="http://www.EnglishLogicKernel.com" target="_blank">http://www.EnglishLogicKernel.com</a><br><br><br><br><br><br>_______________________________________________
<br>Corpora mailing list<br><a href="mailto:Corpora@uib.no">Corpora@uib.no</a><br><a href="http://mailman.uib.no/listinfo/corpora" target="_blank">http://mailman.uib.no/listinfo/corpora</a><br><br></blockquote></div><br>
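For readers wanting a quick intuition for how positive-only evidence can do the work of negative evidence, here is a toy sketch of the "size principle" idea: under a probabilistic learner that assumes sentences are sampled from the target language, a narrower grammar consistent with the data overtakes a broader one as examples accumulate, because the broader grammar keeps "predicting" sentences that never appear. This is my own minimal illustration under a uniform-sampling assumption, with made-up two-sentence languages; it is not Chater and Vitanyi's actual construction, which uses Kolmogorov complexity rather than a fixed hypothesis space.

```python
# Toy illustration of implicit negative evidence via the size principle.
# Two hypothetical candidate languages, the smaller nested in the larger;
# both are consistent with any sample drawn from the smaller one.
from math import log

L_small = {"a b", "a a b"}              # hypothetical narrow grammar
L_big = L_small | {"b a", "b b a"}      # hypothetical broader grammar

def log_posterior(hypothesis, sample, prior=0.5):
    """Unnormalized log posterior, assuming sentences are drawn
    uniformly from the hypothesized language: P(s | L) = 1/|L|."""
    if any(s not in hypothesis for s in sample):
        return float("-inf")            # hypothesis ruled out outright
    return log(prior) + len(sample) * -log(len(hypothesis))

# Ten positive examples, all from the narrow language.
sample = ["a b"] * 10

# The narrow grammar wins: the broad one paid a likelihood penalty for
# every sentence it licenses but that never showed up in the sample.
print(log_posterior(L_small, sample) > log_posterior(L_big, sample))
```

The absence of "b a"-type sentences was never presented as a negative example, yet it systematically lowers the broader grammar's score; that is the sense in which absence of evidence becomes evidence of ungrammaticality.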