On Sun, Nov 21, 2010 at 7:20 AM, <amsler@cs.utexas.edu> wrote:
> However, corpora were well established as the basis for lexicography in the US by the 1970s, with books such as the American Heritage Word Frequency Book serving as the basis for the "American Heritage Dictionary of the English Language" (Houghton Mifflin Co., 1969); see the foreword essay in the dictionary by Henry Kucera, "Computers in Language Analysis and in Lexicography". This of course followed his significant "Computational Analysis of Present-Day American English" (Kucera & Francis, Brown U. Press, 1967).
>
> Just out of curiosity, what were the discoveries about grammar and linguistics that have come from corpora that were not marketed in the US before 1970? Or is this just a philosophical attitude? Note: I'm not taking sides here; I just don't know what grammatical/linguistic rules came from corpora studies that linguists were ignoring in the US before 1970.

I didn't and wouldn't make the claim that there were grammatical/linguistic rules that came from corpora before 1970. Corpus builders produced reusable knowledge about how to collect controlled samples of language, how to assess and study variability, and how to begin to answer questions about register and usage. For me these are linguistic questions, even though the mainstream of generative linguistics has only recently begun to re-address them, after decades of (arguably benign) neglect.

But what was learnt was primarily about corpora and what they are good for, and it did not correspond particularly closely to the concerns of the theoreticians.

Yorick is right to point to his work with Krotov et al. Richard Sharman found similar things in (if I recall correctly) the early 90s: the accession rate of rules in a GPSG-ish grammar did not seem to stabilize as the number of sentences in the sample grew. These studies really do bear directly, and negatively, on the claim that you can build a finite grammar for realistic language samples. The best way to attack them would be to demonstrate a more expressive grammar formalism under which rule growth somehow shows the expected graceful asymptotic behaviour. I am surprised that few (if any) theoretical linguists have been prepared to undertake the mental retooling necessary to take on this challenge: success would be a very compelling demonstration of their claims for the power of good representations.
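
For anyone who wants to see the effect for themselves, here is a minimal sketch of the kind of measurement involved. It is an illustration, not the procedure from Krotov et al. or from Sharman's study, and it assumes NLTK with its bundled Penn Treebank sample: read off the context-free productions from each parsed sentence and track how many distinct rules have accumulated. If the grammar were finite and the sample representative, the curve would flatten out; what one typically sees instead is that it keeps climbing.

    # Sketch: cumulative count of distinct CFG productions as more parsed
    # sentences are read in. Assumes NLTK with the bundled Penn Treebank
    # sample installed (nltk.download('treebank')); illustration only,
    # not the procedure from the studies mentioned above.
    from nltk.corpus import treebank

    seen_rules = set()
    growth = []  # (sentences read, distinct productions seen so far)

    for i, tree in enumerate(treebank.parsed_sents(), start=1):
        seen_rules.update(tree.productions())
        growth.append((i, len(seen_rules)))

    # Print checkpoints: a converging grammar would show the second column
    # levelling off as the first column grows.
    for n_sents, n_rules in growth[::500] + [growth[-1]]:
        print(f"{n_sents:5d} sentences -> {n_rules:6d} distinct productions")

Plotting the two columns gives the familiar accession curve; running the same loop over a larger treebank makes the non-convergence harder to miss.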

> On 11/20/2010 10:36 AM, chris brew wrote:
>> it's safe to assume that most things about corpora were discovered and
>> carefully documented (but not necessarily marketed in the US) before 1970

-- 
Chris Brew, Ohio State University