<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.0 TRANSITIONAL//EN">
<HTML>
<HEAD>
<META HTTP-EQUIV="Content-Type" CONTENT="text/html; CHARSET=UTF-8">
<META NAME="GENERATOR" CONTENT="GtkHTML/3.18.3">
</HEAD>
<BODY>
My view is somewhat different. It is easy enough to extract every fifth word from a corpus and observe differences in the resulting lists, but that does little to tell us how reliable our original list is. What this approach can give is some insight into the Zipfian distribution model and its relationship to frequency lists. Back in the 1970s there were studies of this sort (I don't have a reference at hand and will try to check a bit later; it may be one of the references mentioned in Sinclair's 1987 COBUILD collection). Among other things, such studies predicted that a Brown-like corpus of one million words was good enough for a reliable frequency list of the top 2,000 words (the figures are rough; I will have to check the reference). This was based on the notion of confidence intervals, giving 99% confidence that the words in the list are the same as the words in the entire population of texts. However, this does not mean that the frequency list is in any way reliable. Have a look at the tail of the top-2,000 list from the Brown corpus (lemmatised by TreeTagger):
<PRE>
1988 52 arthur
1989 52 stranger
1990 52 bag
1991 52 proud
1992 52 administrative
1993 52 los
1994 52 possess
1995 52 scientist
1996 52 liberty
1997 52 surround
1998 52 critic
1999 52 grin
2000 52 disappear
</PRE>
The problem obviously comes from the composition of the corpus: American texts in, 'Los Angeles/Alamos' out; fiction in, 'grin' out. Thinning the original corpus might swap 'grin' for 'bark', but it should not change the overall composition of the list on average.<BR>
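<BR>
To make the every-fifth-word experiment concrete, here is a minimal sketch (Python; the file name 'brown.lemmas', assumed to hold one lemma per line of running text, and the cut-off of 2,000 are placeholders only) that builds the top-N list from the full corpus and from a 1-in-5 thinning of it, and reports how much the two lists overlap:<BR>
<PRE>
# Minimal sketch: compare the top-N lemma list of a corpus with the list
# obtained from a 1-in-5 thinning of the same running text.
# 'brown.lemmas' is a hypothetical file with one lemma per line
# (e.g. TreeTagger output); the file name and N=2000 are placeholders.
from collections import Counter

def top_n(lemmas, n):
    """Return the set of the n most frequent lemmas."""
    return {lemma for lemma, _ in Counter(lemmas).most_common(n)}

with open('brown.lemmas', encoding='utf-8') as f:
    lemmas = [line.strip().lower() for line in f if line.strip()]

full_list = top_n(lemmas, 2000)            # list from the whole corpus
thinned_list = top_n(lemmas[::5], 2000)    # list from every fifth token

overlap = len(full_list.intersection(thinned_list)) / 2000
print(f'Top-2000 overlap between full and 1-in-5 corpus: {overlap:.1%}')
</PRE>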
<BR>
In my view a more qualitative approach can yield more revealing results: take corpora with different compositions and find the differences in distribution between them. How does the frequency list from newswires differ from the list from blogs, or from fiction? How does ukWac differ from a (hypothetical) usWac? Any thoughts on this? <BR>
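<BR>
As a rough illustration of what such a comparison might look like, the sketch below (Python again; the file names 'newswire.lemmas' and 'fiction.lemmas' and the 10,000-lemma cut-off are hypothetical) pairs up the lemmas shared by two corpora and lists those whose ranks shift most between them:<BR>
<PRE>
# Rough sketch: given lemmatised running text from two differently composed
# corpora (e.g. newswire vs. fiction), report the lemmas whose frequency
# rank differs most between them. File names and the cut-off are hypothetical.
from collections import Counter

def ranks(path, cutoff=10000):
    """Map each of the top `cutoff` lemmas in the file to its rank."""
    with open(path, encoding='utf-8') as f:
        counts = Counter(line.strip().lower() for line in f if line.strip())
    return {lemma: rank for rank, (lemma, _) in
            enumerate(counts.most_common(cutoff), start=1)}

news = ranks('newswire.lemmas')
fiction = ranks('fiction.lemmas')

shared = set(news).intersection(fiction)
shifts = sorted(shared, key=lambda w: abs(news[w] - fiction[w]), reverse=True)
for lemma in shifts[:20]:
    print(f'{lemma}\tnews rank {news[lemma]}\tfiction rank {fiction[lemma]}')
</PRE>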
<BR>
Serge<BR>
<BR>
On Thu, 2009-04-02 at 09:48 +0100, Adam Kilgarriff wrote:<BR>
<BLOCKQUOTE TYPE=CITE>
Mark,<BR>
<BR>
Nice question!<BR>
<BR>
I'm pretty confident it hasn't been seriously studied yet. A critical factor will be sample sizes (e.g. text lengths) and whether any action has been taken to adjust downwards the frequencies of words that occur heavily in a small number of texts. (In the Sketch Engine we use ARF, 'Average Reduced Frequency', for this; see also Stefan Gries's recent survey of dispersion measures.)<BR>
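<BR>
For concreteness: ARF, as defined by Savický and Hlaváčová, takes a word with frequency f in a corpus of N tokens, sets v = N/f, measures the gaps d_1 ... d_f between successive occurrences (wrapping around the corpus end), and computes ARF = (1/v) * sum of min(d_i, v). An evenly spread word keeps an ARF close to f, while a word crammed into a few texts is pushed down towards 1. A minimal sketch of the formula, with made-up token positions:<BR>
<PRE>
# Minimal sketch of Average Reduced Frequency (ARF), following the
# Savicky and Hlavacova definition: positions are the token offsets of one
# word in a corpus of corpus_size tokens; the corpus is treated cyclically.
def arf(positions, corpus_size):
    f = len(positions)
    v = corpus_size / f                      # average gap if evenly spread
    pos = sorted(positions)
    # distances between consecutive occurrences, wrapping around the end
    gaps = [pos[i + 1] - pos[i] for i in range(f - 1)]
    gaps.append(corpus_size - pos[-1] + pos[0])
    return sum(min(d, v) for d in gaps) / v

# Evenly spread word: ARF stays at its raw frequency of 4.
print(arf([0, 250, 500, 750], 1000))   # -> 4.0
# The same frequency crammed into one short stretch: ARF drops towards 1.
print(arf([0, 1, 2, 3], 1000))         # -> ~1.01
</PRE>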
<BR>
There are two ways to look at the question - empirical and analytical. My hunch is that the analytical one - developing a (Zipfian) probability model for the corpus and exploring its consequences - will be the more enlightening (if tougher!): empirical approaches are easy to do and will give lots of data, but unless they are compared with the predictions of a theory/model, they won't lead anywhere.<BR>
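<BR>
One cheap way to start down the analytical road is to simulate from an assumed Zipf-Mandelbrot model and see how stable the top-N list is across sample sizes. The sketch below (Python/NumPy) uses entirely arbitrary parameters and is meant only to show the shape of such an exploration, not to stand in for a worked-out model:<BR>
<PRE>
# Illustrative sketch only: draw token samples from a fixed Zipf-Mandelbrot
# distribution and see how often the model's "true" top-N types survive in
# the top-N list of a finite sample. All parameters are arbitrary choices.
import numpy as np

rng = np.random.default_rng(0)
V, N_TOP = 50_000, 2_000                    # vocabulary size, list length
ranks = np.arange(1, V + 1)
p = 1.0 / (ranks + 2.7) ** 1.1              # Zipf-Mandelbrot probabilities
p /= p.sum()
true_top = set(range(N_TOP))                # by construction, ranks 0..N_TOP-1

for sample_size in (1_000_000, 5_000_000, 20_000_000):
    counts = rng.multinomial(sample_size, p)
    sample_top = set(np.argsort(-counts)[:N_TOP])
    overlap = len(true_top.intersection(sample_top)) / N_TOP
    print(f'{sample_size:>12,} tokens: top-{N_TOP} overlap with model {overlap:.1%}')
</PRE>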
<BR>
Adam <BR>
<BR>
</BLOCKQUOTE>
<BLOCKQUOTE TYPE=CITE>
2009/4/1 Mark Davies <<A HREF="mailto:Mark_Davies@byu.edu">Mark_Davies@byu.edu</A>><BR>
<BLOCKQUOTE>
I'm looking for studies that have considered how corpus size affects the accuracy of word frequency listings.<BR>
<BR>
For example, suppose that one uses a 100 million word corpus and a good tagger/lemmatizer to generate a frequency listing of the top 10,000 lemmas in that corpus. If one were to then take just every fifth word or every fiftieth word in the running text of the 100 million word corpus (thus creating a 20 million or a 2 million word corpus), how much would this affect the top 10,000 lemma list? Obviously it's a function of the size of the frequency list as well -- things might not change much in terms of the top 100 lemmas in going from a 20 million word to a 100 million word corpus, whereas they would change much more for a 20,000 lemma list. But that's precisely the type of data I'm looking for.<BR>
<BR>
Thanks in advance,<BR>
<BR>
Mark Davies<BR>
<BR>
============================================<BR>
Mark Davies<BR>
Professor of (Corpus) Linguistics<BR>
Brigham Young University<BR>
(phone) 801-422-9168 / (fax) 801-422-0906<BR>
Web: <A HREF="http://davies-linguistics.byu.edu">davies-linguistics.byu.edu</A><BR>
<BR>
** Corpus design and use // Linguistic databases **<BR>
** Historical linguistics // Language variation **<BR>
** English, Spanish, and Portuguese **<BR>
============================================<BR>
<BR>
<BR>
_______________________________________________<BR>
Corpora mailing list<BR>
<A HREF="mailto:Corpora@uib.no">Corpora@uib.no</A><BR>
<A HREF="http://mailman.uib.no/listinfo/corpora">http://mailman.uib.no/listinfo/corpora</A>
</BLOCKQUOTE>
</BLOCKQUOTE>
<BLOCKQUOTE TYPE=CITE>
<BR>
<BR>
<BR>
-- <BR>
================================================<BR>
Adam Kilgarriff <A HREF="http://www.kilgarriff.co.uk">http://www.kilgarriff.co.uk</A> <BR>
Lexical Computing Ltd <A HREF="http://www.sketchengine.co.uk">http://www.sketchengine.co.uk</A><BR>
Lexicography MasterClass Ltd <A HREF="http://www.lexmasterclass.com">http://www.lexmasterclass.com</A><BR>
Universities of Leeds and Sussex <A HREF="mailto:adam@lexmasterclass.com">adam@lexmasterclass.com</A><BR>
================================================
</BLOCKQUOTE>
</BODY>
</HTML>