[Corpora-List] 155 *billion* (155,000,000,000) word corpus of American English

Mark Davies Mark_Davies at byu.edu
Thu May 12 15:15:47 UTC 2011


>> Is the corpus itself or part of it available for downloading? It would be more useful if we could process the raw text for our own purpose rather than accessing it from a web interface.

As mentioned previously, the underlying n-grams data is freely available from Google at http://ngrams.googlelabs.com/datasets (see http://creativecommons.org/licenses/by/3.0/ re. licensing).
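For anyone grabbing the raw files, they are plain tab-separated text, so they can be streamed line by line without loading everything into memory. A minimal Python sketch for summing counts per n-gram across years (assuming the five-column layout documented with the 2011 release: ngram, year, match_count, page_count, volume_count -- check the datasets page for the exact format of the files you download) might look like:

```python
from collections import defaultdict

def total_counts(lines):
    """Sum match_count per n-gram across all years.

    Assumes each line is tab-separated as:
        ngram \t year \t match_count \t page_count \t volume_count
    (the column layout published with the 2011 datasets release).
    """
    totals = defaultdict(int)
    for line in lines:
        ngram, year, match_count, page_count, volume_count = \
            line.rstrip("\n").split("\t")
        totals[ngram] += int(match_count)
    return dict(totals)

# Hypothetical sample lines in the assumed format:
sample = [
    "corpus linguistics\t1990\t42\t40\t12",
    "corpus linguistics\t1991\t58\t55\t15",
]
print(total_counts(sample))  # {'corpus linguistics': 100}
```

For the full files you would pass in `open(path)` (or `gzip.open(path, "rt")` for the compressed downloads) instead of the sample list.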

One warning, though -- just like the Google web n-grams, these n-grams are pretty big -- 12-15 billion rows of data each for the 3-grams and the 4-grams, and 30+ billion rows for the 5-grams. Even on a relatively powerful machine (twelve 15k SAS HDs in RAID 5, 16 CPUs, 24 GB RAM, SQL Server 2008 R2 Enterprise) it took quite a while to process these.

Best,

Mark D.

============================================
Mark Davies
Professor of (Corpus) Linguistics
Brigham Young University
(phone) 801-422-9168 / (fax) 801-422-0906
Web: http://davies-linguistics.byu.edu
 
** Corpus design and use // Linguistic databases **
** Historical linguistics // Language variation **
** English, Spanish, and Portuguese **
============================================


From: Lushan Han [lushan1 at umbc.edu]
Sent: Thursday, May 12, 2011 9:04 AM
To: Mark Davies
Cc: corpora at uib.no
Subject: Re: [Corpora-List] 155 *billion* (155,000,000,000) word corpus of American English


Hi Mark,

Is the corpus itself or part of it available for downloading? It would be more useful if we could process the raw text for our own purpose rather than accessing it from a web interface.

Best regards,
Lushan Han


On Thu, May 12, 2011 at 10:52 AM, Mark Davies <Mark_Davies at byu.edu> wrote:

We’re pleased to announce a new corpus -- the Google Books (American English) corpus  (http://googlebooks.byu.edu/).

This corpus is based on the American English portion of the Google Books data (see http://ngrams.googlelabs.com and especially http://ngrams.googlelabs.com/datasets). It contains 155 *billion* words (155,000,000,000) in more than 1.3 million books from the 1810s-2000s (including 62 billion words from just 1980-2009).

The corpus has most of the functionality of the other corpora from http://corpus.byu.edu (e.g. COCA, COHA, and our interface to the BNC), including: searching by part of speech, wildcards, and lemma (and thus advanced syntactic searches), synonyms, collocate searches, frequency by decade (tables listing each individual string, or charts for total frequency), comparisons of two historical periods (e.g. collocates of "women" or "music" in the 1800s and the 1900s), and more.

This American English corpus is just one of seven Google Books-based corpora that we hope to create in the next year or two (contingent on funding, which we are applying for in June 2011). If funded, the other corpora will include British English, English from the 1500s-1700s, and corpora of Spanish, French, and German (see the listing at http://ngrams.googlelabs.com/datasets).  Each of these corpora will be based on at least 50 billion words of data, and they should represent a nice addition to existing resources.

The Google Books (American English) corpus is freely available at http://googlebooks.byu.edu, and we hope that it is of value to you in your research and teaching.

============================================
Mark Davies
Professor of (Corpus) Linguistics
Brigham Young University
(phone) 801-422-9168 / (fax) 801-422-0906
Web: http://davies-linguistics.byu.edu

** Corpus design and use // Linguistic databases **
** Historical linguistics // Language variation **
** English, Spanish, and Portuguese **
============================================
_______________________________________________
UNSUBSCRIBE from this page: http://mailman.uib.no/options/corpora
Corpora mailing list
Corpora at uib.no
http://mailman.uib.no/listinfo/corpora


