[Corpora-List] Tokenizer for English Web Corpus
Adriano Ferraresi
a.ferraresi at gmail.com
Tue Mar 13 11:39:46 UTC 2007
Hi everybody,
I am currently embarking on a research project aimed at building a large
corpus of English through automatic web crawls. For this purpose I would
appreciate suggestions for an efficient tokenizer for English, one that
takes into account specific aspects of web writing (such as the treatment
of emoticons, typos, commonly used abbreviations, etc.). Does anyone know
of such a tool?
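To illustrate the kind of behaviour I have in mind, here is a minimal
sketch (a simple regex-based approach in Python; the emoticon and
abbreviation lists are purely illustrative and not taken from any
existing tool):

import re

# Illustrative only: a minimal regex-based tokenizer sketch.
# The emoticon and abbreviation lists are examples, not exhaustive.
EMOTICONS = [r":-\)", r":\)", r":-\(", r":\(", r";-\)", r":D", r":P"]
ABBREVIATIONS = [r"e\.g\.", r"i\.e\.", r"etc\.", r"Mr\.", r"Dr\.", r"U\.S\."]

# Try emoticons/abbreviations first, then word tokens (with optional
# apostrophes), then any remaining non-space punctuation character.
TOKEN_PATTERN = re.compile(
    "|".join(EMOTICONS + ABBREVIATIONS) + r"|\w+(?:'\w+)?|[^\w\s]",
    re.UNICODE,
)

def tokenize(text):
    """Return a list of tokens, keeping emoticons and abbreviations whole."""
    return TOKEN_PATTERN.findall(text)

print(tokenize("Thanks e.g. for your help :-) lol!!!"))
# ['Thanks', 'e.g.', 'for', 'your', 'help', ':-)', 'lol', '!', '!', '!']

A real tool would of course need a much richer inventory and some
handling of typos, but this shows the sort of web-specific treatment
I am after.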
I will post a summary of the answers I (hopefully!) receive.
Thank you.
Adriano Ferraresi