[Corpora-List] New from LDC
Linguistic Data Consortium
ldc at ldc.upenn.edu
Fri May 24 15:40:56 UTC 2013
New publications
- GALE Arabic-English Parallel Aligned Treebank -- Newswire
- MADCAT Phase 2 Training Set
------------------------------------------------------------------------
New publications
(1) GALE Arabic-English Parallel Aligned Treebank -- Newswire
<http://www.ldc.upenn.edu/Catalog/CatalogEntry.jsp?catalogId=LDC2013T10>
(LDC2013T10) was developed by LDC and contains 267,520 tokens of
word-aligned Arabic and English parallel text with treebank annotations. This
material was used as training data in the DARPA GALE (Global Autonomous
Language Exploitation) program. Parallel aligned treebanks are treebanks
annotated with morphological and syntactic structures aligned at the
sentence level and the sub-sentence level. Such data sets are useful for
natural language processing and related fields, including automatic word
alignment system training and evaluation, transfer-rule extraction, word
sense disambiguation, translation lexicon extraction and cultural
heritage and cross-linguistic studies. With respect to machine
translation system development, parallel aligned treebanks may improve
system performance with enhanced syntactic parsers, better rules and
knowledge about language pairs and reduced word error rate.
In this release, the source Arabic data was translated into English.
Arabic and English treebank annotations were performed independently.
The parallel texts were then word aligned. The material in this corpus
corresponds to the Arabic treebanked data appearing in Arabic Treebank:
Part 3 v 3.2 (LDC2010T08
<http://www.ldc.upenn.edu/Catalog/catalogEntry.jsp?catalogId=LDC2010T08>) (ATB)
and to the English treebanked data in English Translation Treebank:
An-Nahar Newswire (LDC2012T02
<http://www.ldc.upenn.edu/Catalog/catalogEntry.jsp?catalogId=LDC2012T02>).
The source data consists of Arabic newswire from the Lebanese
publication An Nahar collected by LDC in 2002. All data is encoded as
UTF-8. A count of files, words, tokens and segments is below.
Language   Files   Words     Tokens    Segments
Arabic     364     182,351   267,520   7,711
Note: Word count is based on the untokenized Arabic source and token
count is based on the ATB-tokenized Arabic source.
The purpose of the GALE word alignment task was to find correspondences
between words, phrases or groups of words in a set of parallel texts.
Arabic-English word alignment annotation consisted of the following tasks:
- Identifying different types of links: translated (correct or
  incorrect) and not translated (correct or incorrect)
- Identifying sentence segments not suitable for annotation, e.g.,
  blank segments, incorrectly segmented segments and segments in
  foreign languages
- Tagging unmatched words attached to other words or phrases
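The link types above can be captured in a simple data structure. The following is an illustrative sketch only; the class and label names are my own and are not part of the LDC annotation format:

```python
from dataclasses import dataclass

# Hypothetical link-type labels mirroring the categories described above:
# translated / not translated, each either correct or incorrect.
LINK_TYPES = {
    "translated-correct",
    "translated-incorrect",
    "not-translated-correct",
    "not-translated-incorrect",
}

@dataclass
class AlignmentLink:
    """One word-alignment link between parallel segments (illustrative)."""
    source_indices: tuple  # positions of Arabic tokens in the segment
    target_indices: tuple  # positions of English tokens in the segment
    link_type: str

    def __post_init__(self):
        if self.link_type not in LINK_TYPES:
            raise ValueError(f"unknown link type: {self.link_type}")

# A one-to-many link: one Arabic token aligned to two English tokens.
link = AlignmentLink(source_indices=(3,),
                     target_indices=(5, 6),
                     link_type="translated-correct")
print(link.link_type)  # -> translated-correct
```

Links between groups of words (rather than single tokens) fall out naturally by putting multiple indices on either side.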
(2) MADCAT Phase 2 Training Set
<http://www.ldc.upenn.edu/Catalog/CatalogEntry.jsp?catalogId=LDC2013T09>
(LDC2013T09) contains all training data created by LDC to support Phase
2 of the DARPA MADCAT (Multilingual Automatic Document Classification
Analysis and Translation) Program. The data in this release consists of
handwritten Arabic documents, scanned at high resolution and annotated
for the physical coordinates of each line and token. Digital transcripts
and English translations of each document are also provided, with the
various content and annotation layers integrated in a single MADCAT XML
output.
The goal of the MADCAT program is to automatically convert foreign text
images into English transcripts. MADCAT Phase 2 data was collected from
Arabic source documents in three genres: newswire, weblog and newsgroup
text. Arabic speaking scribes copied documents by hand, following
specific instructions on writing style (fast, normal, careful), writing
implement (pen, pencil) and paper (lined, unlined). Prior to assignment,
source documents were processed to optimize their appearance for the
handwriting task, which resulted in some original source documents being
broken into multiple pages for handwriting. Each resulting handwritten
page was assigned to up to five independent scribes, using different
writing conditions.
The handwritten, transcribed documents were checked for quality and
completeness, then each page was scanned at a high resolution (600 dpi,
greyscale) to create a digital version of the handwritten document. The
scanned images were then annotated to indicate the physical coordinates
of each line and token. Explicit reading order was also labeled, along
with any errors produced by the scribes when copying the text. The
annotation results in GEDI XML output files (gedi.xml), which include
ground truth annotations and source transcripts.
The final step was to produce a unified data format that takes multiple
data streams and generates a single MADCAT XML output file with all
required information. The resulting madcat.xml file has these distinct
components: (1) a text layer that consists of the source text,
tokenization and sentence segmentation, (2) an image layer that consists
of bounding boxes, (3) a scribe demographic layer that consists of
scribe ID and partition (train/test) and (4) a document metadata layer.
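The four-layer structure described above can be sketched with Python's standard XML tooling. Note that the element and attribute names below are invented for demonstration; the actual MADCAT XML schema shipped in the release may differ:

```python
import xml.etree.ElementTree as ET

# Illustrative four-layer document, following the description above.
# All tag and attribute names here are hypothetical, not the real schema.
root = ET.Element("madcat_document", id="doc_0001")

# (1) text layer: source text with tokenization and segmentation
text = ET.SubElement(root, "text_layer")
seg = ET.SubElement(text, "segment", id="seg_1")
ET.SubElement(seg, "token", id="tok_1").text = "..."  # placeholder token

# (2) image layer: bounding boxes tied to tokens on the scanned page
image = ET.SubElement(root, "image_layer")
ET.SubElement(image, "bbox", token_ref="tok_1",
              x="120", y="340", w="85", h="40")

# (3) scribe demographic layer: scribe ID and train/test partition
ET.SubElement(root, "scribe_layer", scribe_id="s017", partition="train")

# (4) document metadata layer
ET.SubElement(root, "metadata_layer", genre="newswire")

# Round-trip and inspect the layers.
parsed = ET.fromstring(ET.tostring(root))
layers = [child.tag for child in parsed]
print(layers)  # -> ['text_layer', 'image_layer', 'scribe_layer', 'metadata_layer']
```

Keeping each annotation stream in its own layer, with cross-references such as `token_ref`, is one way to merge independently produced data streams into a single file, which is the role the release's madcat.xml plays.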
This release includes 27,814 annotation files in both GEDI XML and
MADCAT XML formats (gedi.xml and madcat.xml) along with their
corresponding scanned image files in TIFF format.
------------------------------------------------------------------------
--
Ilya Ahtaridis
Membership Coordinator
--------------------------------------------------------------------
Linguistic Data Consortium          Phone: 1 (215) 573-1275
University of Pennsylvania          Fax: 1 (215) 573-2175
3600 Market St., Suite 810          ldc at ldc.upenn.edu
Philadelphia, PA 19104 USA          http://www.ldc.upenn.edu