[Corpora-List] News from LDC

Linguistic Data Consortium ldc at ldc.upenn.edu
Mon Aug 23 17:30:34 UTC 2010


In this newsletter:

- Fall 2010 LDC Data Scholarship Program

- New Providing Guidelines

New publications:

LDC2010S05
- Asian Elephant Vocalizations

LDC2010T14
- NIST 2005 Open Machine Translation (OpenMT) Evaluation

LDC2010V02
- TRECVID 2006 Keyframes

------------------------------------------------------------------------

*Fall 2010 LDC Data Scholarship Program*

Applications are now being accepted through September 15, 2010, for the
Fall 2010 LDC Data Scholarship program! The LDC Data Scholarship
program provides university students with access to LDC data at no
cost. Data scholarships are offered twice a year, corresponding to the
Fall and Spring semesters, beginning with the Fall 2010 semester
(September - December 2010). Several students can be awarded
scholarships during each program cycle. The program is open to
students pursuing undergraduate or graduate studies at an accredited
college or university. LDC Data Scholarships are not restricted to any
particular field of study; however, students must demonstrate a
well-developed research agenda and a bona fide inability to pay.

The application consists of two parts:

(1) *Data Use Proposal*. Applicants must submit a proposal describing
their intended use of the data. The proposal must contain the
applicant's name, university, and field of study. It should state
which data the student plans to use and include a description of the
research project. Students are advised to consult the LDC Corpus
Catalog <http://www.ldc.upenn.edu/Catalog/index.jsp> for a complete
list of data distributed by LDC. Note that a handful of LDC corpora
are restricted to members of the Consortium.

(2) *Letter of Support*. Applicants must submit one letter of support
from their thesis advisor or department chair. The letter must verify 
the student's need for data and confirm that the department or 
university lacks the funding to pay the full Non-member Fee for the data.

For further information on application materials and program rules, 
please visit the LDC Data Scholarship 
<http://www.ldc.upenn.edu/About/scholarships.html> page. 

Students can email their applications to the LDC Data Scholarship 
program <mailto:datascholarships at ldc.upenn.edu>. Decisions will be sent 
by email from the same address.

The deadline for the Fall 2010 program cycle is September 15, 2010.

Track the LDC Data Scholarship program at WikiCFP <http://www.wikicfp.com/>!


*New Providing Guidelines*

LDC is pleased to announce that our Providing
<http://www.ldc.upenn.edu/Providing/> page has recently been updated
and enhanced with detailed guidelines for submitting corpora and other
resources for publication by LDC. The new Providing page describes the
entire process of sharing data through LDC, from the initial
publication inquiry to delivery of the data for publication. LDC's
preferred submission formats for video, audio, and text data,
preferred directory structures, and best practices for file naming
conventions are covered in depth. The page also includes information
on providing adequate metadata and documentation for your data set.

Researchers interested in publishing data through LDC are invited to
use the Publication Inquiry Form
<http://www.ldc.upenn.edu/Providing/subform.html>. The inquiry form
will prompt you for basic information about your data, including
title, author, language, corpus size and format, and a description.
Once your inquiry has been received, our External Relations staff will
assist you through each step of the publication process.

Why share your data through LDC?  Resources distributed by LDC reach a 
global audience. All published resources appear in LDC's online Catalog 
<http://www.ldc.upenn.edu/Catalog>, which is accessed daily by users 
worldwide. LDC's monthly newsletter keeps the community abreast of all 
new publications, and its reach ensures the attention of interested 
researchers. LDC members receive copies of the corpora as part of their 
membership benefits. LDC's Membership structure therefore guarantees 
your data greater exposure to major organizations working in human 
language technologies and related fields.

The LDC Corpus Catalog contains a variety of resources in many
languages and formats, ranging from written text to speech and video.
Speech and video data may derive from broadcast collections,
interviews, and recordings of telephone conversations. Text data comes
from a variety of sources, including newswire, document archives and
anthologies, and the World Wide Web. LDC also publishes dictionaries
and lexicons in a variety of languages.


*New Publications*

(1) Asian Elephant Vocalizations
<http://www.ldc.upenn.edu/Catalog/CatalogEntry.jsp?catalogId=LDC2010S05>
consists of 57.5 hours of audio recordings of vocalizations by Asian
Elephants (/Elephas maximus/) in the Uda Walawe National Park, Sri
Lanka, of which 31.25 hours have been annotated. The collection and
annotation of the recordings was conducted and overseen by Shermin de
Silva of the University of Pennsylvania Department of Biology; the
voice-recorded field notes are by Shermin de Silva and Ashoka
Ranjeewa. The recordings primarily feature adult female and juvenile
elephants. Existing knowledge of acoustic communication in elephants
is based primarily on the African species (/Loxodonta africana/ and
/Loxodonta cyclotis/); there has been comparatively little study of
communication in Asian elephants.

This corpus is intended to enable researchers in acoustic communication 
to evaluate acoustic features and repertoire diversity of the recorded 
population. Of particular interest is whether there are regional
dialects that differ among Asian elephant populations in the wild and
in captivity. A second interest is whether structural commonalities
exist between this and other species that shed light on underlying
social and ecological factors shaping communication systems.

Data were collected from May 2006 to December 2007. Observations were
performed by vehicle during park hours, from 0600 to 1830 h. Most
recordings of vocalizations were made using an Earthworks QTC50
microphone shock-mounted inside a Rycote Zeppelin windshield,
connected to a Fostex FR-2 field recorder (24-bit sample size, 48 kHz
sampling rate). Recordings were initiated at the start of a call with
a 10-s pre-record buffer so that the entire call was captured and the
loss of rare vocalizations minimized. This was made possible by the
'pre-record' feature of the Fostex, which records continuously but
only saves the file, with a 10-second lead, once the 'record' button
is pressed.
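
The pre-record behavior amounts to a ring buffer that always holds the
last 10 seconds of audio. The following minimal Python sketch
illustrates the idea under the assumption of a sample-by-sample audio
stream; the class and names are ours for illustration, not part of the
corpus or the recorder's firmware:

    from collections import deque

    SAMPLE_RATE = 48000      # Hz, matching the Fostex FR-2 setting above
    PRE_RECORD_SECONDS = 10  # lead retained before 'record' is pressed

    class PreRecordBuffer:
        """Keep the most recent 10 s of audio in a ring buffer; when
        recording starts, flush that lead into the saved stream so
        the onset of a call is never lost."""

        def __init__(self):
            self.ring = deque(maxlen=SAMPLE_RATE * PRE_RECORD_SECONDS)
            self.saved = []
            self.recording = False

        def push(self, sample):
            if self.recording:
                self.saved.append(sample)
            else:
                self.ring.append(sample)  # oldest samples fall off

        def start(self):
            self.saved.extend(self.ring)  # flush the 10-second lead
            self.ring.clear()
            self.recording = True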

Certain audio files were manually annotated, to the extent possible,
with call type, caller ID, and miscellaneous notes. For call type
annotation, there are three main categories of vocalizations: those
that show clear fundamental frequencies (periodic), those that do not
(aperiodic), and those that show periodic and aperiodic regions as at
least two distinct segments. Calls were identified as belonging to one
of 14 categories. Annotations were made using the Praat TextGrid
Editor <http://www.fon.hum.uva.nl/praat/manual/TextGridEditor.html>,
which allows spectral analysis and annotation of audio files with
overlapping events. Annotations were based on written and
audio-recorded field notes and, in some cases, video recordings.
Miscellaneous notes are free-form and include such information as
distance from source, certainty of caller identity, and accompanying
behavior.
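
For those who want to process the annotations programmatically, here
is a minimal Python sketch that extracts labeled intervals from a
long-format Praat TextGrid with a regular expression. It is an
illustration only, not a tool shipped with the corpus, and it assumes
the long ("full") text format:

    import re

    # Matches each interval block in a long-format Praat TextGrid.
    INTERVAL_RE = re.compile(
        r'intervals \[\d+\]:\s*'
        r'xmin = ([\d.]+)\s*'
        r'xmax = ([\d.]+)\s*'
        r'text = "([^"]*)"'
    )

    def read_intervals(path, encoding="utf-8"):
        """Return (start, end, label) for every labeled interval.
        Praat sometimes writes UTF-16; pass encoding='utf-16' then."""
        with open(path, encoding=encoding) as f:
            grid = f.read()
        return [
            (float(start), float(end), label)
            for start, end, label in INTERVAL_RE.findall(grid)
            if label.strip()  # skip unlabeled stretches
        ]

    # Example: list the annotated call segments in one file.
    # for start, end, call_type in read_intervals("file.TextGrid"):
    #     print(call_type, start, end)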



(2)  NIST 2005 Open Machine Translation (OpenMT) Evaluation 
<http://www.ldc.upenn.edu/Catalog/CatalogEntry.jsp?catalogId=LDC2010T14> 
is a package containing source data, reference translations, and scoring 
software used in the NIST 2005 OpenMT evaluation. It is designed to help 
evaluate the effectiveness of machine translation systems. The package 
was compiled and scoring software was developed by researchers at NIST, 
making use of newswire source data and reference translations collected 
and developed by LDC.

The objective of the NIST OpenMT evaluation series is to support 
research in, and help advance the state of the art of, machine 
translation (MT) technologies -- technologies that translate text 
between human languages. Input may include all forms of text. The goal 
is for the output to be an adequate and fluent translation of the 
original. The 2005 task was to evaluate translation from Chinese to
English and from Arabic to English. Additional information about these 
evaluations may be found at the NIST Open Machine Translation (OpenMT) 
Evaluation web site <http://www.itl.nist.gov/iad/mig/tests/mt/>.

This evaluation kit includes a single Perl script (mteval-v11a.pl) that
may be used to produce a translation quality score for one (or more) MT 
systems. The script works by comparing the system output translation 
with a set of (expert) reference translations of the same source text. 
Comparison is based on finding sequences of words in the reference 
translations that match word sequences in the system output translation.
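
The mteval script implements the BLEU family of n-gram co-occurrence
scores. As a rough illustration of the matching step only, and not a
reimplementation of mteval-v11a.pl, a clipped n-gram precision against
multiple references might look like this in Python:

    from collections import Counter

    def ngrams(tokens, n):
        """Counts of all contiguous n-grams in a token list."""
        return Counter(tuple(tokens[i:i + n])
                       for i in range(len(tokens) - n + 1))

    def ngram_precision(hypothesis, references, n):
        """Clipped n-gram precision of one system output against
        several reference translations."""
        hyp_counts = ngrams(hypothesis.split(), n)
        # A matched n-gram is capped at its maximum count in any
        # single reference ("clipping").
        max_ref = Counter()
        for ref in references:
            for gram, count in ngrams(ref.split(), n).items():
                max_ref[gram] = max(max_ref[gram], count)
        matched = sum(min(count, max_ref[gram])
                      for gram, count in hyp_counts.items())
        total = sum(hyp_counts.values())
        return matched / total if total else 0.0

    # With four references per segment, as in this test set:
    # ngram_precision(system_output, [ref1, ref2, ref3, ref4], n=2)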

This corpus consists of 100 Arabic newswire documents, 100 Chinese 
newswire documents, and a corresponding set of four separate human 
expert reference translations. Source text for both languages was 
collected from Agence France-Presse and Xinhua News Agency in December 
2004 and January 2005.

For each language, the test set consists of two files: a source and a 
reference file. Each reference file contains four independent 
translations of the data set. The evaluation year, source language, test 
set, version of the data, and source vs. reference file are reflected in 
the file name.


(3)  TRECVID 2006 Keyframes 
<http://www.ldc.upenn.edu/Catalog/CatalogEntry.jsp?catalogId=LDC2010V02> 
was developed as a collaborative effort between researchers at LDC, NIST 
<http://www.nist.gov/>, LIMSI-CNRS <http://www.limsi.fr/>, and Dublin 
City University <http://www.dcu.ie/>. TREC Video Retrieval Evaluation
(TRECVID) is sponsored by the National Institute of Standards and 
Technology (NIST) to promote progress in content-based retrieval from 
digital video via open, metrics-based evaluation. The keyframes in this 
release were extracted for use in the NIST TRECVID 2006 Evaluation.

TRECVID is a laboratory-style evaluation that attempts to model real
world situations or significant component tasks involved in such
situations. In 2006, TRECVID completed a two-year cycle on English,
Arabic, and Chinese news video. There were three system tasks and
associated tests:

    * shot boundary determination
    * high-level feature extraction
    * search (interactive, manually-assisted, and/or fully automatic)

For a detailed description of the TRECVID Evaluation Tasks, please
refer to the NIST TRECVID 2006 Evaluation Description
<http://www-nlpir.nist.gov/projects/tv2006/>.

The video stills that compose this corpus are drawn from approximately 
158.6 hours of English, Arabic, and Chinese language video data 
collected by LDC from NBC, CNN, MSNBC, New Tang Dynasty TV, Phoenix TV, 
Lebanese Broadcasting Corp., and China Central TV.

Shots are fundamental units of video, useful for higher-level 
processing. To create the master list of shots, the video was segmented. 
The results of this pass are called subshots. Because the master shot 
reference is designed for use in manual assessment, a second pass over 
the segmentation was made to create the master shots of at least 2 
seconds in length. These master shots are the ones used in submitting 
results for the feature and search tasks in the evaluation. In the 
second pass, starting at the beginning of each file, the subshots were 
aggregated, if necessary, until the current shot was at least 2 seconds 
in duration, at which point the aggregation began anew with the next 
subshot.
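
A minimal sketch of that second pass, assuming each subshot is a
(start, end) time pair in seconds (the variable names are ours, and
the handling of a short tail at the end of a file is not specified
above):

    MIN_SHOT_SECONDS = 2.0  # master shots must be at least this long

    def aggregate_subshots(subshots):
        """Merge consecutive subshots into master shots of at least
        2 seconds, restarting the aggregation after each master shot."""
        master_shots = []
        current_start = None
        for start, end in subshots:
            if current_start is None:
                current_start = start
            if end - current_start >= MIN_SHOT_SECONDS:
                master_shots.append((current_start, end))
                current_start = None  # aggregation begins anew
        return master_shots

    # aggregate_subshots([(0.0, 0.8), (0.8, 1.5), (1.5, 2.4), (2.4, 5.0)])
    # -> [(0.0, 2.4), (2.4, 5.0)]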

The keyframes were selected by going to the middle frame of the shot,
then searching left and right of that frame to locate the nearest
I-frame. That frame then became the keyframe and was extracted.
Keyframes have been provided at both the subshot (NRKF) and master
shot (RKF) levels.
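
The selection is an outward search from the shot's middle frame. Here
is a minimal Python sketch under the assumption that frames are
integer indices and the file's I-frame positions are already known
(names are illustrative):

    def select_keyframe(shot_start, shot_end, iframe_indices):
        """Start from the middle frame of the shot and scan outward,
        alternating left and right, for the nearest I-frame."""
        middle = (shot_start + shot_end) // 2
        iframes = set(iframe_indices)
        for offset in range(shot_end - shot_start + 1):
            for candidate in (middle - offset, middle + offset):
                if shot_start <= candidate <= shot_end \
                        and candidate in iframes:
                    return candidate
        return None  # no I-frame falls inside the shot

    # select_keyframe(0, 100, iframe_indices=[12, 48, 96]) -> 48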




------------------------------------------------------------------------


Ilya Ahtaridis
Membership Coordinator
--------------------------------------------------------------------
Linguistic Data Consortium                     Phone: (215) 573-1275
University of Pennsylvania                       Fax: (215) 573-2175
3600 Market St., Suite 810                         ldc at ldc.upenn.edu
Philadelphia, PA 19104 USA                  http://www.ldc.upenn.edu
