6.852, FYI: Cognitive Science Technical Reports



---------------------------------------------------------------------------
LINGUIST List:  Vol-6-852. Fri Jun 23 1995. ISSN: 1068-4875. Lines:  206
 
Subject: 6.852, FYI: Cognitive Science Technical Reports
 
Moderators: Anthony Rodrigues Aristar: Texas A&M U. <aristar at tam2000.tamu.edu>
            Helen Dry: Eastern Michigan U. <hdry at emunix.emich.edu>
 
Assoc. Editor: Ljuba Veselinova <lveselin at emunix.emich.edu>
Asst. Editors: Ron Reck <rreck at emunix.emich.edu>
               Ann Dizdar <dizdar at tam2000.tamu.edu>
               Annemarie Valdez <avaldez at emunix.emich.edu>
 
Editor for this issue: hdry at emunix.emich.edu (Helen Dry)
 
---------------------------------Directory-----------------------------------
1)
Date:  Mon, 24 Apr 1995 12:10:43 EDT
From:  jbkerper at central.cis.upenn.edu (Jodi Kerper)
Subject:  Cognitive Science Technical Reports
 
---------------------------------Messages------------------------------------
1)
Date:  Mon, 24 Apr 1995 12:10:43 EDT
From:  jbkerper at central.cis.upenn.edu (Jodi Kerper)
Subject:  Cognitive Science Technical Reports
 
The following new technical reports are now available from the Institute for
Research in Cognitive Science:
 
Probabilistic Matching of Brain Images
J.C. Gee
L. LeBriquer
C. Barillot
D.R. Haynor
IRCS-95-07
$2.20
 
Image matching has emerged as an important area of investigation in medical
image analysis.  In particular, much attention has been focused on the atlas
problem, in which a template representing the structural anatomy of the human
brain is deformed to match anatomic brain images from a given individual.  The
problem is made difficult because there are important differences in both the
gross and local morphology of the brain among normal individuals.  We have
formulated the image matching problem under a Bayesian framework.  The Bayesian
methodology facilitates a principled approach to the development of a matching
model.  Of special interest is its capacity to deal with uncertainty in the
estimates, a potentially important but generally ignored aspect of the
solution.  In the construction of a reference system for the human brain, the
Bayesian approach is well suited to the task of modeling variation in
morphology.  Statistical information about morphological variability,
accumulated over past samples, can be formally introduced into the problem
formulation to guide the matching or normalization of future data sets.
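
In schematic terms (a generic sketch, not necessarily the exact model developed
in the report), such a Bayesian formulation treats the deformation u that maps
the template onto the subject image as the unknown quantity and seeks its
maximum a posteriori estimate:

P(u | image, template)  is proportional to  P(image | template, u) P(u)

u* = argmax over u of  P(image | template, u) P(u)

Here the likelihood P(image | template, u) scores how well the deformed
template matches the subject image, the prior P(u) carries the accumulated
statistical knowledge about morphological variability, and the spread of the
posterior around u* supplies the uncertainty estimates mentioned above.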
 
Bayesian Approach to the Brain Image Matching Problem
J.C. Gee
L. LeBriquer
C. Barillot
D.R. Haynor
R. Bajcsy
IRCS-95-08
$1.80
 
The application of image matching to the problem of localizing structural
anatomy in images of the human brain forms the specific aim of our work.  The
interpretation of such images is a difficult task for human observers because
of the many ways in which the identity of a given structure can be obscured.
Our approach is based on the assumption that a common topology underlies the
anatomy of normal individuals.  To the degree that this assumption holds, the
localization problem can be solved by determining the mapping from the anatomy
of a given individual to some referential atlas of cerebral anatomy.  Many
previous approaches have relied on a physical interpretation of this
mapping.  In this paper, we examine a more general Bayesian formulation of the
image matching problem and demonstrate the approach on two-dimensional magnetic
resonance images.
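
As a concrete but deliberately toy illustration of this style of matching, the
following Python sketch estimates a 1-D displacement field by maximum a
posteriori (equivalently, minimum energy) matching, using a sum-of-squared
differences likelihood and a smoothness prior.  The data term, prior, and
optimizer are generic stand-ins and are not taken from the report itself.

import numpy as np

def warp(template, u):
    """Resample a 1-D template at positions x + u(x) (linear interpolation)."""
    x = np.arange(len(template), dtype=float)
    return np.interp(x + u, x, template)

def neg_log_posterior(u, template, image, lam):
    """-log posterior, up to a constant: SSD data term plus smoothness prior."""
    data = np.sum((warp(template, u) - image) ** 2)   # -log likelihood (SSD)
    prior = lam * np.sum(np.diff(u) ** 2)             # -log prior: prefer smooth u
    return data + prior

def match(template, image, lam=1.0, step=1e-3, iters=500):
    """Estimate the displacement field u by plain gradient descent
    (numerical gradients, chosen for clarity rather than speed)."""
    u = np.zeros(len(template))
    eps = 1e-4
    for _ in range(iters):
        grad = np.zeros_like(u)
        for i in range(len(u)):
            up, down = u.copy(), u.copy()
            up[i] += eps
            down[i] -= eps
            grad[i] = (neg_log_posterior(up, template, image, lam)
                       - neg_log_posterior(down, template, image, lam)) / (2 * eps)
        u = u - step * grad
    return u

# Example: two slightly shifted Gaussian bumps standing in for template and image.
xs = np.linspace(0.0, 1.0, 50)
template = np.exp(-((xs - 0.50) / 0.1) ** 2)
image = np.exp(-((xs - 0.52) / 0.1) ** 2)
u_hat = match(template, image)

For real brain images the displacement field is two- or three-dimensional and
both the likelihood and the prior are far richer; the sketch only shows where
the two Bayesian ingredients enter.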
 
XTAG System - A Wide Coverage Grammar for English
Christy Doran
Dania Egedi
Beth Ann Hockey
B. Srinivas
Martin Zaidel
IRCS-95-09
$1.03
 
This paper presents the XTAG system, a grammar development tool based on the
Tree Adjoining Grammar (TAG) formalism that includes a wide-coverage syntactic
grammar for English. The various components of the system are discussed and
preliminary evaluation results from the parsing of various corpora are given.
Results from the comparison of XTAG against the IBM statistical parser and the
Alvey Natural Language Tool parser are also given.
 
Disambiguation of Super Parts of Speech (or Supertags): Almost Parsing
Aravind K. Joshi
B. Srinivas
IRCS-95-10
$1.28
 
In a lexicalized grammar formalism such as Lexicalized Tree-Adjoining Grammar
(LTAG), each lexical item is associated with at least one elementary structure
(supertag) that localizes syntactic and semantic dependencies. Thus a parser
for a lexicalized grammar must search a large set of supertags to choose the
right ones to combine for the parse of the sentence. We present techniques for
disambiguating supertags using local information such as lexical preference and
local lexical dependencies. The similarity between LTAG and Dependency grammars
is exploited in the dependency model of supertag disambiguation. The
performance results for several supertag disambiguation models (unigram,
trigram, and dependency-based) are presented.
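
To give a hypothetical flavor of the n-gram style of disambiguation (the
lexicon, supertag names, and probabilities below are invented, and a bigram
model is used purely for brevity; the report itself evaluates unigram, trigram,
and dependency-based models), a Viterbi search over candidate supertag
sequences can be sketched in Python as follows:

import math

# Invented lexicon: candidate supertags per word, with log probabilities.
LEXICON = {
    "the":   {"Det": 0.0},
    "price": {"N_subj": math.log(0.7), "N_obj": math.log(0.3)},
    "rose":  {"V_intrans": math.log(0.6), "N_obj": math.log(0.4)},
}

# Invented supertag bigram model: log P(tag | previous tag).
BIGRAMS = {
    ("<s>", "Det"):          math.log(0.9),
    ("Det", "N_subj"):       math.log(0.6),
    ("Det", "N_obj"):        math.log(0.4),
    ("N_subj", "V_intrans"): math.log(0.8),
    ("N_obj", "V_intrans"):  math.log(0.5),
}

UNSEEN = math.log(1e-6)   # back-off score for unseen bigrams

def viterbi(words):
    """Return the most probable supertag sequence under the toy bigram model."""
    # beams maps the most recent supertag to (best log probability, tag sequence).
    beams = {"<s>": (0.0, [])}
    for word in words:
        new_beams = {}
        for tag, lex_lp in LEXICON[word].items():
            best = None
            for prev, (lp, seq) in beams.items():
                score = lp + BIGRAMS.get((prev, tag), UNSEEN) + lex_lp
                if best is None or score > best[0]:
                    best = (score, seq + [tag])
            new_beams[tag] = best
        beams = new_beams
    return max(beams.values())[1]

print(viterbi(["the", "price", "rose"]))   # -> ['Det', 'N_subj', 'V_intrans']

Once each word has been assigned its correct supertag, much of the parser's
remaining work is simply to combine the chosen elementary structures, which is
what motivates the term "almost parsing".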
 
A Freely Available Syntactic Lexicon for English
Dania Egedi
Patrick Martin
IRCS-95-11
$1.18
 
This paper presents a syntactic lexicon for English that was originally derived
from the Oxford Advanced Learner's Dictionary and the Oxford Dictionary of
Current Idiomatic English, and then modified and augmented by hand. There are
more than 37,000 syntactic entries covering all eight parts of speech. An X-windows
based tool is available for maintaining the lexicon and performing searches. C
and Lisp hooks are also available so that the lexicon can be easily utilized by
parsers and other programs.
 
Lexicalization and Grammar Development
B. Srinivas
Dania Egedi
Christy Doran
Tilman Becker
IRCS-95-12
$1.18
 
In this paper we present a fully lexicalized grammar formalism as a
particularly attractive framework for the specification of natural language
grammars. We discuss in detail Feature-based, Lexicalized Tree Adjoining
Grammars (FB-LTAGs), a representative of the class of lexicalized grammars. We
illustrate the advantages of lexicalized grammars in various contexts of
natural language processing, ranging from wide-coverage grammar development to
parsing and machine translation. We also present a method for compact and
efficient representation of lexicalized trees.
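
As a minimal, hypothetical illustration of what lexicalization means here (the
classes and trees below are invented and vastly simpler than those of XTAG),
each word anchors its own elementary tree, and trees are combined by operations
such as substitution:

from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Node:
    label: str                          # syntactic category, e.g. "NP", "VP"
    children: List["Node"] = field(default_factory=list)
    anchor: Optional[str] = None        # lexical item anchoring the elementary tree
    subst: bool = False                 # True if this leaf is a substitution site

def substitute(tree: Node, target_label: str, initial: Node) -> bool:
    """Splice an initial tree into the first matching substitution site."""
    for i, child in enumerate(tree.children):
        if child.subst and child.label == target_label:
            tree.children[i] = initial
            return True
        if substitute(child, target_label, initial):
            return True
    return False

# Elementary tree anchored by "sleeps":  S -> NP(substitution site) VP(V "sleeps")
sleeps = Node("S", [Node("NP", subst=True),
                    Node("VP", [Node("V", anchor="sleeps")])])

# Initial tree anchored by "John":  NP -> N "John"
john = Node("NP", [Node("N", anchor="John")])

substitute(sleeps, "NP", john)   # derived tree for "John sleeps"

Feature structures and the adjunction operation, both central to FB-LTAG, are
omitted from this sketch.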
 
A Processing Model for Free Word Order Languages
Owen Rambow
Aravind K. Joshi
IRCS-95-13
$2.00
 
Like many verb-final languages, German displays considerable word-order
freedom: there is no syntactic constraint on the ordering of the nominal
arguments of a verb, as long as the verb remains in final position. This effect
is referred to as "scrambling", and is interpreted in transformational
frameworks as leftward movement of the arguments. Furthermore, arguments from
an embedded clause may move out of their clause; this effect is referred to as
"long-distance scrambling". While scrambling has recently received
considerable attention in the syntactic literature, the status of long-distance
scrambling has only rarely been addressed. The reason for this is the
problematic status of the data: not only is long-distance scrambling highly
dependent on pragmatic context, it is also strongly subject to degradation due
to processing constraints. As in the case of center-embedding, it is not
immediately clear whether the observed unacceptability of highly complex
sentences reflects grammatical restrictions, or whether the competence grammar
places no restrictions on scrambling (so that all such sentences are in fact
grammatical) and the unacceptability of some (or most) of the grammatically
possible word orders is due to processing limitations. In this paper, we argue
for the second view by presenting a processing model for German.
 
 
****************************************************************************
How to access reports:
 
The reports are available in bound form for the prices listed above, or may be
obtained electronically at no charge.
 
To obtain a compressed postscript copy of the report, open an anonymous ftp
session on
 
ftp.cis.upenn.edu
path: pub/ircs/technical-reports
 
The files are named according to their number.  For example, Report 95-01 is
stored as 95-01.ps.Z, 95-02 is stored as 95-02.ps.Z, etc.
 
If you are using ftp, set the transfer mode to binary before downloading the
file.  To get a copy of Report 95-01, you would type:
 
binary
get 95-01.ps.Z
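
For those who prefer to script the transfer, a short Python sketch using the
standard ftplib module (the report number is only an example) would be:

from ftplib import FTP

# Anonymous FTP download of one compressed PostScript report.
ftp = FTP("ftp.cis.upenn.edu")
ftp.login()                                        # anonymous login
ftp.cwd("pub/ircs/technical-reports")
with open("95-01.ps.Z", "wb") as out:
    ftp.retrbinary("RETR 95-01.ps.Z", out.write)   # binary transfer, as above
ftp.quit()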
 
You can also obtain files through electronic mail.  Send a mail message to
ircsserv at ftp.cis.upenn.edu.  The message should read "send technical-reports
filename". You will receive the compressed postscript file in reply.
 
Requests for bound copies should be sent to the address listed below and
should include a check for the price of the desired report.  Checks should be
made payable to "Trustees of the University of Pennsylvania."
 
 
Jodi Kerper             jbkerper at central.cis.upenn.edu
 
Institute for Research in Cognitive Science
3401 Walnut Street, Suite 400C
Philadelphia, PA  19104-6228
 
 
------------------------------------------------------------------------
LINGUIST List: Vol-6-852.


