The syntax-semantics correspondence and underspecification
Carl Pollard
pollard at ling.ohio-state.edu
Tue Jul 6 09:55:26 UTC 2004
Hi Shalom,
>
I am puzzled by Carl's recent replies to Tibor. Carl, you appear to
have returned to a classical Montague (PTQ) view of the
syntax-semantics correspondence in which differences in semantic
representation entail distinctions in syntactic structure. On this
approach variant scope readings are obtained from alternative
syntactic derivations/structures. Have you, then, given up
underspecified semantic representations of the MRS (or related)
variety which can be resolved to different interpretations but
correspond to a single syntactic source? If so, then what of the
advantages of underspecified representations, such as the avoidance of
spurious, unmotivated syntactic ambiguity, greater
computational efficiency in the interpretation process by not
generating k! syntactic-semantic correspondences for k scope-taking
elements in a sentence, etc.? If you have not given up underspecified
representations, how are they accommodated in the Lambek calculus
type grammar that you have sketched?
>>
The view I sketched is indeed neo-Montagovian, not the way he actually
did things in PTQ, but the way he mentioned (in passing, in PTQ) that
he COULD have done it. The way he ACTUALLY did it was to make
translation a RELATION between strings and IL expressions, but he
pointed out that he COULD just as well have made it a FUNCTION from
analysis trees (which can be considered tectostructures) to IL
expressions.
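Just to make the contrast concrete, here's a toy rendering in Python
(an invented mini-fragment and made-up IL notation, nothing like the
actual PTQ fragment):

# Translation as a RELATION: one string paired with several IL
# expressions. Everything here is invented for illustration.
translate_rel = {
    "every linguist read some paper": {
        "every x [linguist(x)] (some y [paper(y)] (read(x,y)))",
        "some y [paper(y)] (every x [linguist(x)] (read(x,y)))",
    },
}

# Translation as a FUNCTION: each analysis tree (tectostructure), here
# abbreviated to a scope-fixing label, gets exactly one IL expression.
translate_fun = {
    "every>some": "every x [linguist(x)] (some y [paper(y)] (read(x,y)))",
    "some>every": "some y [paper(y)] (every x [linguist(x)] (read(x,y)))",
}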
But then as always one gets a choice about how to handle the
interpretive multiplicities (I resist calling them ambiguities, which
implies making the first of the following choices): either (1) give
different scopings the same tectostructure and have interpretations
themselves be underspecified entities a la MRS etc., or (2) have
disambiguating tectogrammatical operations (such as the scope
operators used in some kinds of CG) that make a semantic difference
but not a phenostructural one. The former has the well-known
computational advantages you allude to, but I have a hunch the latter
(the "unmotivated spurious syntactic ambiguity" approach) might make
linguistic description easier.
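To put option (1) in toy form, here is a little Python sketch (my own
invented encoding, not the actual MRS formalism or any implementation
of it): the underspecified representation is stored just once, and the
k! scopings only appear at resolution time, one per permutation of the
scope takers.

from itertools import permutations

# One underspecified representation for "every linguist read some
# paper": the scope takers plus a nuclear scope, with the scope order
# left open. (Toy encoding, invented for illustration.)
QUANTS = [("every", "x", "linguist(x)"), ("some", "y", "paper(y)")]
BODY = "read(x,y)"

def resolutions(quants, body):
    # Enumerate the fully scoped readings that the single
    # underspecified structure resolves to: one per permutation.
    for order in permutations(quants):
        formula = body
        for det, var, restr in reversed(order):
            formula = f"{det} {var} [{restr}] ({formula})"
        yield formula

for reading in resolutions(QUANTS, BODY):
    print(reading)
# every x [linguist(x)] (some y [paper(y)] (read(x,y)))
# some y [paper(y)] (every x [linguist(x)] (read(x,y)))

Under option (2), each of those readings would instead come from its
own tectogrammatical derivation.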
[Of course categorial grammarians don't usually consider ambiguity of
this kind unmotivated and spurious, since they consider the main point
of tectostructure to be to drive semantic composition. Lambek even
goes so far as to say that Curry's tectostructure IS semantics not
syntax, but this seems to me a misreading of Curry.]
An analogous choice arises in the tecto-to-pheno interpretation: you
can either (1) put less stuff into tecto and make phenostructures
underspecified, or (2) put more stuff into tecto to make the relation
to (fully specified) phenostructures a function (the usual categorial
approach). The Dowty-style minimalism we've been discussing
is of the former kind: the multisets of words (and frozen word
sequences) are the analogs of underspecified semantic representations,
and the LP rules are the analogs of "handle constraints" (or whatever
you want to call the constraints that embody the options for resolving
an underspecified semantic representation).
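Here is the analogy in the same toy style (an invented mini-example,
not Dowty's actual fragment): the multiset of words is the
underspecified phenostructure, and the LP rules, like handle
constraints, license some resolutions and rule out the rest.

from itertools import permutations

# Underspecified phenostructure: just a multiset (here, a list) of
# words. LP constraints are precedence pairs (a, b): a must precede b.
# (Both the words and the constraint are invented for illustration.)
WORDS = ["Kim", "obviously", "left"]
LP = [("Kim", "left")]

def satisfies(order, lp):
    return all(order.index(a) < order.index(b) for a, b in lp)

admissible = [" ".join(p) for p in permutations(WORDS) if satisfies(p, LP)]
print(admissible)
# ['Kim obviously left', 'Kim left obviously', 'obviously Kim left']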
Carl