Glue language notation thoughts

Avery D Andrews u7600202 at LEONARD.ANU.EDU.AU
Mon Nov 18 03:16:28 UTC 2002


>Hi Avery,
>
>It's good to know that people are looking into glue implementations.
>
<general remarks and macro stuff snipped>
>
>I have a few questions and comments about your proposed notation.
>
>First, it seems to have dispensed with the explicit typing of
>resources as either type e or type t.  This information is actually
>necessary, otherwise you'll find quantifiers scoping in all sorts of
>places where they shouldn't, and taking type e resources as their
>body. This is of course readily fixed, and I assume was left out for
>presentational reasons.

Yes, that's why it wasn't there.

>Second, I take it that the principal motivation behind your notation
>was
>to deal with `new' glue's "correlation between an arbitrary order of
>lambda abstractions on the left, and the order of implications on the
>right."  This correlated separation is in fact an important component
>of the new glue.  For one thing, it makes it clear that there is
>nothing magical about argument order, and that it really is arbitrary
>whether, e.g., you consume subject arguments before object arguments
>in a glue derivation, or vice versa.  (This is not to say that there
>are not things like grammatical role hierarchies, only that they are
>not significant to how lexical entries get written or used in a glue
>derivation).

I see the left-right separation as good, but the dependency on
correlated orders as bad, due to being hard to read.

>I think there are two dangers with notation like
>
>   like(@f_sig, @g_sig) : h
>
>(1) It blurs the separation between the meaning language and the
>    linear logic glue.  This separation allows one to look at glue
>    derivations as abstract objects in their own right, independently
>    of the details of any meaning language.  This not only leads to
>    more efficient and general implementations, but Ash Asudeh and I
>    have also been looking at the structure of these derivations as a
>    way of assessing semantic parallelism in ellipsis and coordination
>    independently of particular meanings.

Yes, so my revised proposal is (type info included):

  like(X,Y) : sig(h)~t o- {X:sig(f)~e, Y:sig(g)~e}.
  seek(X,P) : sig(h)~t o- {X:sig(f)~e, P:(sig(g)~e -o sig(H)~t) -o sig(H)~t}.
  etc.

The motivations for the layout are:

  -each semantic structure designator is separated by : from the
     formula or variable it designates, so the association between
     roles and fillers is by collocation and coindexing rather than
     the putatively hard-to-read correlation between linear order
     positions.
  -the braces suggest an unordered set (or Python dictionary)

This transforms easily into 'new glue' (the next Baby Glue should be
able to take constructors in both notations, tho with some typographical
alterations in honor of the prolog term reader), and there are various
further ways in which macros can be applied.  It does seem to me that
readability takes a big hit from having the sig's all over the place,
so I'd like a way to get rid of them, but don't have one.
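
For concreteness, here is a sketch of what I'd expect the first entry
above to become on the new-glue side (keeping my sig()~type notation,
and consuming the subject before the object, tho as you say that order
is arbitrary):

  lambda X. lambda Y. like(X,Y) : sig(f)~e -o (sig(g)~e -o sig(h)~t)

where the order of the abstractions on the left is correlated with the
order of the implications on the right.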

>(2) The notation also tempts one into thinking that there really is
>    some significance to the ordering of, say, subjects and objects in
>    glue derivations.

I don't see how my first proposal does this; hopefully the revised one
does so less.

>The business of arbitrary argument ordering maybe needs spelling out
<actual discussion snipped>

That's certainly an important point, which should perhaps be played
up more in introductory expositions: glue language semantic derivation
is order independent to pretty much the same extent as f-structure
building is.  I think of it as a sort of 'automagical' effect of the
use of a logic rather than a random collection of rewriting rules, such
as an interpretive semanticist in the 70s might have envisioned.  One
thing that's hard to understand about the whole thing (I don't think I
really understand it yet) is where the benefit of using logic as
opposed to random rewriting rules really comes from; I can grasp
various specific beneficial results, but still don't have any real
sense of what is actually responsible for them.

>In particular, if you were to perform glue
>derivations on your lexical entries, you would probably find yourself
>(i) separating them out into meaning and glue, (ii) doing
>propositional linear logic inference on the glue side, and (iii)
>assembling the meaning terms via the Curry-Howard isomorphism applied
>to the glue derivation.

That's sort of how it presently works, tho it's not done yet -- might
well be totally like that when it is done.
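
To illustrate those three steps with the `like' entry (a sketch, using
hypothetical premises kim : sig(f)~e and sandy : sig(g)~e for the
subject and object):

  (i)   meaning: lambda X. lambda Y. like(X,Y)
        glue:    sig(f)~e -o (sig(g)~e -o sig(h)~t)
  (ii)  sig(f)~e, sig(g)~e, sig(f)~e -o (sig(g)~e -o sig(h)~t) |- sig(h)~t
        (two implication eliminations)
  (iii) each elimination is a function application under Curry-Howard,
        so the term assembled at sig(h)~t is like(kim, sandy)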

>For debugging purposes, this would make it
>hard to relate the actual (separated) derivations to the (mixed)
>lexical entries that the grammar writer has provided.   And for
>anything other than toy examples, debugging is important...

But I think that the revised version is close enough to new glue so
that I wouldn't expect this to be a problem.

>Hope these comments (some of which are born of bitter experience) are
>some use.

Yes, they certainly have been; thanks!

>Regards,
>
>Dick Crouch


 - Avery Andrews


