14.659, Disc: New: Proof of Correctness in Best-First Parsing

LINGUIST List:  Vol-14-659. Fri Mar 7 2003. ISSN: 1068-4875.

Subject: 14.659, Disc: New: Proof of Correctness in Best-First Parsing

Moderators: Anthony Aristar, Wayne State U.<aristar at linguistlist.org>
            Helen Dry, Eastern Michigan U. <hdry at linguistlist.org>

Reviews (reviews at linguistlist.org):
	Simin Karimi, U. of Arizona
	Terence Langendoen, U. of Arizona

Home Page:  http://linguistlist.org/

The LINGUIST List is funded by Eastern Michigan University, Wayne
State University, and donations from subscribers and publishers.

Editor for this issue: Karen Milligan <karen at linguistlist.org>
 ==========================================================================

-------------------------------- Message 1 -------------------------------

Date:  Fri, 07 Mar 2003 07:53:42 +0000
From:  Liang Huang <lhuang at sjtu.edu.cn>
Subject:  Proof of Correctness in Best-First Parsing

In best-first (or uniform-cost) search, we should ensure that items
are popped from the priority queue in non-increasing order of merit
(probability, in the case of parsing).
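
For concreteness, here is a minimal Python sketch of the agenda loop
I have in mind (the item representation and the expand() and is_goal()
helpers are placeholders of my own, not anything from Stolcke's paper):

import heapq, itertools

def best_first_parse(initial_items, expand, is_goal):
    counter = itertools.count()   # tie-breaker so items themselves are never compared
    agenda = [(-p, next(counter), item) for item, p in initial_items]
    heapq.heapify(agenda)         # min-heap, so probabilities are negated
    finished = set()
    while agenda:
        neg_p, _, item = heapq.heappop(agenda)   # pop the highest-merit item
        if item in finished:
            continue
        finished.add(item)
        if is_goal(item):
            return item, -neg_p   # correct only if pops are non-increasing in merit
        for new_item, new_p in expand(item, -neg_p):
            if new_item not in finished:
                heapq.heappush(agenda, (-new_p, next(counter), new_item))
    return None

The first goal item popped is guaranteed to be optimal only if
expand() never produces an item with higher merit than the item just
popped, and that is exactly the condition at issue below.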

But in probabilistic Earley parsing (Stolcke, 1995), this monotonic
order is broken by the ''prediction step'':

  (i, j, A -> alpha . B beta, p)  -->  (j, j, B -> . gamma, prob(B -> gamma))

Here the probability of the rule B -> gamma may be higher than p, so
the predicted item can outrank the item that produced it. That is
just like running Dijkstra's algorithm on a graph containing negative
edges (but of course no negative cycles).
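
A toy agenda with made-up numbers makes the violation concrete (the
0.2 and 0.9 are purely illustrative):

import heapq

agenda = []
heapq.heappush(agenda, (-0.2, '(i, j, A -> alpha . B beta)'))
p1, item1 = heapq.heappop(agenda)                       # popped at merit 0.2
heapq.heappush(agenda, (-0.9, '(j, j, B -> . gamma)'))  # prediction step
p2, item2 = heapq.heappop(agenda)                       # popped at merit 0.9 > 0.2
print(-p1, -p2)   # 0.2 0.9: the pop order is no longer non-increasing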

Do you know of anyone who has proved the correctness of this
algorithm? Or do you have another derivation that ensures
monotonicity?

One possible solution is to change the ''prediction step'' into:

  (i, j, A -> alpha . B beta, p)  -->  (j, j, B -> . gamma, p * prob(B -> gamma))

In this way there are no negative edges, but it is not elegant at
all: one has to remember the p for each predicted state.
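
As a sketch (with a hypothetical item representation (i, j, lhs, rhs,
dot, merit) and a grammar that maps each nonterminal B to a list of
(gamma, prob(B -> gamma)) pairs), the modified step would be:

def predict(item, grammar):
    i, j, lhs, rhs, dot, p = item
    if dot < len(rhs) and rhs[dot] in grammar:   # next symbol is a nonterminal B
        B = rhs[dot]
        for gamma, rule_prob in grammar[B]:      # each rule B -> gamma
            # merit p * rule_prob <= p, so the predicted item can never
            # outrank the item that produced it (no ''negative edges'')
            yield (j, j, B, gamma, 0, p * rule_prob)

Since p * prob(B -> gamma) <= p, every push has merit at most that of
the pop that caused it, which restores the Dijkstra-style invariant,
at the cost of carrying p inside each predicted state.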

I'd appreciate it very much if you could discuss this issue with me.

Thanks,
Liang Huang (Bill)
Shanghai Jiao Tong University

---------------------------------------------------------------------------
LINGUIST List: Vol-14-659


