I strongly agree with your knowledge representation model. Actually, I want to do something similar. The current bottleneck is obtaining high-precision coreference resolution results so that complete Conceptual Graphs can be built from texts. So I am focusing on coreference resolution now.

I believe the precision of coreference resolution is more important than recall and F-measure, especially for the secondary training set in semi-supervised learning.
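
To make that concrete, here is a toy sketch of what I mean by trading recall for precision when growing the secondary training set: keep only the coreference links the current model is very confident about. All of the names below (resolve_coreference, the 0.9 threshold) are hypothetical, just for illustration.

    # Sketch only: favour precision over recall when self-training.
    CONFIDENCE_THRESHOLD = 0.9  # deliberately strict: precision over recall

    def select_secondary_training_pairs(documents, resolve_coreference):
        """Keep only high-confidence coreference links for further training."""
        selected = []
        for doc in documents:
            # resolve_coreference is assumed to yield (mention_a, mention_b, score)
            for mention_a, mention_b, score in resolve_coreference(doc):
                if score >= CONFIDENCE_THRESHOLD:
                    selected.append((mention_a, mention_b))
        return selected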

On Sun, Mar 13, 2011 at 4:44 AM, John F. Sowa <sowa@bestweb.net> wrote:
> On 3/12/2011 12:12 AM, Nathan Hu wrote:
>> I believe graph-based knowledge representation models are headed in the
>> right direction.
>>
>> I recently proposed an instance-based knowledge network and a graph
>> matching based method for word sense disambiguation
>> (Incorporating Coreference Resolution into Word Sense Disambiguation -
>> CICLing 2011).
>>
>> I'd like to discuss this TextGraphs topic with anyone interested.
>
> I agree that such methods are promising. And I very strongly agree
> with the opening sentences in the abstract of your article:
>
>> Word sense disambiguation (WSD) and coreference resolution are two
>> fundamental tasks for natural language processing. Unfortunately, they
>> are seldom studied together. In this paper, we propose to incorporate
>> the coreference resolution technique into a word sense disambiguation
>> system for improving disambiguation precision.
>
> We do something similar at our company (VivoMind Research, LLC), and
> it has proved to be very effective. In fact, we store and index all the
> graphs that have been generated from previous sentences in the same
> text or related texts in a corpus. Then we use high-speed methods
> to retrieve similar graphs in logarithmic time. We use them for
> both disambiguation and coreference resolution.
>
> The following slides show some applications that illustrate the methods:
>
> http://www.jfsowa.com/talks/pursue.pdf
>
> John
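
Regarding the logarithmic-time retrieval of similar graphs that you describe above: I do not know the details of your indexing, but here is a naive sketch of one way I picture it, by mapping each conceptual graph to a canonical signature and keeping the signatures in a sorted index so lookup is a binary search. The names and the signature scheme below are my own guesses for illustration, not a description of the VivoMind system.

    import bisect

    def graph_signature(edges):
        """Map a graph, given as (concept, relation, concept) triples,
        to a canonical string key so similar graphs get nearby keys."""
        return "|".join(sorted(f"{a}-{r}-{b}" for a, r, b in edges))

    class GraphIndex:
        def __init__(self):
            self._keys = []    # sorted signatures; binary search is O(log n)
            self._graphs = []  # graphs stored in the same order as their keys

        def add(self, edges):
            key = graph_signature(edges)
            pos = bisect.bisect_left(self._keys, key)
            self._keys.insert(pos, key)
            self._graphs.insert(pos, list(edges))

        def similar(self, edges, k=3):
            """Return up to 2*k stored graphs whose signatures sit
            closest to the query's signature in the sorted order."""
            pos = bisect.bisect_left(self._keys, graph_signature(edges))
            lo, hi = max(0, pos - k), min(len(self._graphs), pos + k)
            return self._graphs[lo:hi]

Retrieved neighbours like these could then be compared by graph matching for disambiguation and coreference, as in my CICLing paper.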