<div dir="ltr">Linas,<br><br>I guess my response should have been characterized as "beginner" instead of a "common". ;)<br><br>Thanks for filling in with the more advanced techniques!<br><br>-Dan<br><br>
<br><div class="gmail_quote">On Mon, Sep 8, 2008 at 1:30 PM, Linas Vepstas <span dir="ltr"><<a href="mailto:linasvepstas@gmail.com">linasvepstas@gmail.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="border-left: 1px solid rgb(204, 204, 204); margin: 0pt 0pt 0pt 0.8ex; padding-left: 1ex;">
2008/9/7 Dan Garrette <<a href="mailto:dhgarrette@gmail.com">dhgarrette@gmail.com</a>>:<br>
<div class="Ih2E3d">> Vrone,<br>
><br>
> The most common way to turn syntax into semantics is by defining logical<br>
> expressions for each part of speech in a sentence and then composing those<br>
> parts to make the meaning of the whole. For instance, to find the meaning<br>
> of the sentence "John sees a man" we can start by assigning a logical<br>
> meaning of each word in the sentence:<br>
<br>
</div>Surely this is the "least common" way of doing things:<br>
it completely ignores traditional work on parsing and is<br>
uninformed by any sort of corpus statistics. One usually<br>
finishes by assigning meaning, not starts with it.<br>
<br>
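(To be concrete about what is being critiqued: the word-by-word composition Dan describes might be sketched with NLTK's logic package roughly as follows -- the particular lambda terms are illustrative guesses, not his actual assignments.)<br>
<pre>
# Rough sketch of lambda-calculus composition using NLTK's sem package.
# The lambda terms below are illustrative choices, not Dan's own.
from nltk.sem import Expression

read_expr = Expression.fromstring

john  = read_expr(r'\P.P(john)')                   # proper name, type-raised
a_man = read_expr(r'\Q.exists x.(man(x) & Q(x))')  # "a man"
sees  = read_expr(r'\X.\z.X(\y.sees(z,y))')        # transitive verb "sees"

# Compose VP = sees(a man), then S = john(VP):
vp = sees.applyto(a_man).simplify()
s  = john.applyto(vp).simplify()
print(s)   # exists x.(man(x) & sees(john,x))
</pre>
<br>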
The most "common" way might be to employ a number<br>
nested, layered techniques, from part of speech taggers<br>
and morphology stemmers, to various phrase structure<br>
or dependency grammars to obtain relations, for example<br>
<br>
subj(John, sees)   # who is seeing?<br>
obj(a man, sees)   # what are they seeing?<br>
<br>
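(A minimal sketch of that layered pipeline using NLTK; the grammatical relations are written out by hand here, standing in for whatever parser one plugs in, and NLTK's tokenizer and tagger models are assumed to be installed.)<br>
<pre>
# Tokenize, POS-tag and stem, then record the grammatical relations a
# dependency parser would emit for "John sees a man".  Assumes nltk plus
# its 'punkt' and tagger models; the relations themselves are hand-written.
import nltk

tokens = nltk.word_tokenize("John sees a man")
tagged = nltk.pos_tag(tokens)      # [('John', 'NNP'), ('sees', 'VBZ'), ...]
stems  = [nltk.PorterStemmer().stem(t) for t in tokens]

relations = [                      # what link-grammar, MaltParser, etc. would supply
    ("subj", "sees", "John"),      # who is seeing?
    ("obj",  "sees", "man"),       # what are they seeing?
]
print(tagged, stems, relations, sep="\n")
</pre>
<br>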
Meaning is provided both by the structure of the sentence<br>
and by prior knowledge: the word "John" might<br>
be a man, or might be a toilet, and "see" might mean "view"<br>
or it might mean "accept visitors", so that "John sees a man"<br>
might be a funny way of saying "the toilet is accepting<br>
visitors".<br>
<br>
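(That sort of prior knowledge is what a sense inventory supplies; for instance, WordNet -- here queried through NLTK -- lists both readings of "john":)<br>
<pre>
# Print the WordNet senses of "john" and "see" (requires nltk's wordnet corpus).
# Among them is the toilet reading of "john", alongside many readings of "see".
from nltk.corpus import wordnet as wn

for word in ("john", "see"):
    for syn in wn.synsets(word):
        print(word, syn.name(), "-", syn.definition())
</pre>
<br>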
Importantly, one can make rather good progress<br>
by abandoning syntactic structure completely; see<br>
for example Radu Mihalcea's work on word-sense<br>
disambiguation, which, in a nutshell, solves a Markov<br>
chain on word senses. There's not a drop of grammar<br>
or parsing in it (or first-order logic either). It's a solid<br>
result which should cause anyone working on<br>
this-n-such theory of grammar to stop, pull their head<br>
out of the sand, and re-evaluate their strategy.<br>
<br>
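(A toy version of that idea: rank candidate WordNet senses by PageRank over a sense graph, i.e. the stationary distribution of a Markov chain on senses. networkx and NLTK's WordNet are assumed, and the gloss-overlap weighting is a crude stand-in for the similarity measures Mihalcea actually uses.)<br>
<pre>
# Toy graph-based WSD in the spirit of Mihalcea: nodes are candidate
# WordNet senses, edges are weighted by a crude gloss-overlap score, and
# PageRank picks the best-ranked sense per word.
import itertools
import networkx as nx
from nltk.corpus import wordnet as wn

words  = ["John", "sees", "man"]
senses = {w: wn.synsets(w.lower()) for w in words}

def overlap(s1, s2):                           # crude stand-in for Lesk overlap
    return len(set(s1.definition().split()) & set(s2.definition().split()))

G = nx.Graph()
candidates = [(w, s) for w, ss in senses.items() for s in ss]
for (w1, s1), (w2, s2) in itertools.combinations(candidates, 2):
    if w1 == w2:
        continue                               # only link senses of different words
    ov = overlap(s1, s2)
    if ov:
        G.add_edge((w1, s1), (w2, s2), weight=ov)

rank = nx.pagerank(G, weight="weight")
for w in words:
    best = max(senses[w], key=lambda s: rank.get((w, s), 0.0), default=None)
    print(w, "->", best)
</pre>
<br>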
The work described in <a href="http://nltk.org/doc/en/ch11.html" target="_blank">http://nltk.org/doc/en/ch11.html</a><br>
is curious, but I'd think a much stronger approach would be<br>
to assign probabilities to each statement of first-order<br>
logic (and thus obtain, for example, a "Markov logic<br>
network", or a generalization, the PLN) Such probabilities<br>
would be computed from corpus analysis.<br>
<br>
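(For instance, a Markov-logic-style representation just attaches a real-valued weight to each first-order formula; here's a back-of-the-envelope sketch with entirely made-up formulas and counts.)<br>
<pre>
# Weighted first-order formulas, MLN-style.  The formulas and counts are
# hypothetical; the weight is the log-odds of the pattern holding vs. not,
# which is roughly how MLN weights are interpreted.
import math

corpus_counts = {                       # (times pattern held, times it did not)
    "sees(x,y) -> person(x)": (960, 40),
    "john(x) -> person(x)":   (700, 300),   # "John" is occasionally the toilet
}

weighted_formulas = [(math.log(pos / neg), f)
                     for f, (pos, neg) in corpus_counts.items()]
for w, f in weighted_formulas:
    print(f"{w:+.2f}  {f}")
</pre>
<br>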
I agree that "sense" can be considered to be the set of<br>
predicates and rules that were triggered during a parse.<br>
But the examples at that URL also seem to make use of<br>
somewhat olde-fashioned ideas like NP and VP, when<br>
there in fact seems to be a much, much richer and broader<br>
set of relationships between words, phrases, colligations, etc. --<br>
from the hundreds of dependency links and thousands of<br>
rules in link-grammar to the thousands of FrameNet-style<br>
frames. I just don't see that first-order logic will ever<br>
successfully capture this -- I'd say that Cyc illustrates what<br>
the limit of that technique is.<br>
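(To make the contrast concrete, here is an NP/VP bracketing of "John sees a man" next to the sort of typed links link-grammar produces -- the links are written out by hand following its usual S/O/D naming, no parser is run.)<br>
<pre>
# Two views of the same sentence: a classic phrase-structure bracketing
# versus a flat set of typed links (hand-written, link-grammar style).
phrase_structure = ("S",
                    ("NP", "John"),
                    ("VP", "sees", ("NP", "a", "man")))

typed_links = [
    ("S", "John", "sees"),   # subject link
    ("O", "sees", "man"),    # object link
    ("D", "a",    "man"),    # determiner link
]
print(phrase_structure)
print(typed_links)
</pre>
<br>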
<font color="#888888"><br>
--linas<br>
</font></blockquote></div><br></div>