On 9/12/07, <b class="gmail_sendername">John F. Sowa</b> <<a href="mailto:sowa@bestweb.net">sowa@bestweb.net</a>> wrote:<div><span class="gmail_quote"></span><blockquote class="gmail_quote" style="border-left: 1px solid rgb(204, 204, 204); margin: 0pt 0pt 0pt 0.8ex; padding-left: 1ex;">
<br></blockquote><blockquote class="gmail_quote" style="border-left: 1px solid rgb(204, 204, 204); margin: 0pt 0pt 0pt 0.8ex; padding-left: 1ex;"> I think we agree on the following points:<br><br> 1. When a system (human or computer) finds an unrecognizable
<br> grammatical pattern, it should make its best effort to find<br> some interpretation for it.</blockquote><div><br>This sounds like ad-hoc generalization, so this is what I think we need, yes. But in my view this is not an error coping mechanism, it is the mechanism. This is syntax. Syntax is the process of finding new generalizations to justify new combinations of words.

>  2. But it should also keep some kind of record of the original
>     pattern and the interpretation made. That is necessary
>     in order to make a generalization, if the same or similar
>     pattern occurs again.

The combination is stored. But this is the path to lexicon. Everything stored has the nature of lexicon. There is no "grammar" as such. There is only the tendency to repetition, which is lexicon, and the ability to make new (context-specific) generalizations, which is syntax.

>  3. If another unrecognizable pattern comes along, the system
>     should check whether there were earlier patterns like it
>     in whatever storage is used for "nonce grammar" instances.

An individual (context-specific) generalization should be stored in case of repetition, I grant you. Though once again this is best seen as lexicon, not syntax. Syntax should be seen as the act of new (context-specific) generalization.

>  4. As more examples of a nonce-grammar pattern accumulate,
>     its status increases from "probable error" to "temporary
>     innovation" to "common in the genre" to "standard".

I don't see the path as "probable error" to "temporary innovation" to "standard". I see the path as "novel generalization" to "repeated generalization" to, eventually, ossified generalization in lexicon (which is often no longer justified by generalizations in the wider language).

>  5. The above points imply that some kind of storage is
>     required for every unrecognized pattern -- at least
>     until it has been assimilated into some encoding that
>     is similar in nature to the encoding of whatever is
>     typically called the "standard" grammar.

Every pattern is recognized, more or less. That is what syntax does: it makes new generalizations. As each new pattern is repeated, it becomes assimilated. That assimilation is what we call lexicon.

But if you are valuing ad-hoc generalization, great. That is what I think we need. Call it an "error coping mechanism" if you will.

Basically, to sum up: if we model syntax as ad-hoc generalization over a corpus of examples, I think we can solve it. The point of view I've been trying to present here is that we have failed to model syntax effectively because we have assumed that grammatical generalizations over corpora must be complete.

Drop this one assumption, and I think we will start to get good results immediately.

I hope people reading this will now have at least an awareness of that idea.
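
For anyone who would like the idea in concrete terms, here is a deliberately tiny sketch of what I mean by ad-hoc generalization over a corpus. A novel word combination is accepted not because a complete grammar licenses it, but because an analogous combination is attested between words that share contexts with it. The three-sentence corpus, the use of adjacent word pairs, and the one-shared-context similarity test are all illustrative assumptions, nothing more.

    # Toy sketch of "ad-hoc generalization over a corpus of examples".
    # Everything here (the corpus, adjacency pairs, the crude similarity
    # test) is an illustrative assumption.
    from collections import defaultdict

    corpus = [
        "the dog chased the cat",
        "the cat chased the mouse",
        "a dog bit the postman",
    ]

    pairs = set()                 # attested adjacent word pairs
    contexts = defaultdict(set)   # word -> set of neighbouring words
    for sentence in corpus:
        words = sentence.split()
        for a, b in zip(words, words[1:]):
            pairs.add((a, b))
            contexts[a].add(b)
            contexts[b].add(a)

    def similar(w):
        """Words sharing at least one context with w (a crude similarity)."""
        cw = contexts.get(w, set())
        return {v for v, cv in contexts.items() if v != w and cv & cw}

    def justify(a, b):
        """Accept a novel pair if some analogous pair is attested."""
        if (a, b) in pairs:
            return (a, b)                    # already stored: lexicon
        for a2 in similar(a) | {a}:
            for b2 in similar(b) | {b}:
                if (a2, b2) in pairs:
                    return (a2, b2)          # the supporting analogy
        return None                          # no generalization found

    print(justify("mouse", "bit"))   # prints an attested analogy, e.g. ('dog', 'bit')
    print(justify("zebra", "bit"))   # None: no analogy in this tiny corpus

Even at this scale the important property is visible: each generalization is made for one combination, in one context, and nothing requires the set of generalizations to be complete.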

> RF> There is a vague idea we have to merge lexical and syntactic
> > aspects of text, but no one has a clue how to do that.
>
> I would say that there are many clues, many proposals for doing
> different kinds of mergers, but not enough evidence to make a
> good recommendation about which one(s) to choose.
>
> I am encouraged by a steady stream of recent publications
> that indicate the "mainstream" is creeping along in this
> direction. Following are a few (check Google for full ref's):
>
>  - _Simpler Syntax_ by Culicover & Jackendoff (2005) is a
>    recognition by long-time Chomskyans that a major overhaul
>    is long overdue. However, they are still trying to preserve
>    a very large part of the results obtained by the Chomskyan
>    linguists in a way that is fairly conservative.
>
>  - _Dynamic Syntax_ by Kempson, Meyer-Viol, & Gabbay (2001)
>    is a more radical approach to syntax, but the semantic
>    theory by Gabbay is a very formal logic-based approach.
>    Gabbay uses "decorated trees" instead of a linear notation
>    for the logic, which I like, since conceptual graphs can
>    be viewed as "decorated trees" glued together in similar ways.
>    But I believe the logic should be as dynamic as the syntax.
>
>  - _Cognitive Linguistics_ by Croft & Cruse (2004) combines
>    radical construction grammar (RCG) with lexical semantics
>    in a way that makes both more dynamic than the above approaches.
>    RCG does allow syntax to evolve from more primitive patterns,
>    but Croft and Cruse don't say how it would be possible for
>    logic to evolve.

Thanks for the references. Any new attempt is to be valued. When the state of the art is clearly flawed, innovation must be the norm.

> The fact that children by the age of 3 use words for the logical
> operators (e.g., 'and', 'not', 'some', and others) indicates that
> logic somehow evolves out of the infant's early one- and two-word
> phrases. And the fact that all mathematicians, logicians, and
> computer programmers use NLs to explain the most abstruse theories
> imaginable indicates that there is no limit to how far the
> expressive power can evolve.

This is by way of a new topic. The two are related, and I'm interested in it, but I think I'll post it as a separate thread and leave this one for any remaining quibbles about grammatical incompleteness.

-Rob