<HTML><BODY style="word-wrap: break-word; -khtml-nbsp-mode: space; -khtml-line-break: after-white-space; ">Rob,<DIV>just for the record (because John cares about these things), I didn't mean to equate decidability with completeness, only to say that the second is a necessary condition for the first for a calculus (but not vice versa).</DIV><DIV>best</DIV><DIV>YW</DIV><DIV><BR class="khtml-block-placeholder"></DIV><DIV><DIV><DIV>On 9 Sep 2007, at 06:52, Rob Freeman wrote:</DIV><BR class="Apple-interchange-newline"><BLOCKQUOTE type="cite">John,<BR><BR>You'll confuse the issue with so many words.<BR><BR>For "completeness" I am happy to agree with Yorick Wilks and equate it with "decidability". I'm indebted to Yorick for pointing out this was how the problem was seen by generativists. <BR><BR>What it means to be "computable" was first defined by Alan Turing (and Alonzo Church?). I do not intend my sense to differ in any way.<BR><BR>The question of decidability is a technical one within this framework. According to Turing's theory there are computable problems which are not decidable. It is not a question of adding more information, "semantic" or otherwise, to make them decidable. They are not decidable because they have too much power, not too little. <BR><BR>I am suggesting natural language might be such a system.<BR><BR>That would not be a bad thing, by the way. Decidability acts as a kind of straitjacket on computability. It is a limitation on its power. A generally computable model of natural language would be more powerful than a decidable model. It could be powerful enough to account for the detail of collocation and phraseology, for instance. <BR><BR>To get that power we would only need to lose the ability to _label_ language definitively. That is the content of decidability: the ability to fit language to a grammar, nothing more. 
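[Editorial aside: a minimal sketch of the computable-vs-decidable distinction discussed above; the `semi_decide` helper and the toy square-number set are hypothetical illustrations, not anything proposed in this thread. A semi-decider halts with "yes" on members of a set, but for an undecidable set it may search forever on non-members; capping the search only buys a "don't know", never a safe "no".]

```python
from itertools import count

def semi_decide(generate, target, max_steps=None):
    """Enumerate generate(0), generate(1), ... and return True the
    moment `target` appears.  For a recursively enumerable but
    undecidable set, no finite cap suffices in general: cutting the
    search off yields only None ("don't know"), never a safe "no".
    """
    for n in count():
        if max_steps is not None and n >= max_steps:
            return None          # gave up -- the undecidable case
        if generate(n) == target:
            return True          # membership, once found, is confirmable

# Toy enumerable set: the (actually decidable) squares, used only to
# show the mechanics of enumeration-based recognition.
squares = lambda n: n * n
```

Here `semi_decide(squares, 49, max_steps=100)` finds 49 at n = 7 and returns True, while `semi_decide(squares, 50, max_steps=100)` exhausts the cap and returns None; for a genuinely undecidable set there is no cap that turns that None into a trustworthy "no".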
I personally would not be bothered if it turned out that tags and tree-banks were officially meaningless, and corpora the most complete description of a language possible, especially if that meant we could recognize speech accurately, and index information effectively. <BR><BR>Anyway, I think the possibility is worth considering.<BR><BR>-Rob<BR><BR> On 9/9/07, <B class="gmail_sendername">John F. Sowa</B> <<A href="mailto:sowa@bestweb.net">sowa@bestweb.net</A>> wrote:<DIV><SPAN class="gmail_quote"> </SPAN><BLOCKQUOTE class="gmail_quote" style="border-left: 1px solid rgb(204, 204, 204); margin: 0pt 0pt 0pt 0.8ex; padding-left: 1ex;">Rob,<BR><BR>The original definition of "generative grammar", which is used<BR> for formal languages, very explicitly defines "completeness":<BR><BR> A language L is defined as the set of all and only those<BR> sentences that can be generated (or parsed) by a grammar G.<BR><BR>This definition has proved to be very useful for artificial <BR>languages, such as programming languages and formal logics.<BR><BR>But it quickly became obvious that no grammar and parser could<BR>come anywhere close to generating or parsing all and only the<BR>sentences commonly used in any NL. Therefore, Chomsky qualified <BR>it by saying that G would only describe the "competence" of an<BR>"ideal" speaker, not the performance of any actual speaker.<BR><BR>But even that definition is woefully inadequate, because there<BR> is no grammar/parser combination in existence today that can<BR>correctly parse more than about 50% of the sentences published<BR>in well-edited texts. (Many parsers can produce parses for more<BR>than 50% of the sentences, but if you eliminate any parse that <BR>has one or more errors, as judged by a competent linguist, even<BR>the best have difficulty in reaching 50% completely correct.)<BR><BR> > Take the opposite point of view. Assume only that language is<BR> > generally computable. Then it may be undecidable. 
<BR><BR>I don't know what you mean by "computable". But the question<BR>of undecidability is trivial to show for any NL grammar in<BR>existence today. Just pick up any well-edited book, magazine,<BR>or newspaper you can find around the house. Then run the sentences <BR>from the first page through the parser. That will demonstrate<BR>that at least 99% of the grammars fail on a small finite set.<BR>In the unlikely event that one of the parsers actually produces<BR>correct parses for all the sentences, just try it on the next <BR>book, magazine, or newspaper.<BR><BR>By the way, you can get higher percentages of correct parses *if*<BR>you supplement the grammar with semantic and pragmatic tests.<BR>But that is harder to implement, and it violates Chomsky's <BR>assumption of the autonomy of syntax.<BR><BR>John<BR><BR></BLOCKQUOTE></DIV><BR><DIV style="margin-top: 0px; margin-right: 0px; margin-bottom: 0px; margin-left: 0px; ">_______________________________________________</DIV><DIV style="margin-top: 0px; margin-right: 0px; margin-bottom: 0px; margin-left: 0px; ">Corpora mailing list</DIV><DIV style="margin-top: 0px; margin-right: 0px; margin-bottom: 0px; margin-left: 0px; "><A href="mailto:Corpora@uib.no">Corpora@uib.no</A></DIV><DIV style="margin-top: 0px; margin-right: 0px; margin-bottom: 0px; margin-left: 0px; "><A href="http://mailman.uib.no/listinfo/corpora">http://mailman.uib.no/listinfo/corpora</A></DIV> </BLOCKQUOTE></DIV><BR></DIV></BODY></HTML>