From annes at HTDC.ORG Thu Jan 2 21:13:54 1997 From: annes at HTDC.ORG (Anne Sing) Date: Thu, 2 Jan 1997 11:13:54 -1000 Subject: PARSER COMPARISON Message-ID:

Before going into a detailed discussion of the points made in Daniel Sleator's last message, I would like to point out that the one point he has not addressed is that his parser cannot meet the majority of all three areas of our challenge except through assertion. Taking a look at his web site, you will find a parser that provides a complex tree that can only be read by those who have studied his Link theory of grammar. There is a minor bit of labelling of parts of speech, but there is no indication that he can accurately label subjects, verbs, objects, and complements of any sort. There is also no indication that he can return multiple analyses of ambiguous sentences ("John saw a man in the park" or "the Chinese instructor"). Most importantly, there is absolutely no indication that he can do anything with the trees that he provides; e.g., can he change actives to passives, statements to questions, or answer questions? The answer is probably no, but we simply do not know.

A parser that merely produces a set of obscure trees is of value to no one. This is precisely the reason that Derek and I have issued this challenge. If a parser is parsing, it should be able to meet the minimum requirements of our challenge in a way that ANYONE can see, not just those who are the insiders of a particular theory or those with a strong background in syntax. Parsing that merely results in a labelled bracketing or a tree structure is nothing more than an intellectual exercise. DO SOMETHING WITH IT THAT ALL CAN JUDGE OR KEEP IT OUT OF THE RUNNING UNTIL YOU CAN is the motto we bring to this challenge. In particular, put something on the web that indicates that you can do something of value, such as ask and answer questions or correct grammar.
These functions absolutely require that you can accurately handle all the items listed in our challenge, or you can't do them at all. Certainly, in the private sector, the individuals who have to make decisions concerning parsers are not going to be linguists, and you cannot ask them to get a degree in linguistics before they are qualified to make such decisions. If that is a requirement, companies are simply not going to participate.

HERE IS A MORE DETAILED RESPONSE TO DANIEL SLEATOR'S MESSAGE

At 11:34 AM 1/1/97 -1000, Daniel Sleator wrote: He begins with a review of our challenge.

>Philip Bralich suggests that those of us working in the area of parsing >should make our systems available via the web. Davy Temperley and I are >in full agreement with this. That's why a demonstration of our link >grammar system has been up on the web for over a year. Go to >"www.cs.cmu.edu/~sleator" and click on "link grammar" to get to the >parser page. > >Philip has also proposed a set of criteria by which parsing systems can >be judged: > >> In addition to using a dictionary that is at least 25,000 words in >> size and working in real time and handling sentences up to 12 or 14 >> words in length (the size required for most commercial applications), >> we suggest that parsers should also meet the following standards >> before engaging this challenge: >> >> At a minimum, from the point of view of the STRUCTURAL ANALYSIS OF >> STRINGS, the parser should: 1) identify parts of speech, 2) identify >> parts of sentence, 3) identify internal clauses, 4) identify sentence >> type (without using punctuation), and 5) identify tense and voice in >> main and internal clauses. >> >> At a minimum, from the point of view of EVALUATION OF STRINGS, the >> parser should: 1) recognize acceptable strings, 2) reject unacceptable >> strings, 3) give the number of correct parses identified, 4) identify >> what sort of items succeeded (e.g.
sentences, noun phrases, adjective >> phrases, etc), 5) give the number of unacceptable parses that were >> tried, and 6) give the exact time of the parse in seconds. >> >> At a minimum, from the point of view of MANIPULATION OF STRINGS, the >> parser should: 1) change questions to statements and statements to >> questions, 2) change actives to passives in statements and questions >> and change passives to actives in statements and questions, and 3) >> change tense in statements and questions.

>Whether or not anybody else agrees that these are the right desiderata, >it's useful that he's put them forward. We can use them to evaluate >our own work, and Bralich's work as well. We have done this, and >it seems to us that our system is superior to Bralich's.

This is only an assertion until the functions are written that show it in a manner an outsider can judge. If you can do it at all, then you know that each function can be written in one day. They should take the part-of-speech and part-of-sentence information and all the other information and arrange it in a way that is easy for everyone to read. For example, for "The man who Mary likes is reading a book":

"The" is an article
"man" is a noun
"The man who Mary likes" is the subject of "is reading"
"who" is the direct object of the verb "likes"
This is a statement
It is simple present active

And so on. Expecting the user to learn your theory before he can see these things is just asking too much.

>The version of link grammar that we have put up on the web already does >very well in a number of these criteria.

But not many. And not in a way that others can see or judge.

> Regarding STRUCTURAL ANALYSIS, >the parser outputs a representation of a sentence which contains much of >the information discussed by Bralich. Parts of speech are shown >explicitly; things like constituent structure are virtually explicit >(for example, a subject phrase is anything that is on the left end of an >"S" link).
Tense and aspect are not explicit in the output, but they >could quite easily be recovered.

Then let's see it, which is really all we are asking in this challenge.

>Regarding EVALUATION OF STRINGS, our >system is far superior to the Ergo parser. Our system does an excellent >job of distinguishing acceptable from unacceptable sentences.

It is very difficult to see this. Certainly, many of the bad sentences we typed in looked like they parsed. More suspiciously, I typed in a number of ambiguous sentences and it returned only one parse each.

>Furthermore, it is often able to obtain useful structural information >from non-grammatical sentences, by making use of "null-links".

I have no idea what this means, and I suspect many readers don't either.

>Below we >discuss some basic problems with the Ergo parser regarding its >evaluation and analysis of sentences. We have not implemented a >MANIPULATION OF STRINGS component. We have worked out a sentence >constructing mechanism that we believe would be able to handle this as >well. Of course we'll have to do the work to make this convincing. We >may be inspired to add this feature as a result of these discussions.

This again is not at all clear. Make a function where we can see this. Until you show your ability to manipulate strings, this is just an assertion. Also, your ability to recognize a question/statement/command, as well as tense and voice, is not shown. Further, whether a sentence is simple, compound, or complex is not mentioned.

>Bralich's aim is to build a parser that will be useful for interactive >games and other applications. It is therefore restricted to short >sentences, and has a fairly small vocabulary.

Our vocabulary of over 50,000 words is double the stated 25,000. Further, the smaller the dictionary, the fewer problems there will be with ambiguity. We take the risk with the larger vocabulary and have no serious problems.
We also handle large sentences as well as anyone else; however, on sentences with more than 14 or 15 words we cannot yet meet the standards of our challenge, which is much more comprehensive than what is required by Mr. Sleator. We will be able to do this in two or three months; for now we limit ourselves to sentence lengths that are amenable to easier applications. We can now do as much as Mr. Sleator with large sentences, but that does not satisfy the challenge. When we can do all of our challenge, those will also be on our web site. (This could be as early as two months from now.)

As it stands, if the link parser were asked to do interactive games, it is unclear whether it could do much at all, as we do not know that it can make a question from a statement, let alone answer one. Further, until he clearly indicates his parser's ability to find subjects and objects and so on, there is no way we can expect his parser to properly analyze questions or return appropriate responses.

>However, even with these constraints, there are a number of very basic constructions that his parser cannot handle. Here are some examples. All of the sentences below are simply rejected by his parser.

> I went out
> He came in
> He sent it off
> I set it up
> The parser does not allow two-word verbs like "set up", "go out", "put in", which are extremely common.

He did find some verbs with some problems, but in general we handle these.

> He did it quickly
> The parser seems to have extremely limited use of adverbs. (It does accept some constructions of this type, like "He ran quickly", so perhaps this is a bug.)

We have not yet allowed such adverbs to attach sentence-finally. It is a small task, but we have not yet done it. I will see if the programmer has time for this tomorrow. One extremely important plus for our parser is that it is easy to troubleshoot. All of the problems mentioned here will be fixed before the month is out, many much sooner.
> John and Fred are here
> The parser does not know that conjoined singular noun phrases take plural verbs.

This should also be fixed by tomorrow. I am sorry that some of what is happening here may seem like our beta testing. That is, comments such as these help us find and correct problems. Again, we are not promising that we can do everything, merely that ours is the best and that it can be judged in an open forum.

> The dog jumped and the cat ran
> The parser does not seem to accept ANY sentences in which clauses are joined with conjunctions.

We only recently began adding coordinate structures. Complex phrases will be done soon, but the coordination of verb phrases ("John read a book, wrote a paper, and took a test") is about six weeks away.

> He said he was coming
> The parser accepts "He said THAT he was coming"; but it does not allow deletion of "THAT", which is extremely common with some verbs.

We will add this. I was unaware that our subcategorization frame for "say" did not allow that.

> I made him angry
> I saw him leave
> I suggested he go
> There are a number of kinds of verb complements which the parser does not handle: direct object + adjective ("I made him angry"), direct object + infinitive ("I saw him leave"), subjunctive ("I suggested [that] he go").

Again, we will add these, and easily 90% of the errors that we encounter during this public announcement, in very short time.

> His attempt to do it was a failure
> The parser cannot handle nouns that take infinitives.

> I went to the store to get some milk
> The parser cannot handle the extremely common use of infinitive phrases meaning "in order to".

I will add it.

>There are also cases where the parser assigns the wrong interpretation >to sentences. One of the biggest problems here is in the treatment of >verbs. Verbs in English take many different kinds of complements: direct >objects, infinitives, clauses, indirect questions, adjectives, object + >clause, and so on.
The Ergo Parser seems to treat all of these >complements as direct objects, and makes no distinctions between which >verbs take which kind. This means, in the first place, that it will >accept all kinds of strange sentences like "I chased that he came", >blithely labeling the embedded clause as an object of "chased". More >seriously, this often causes it to assign the wrong interpretation to >sentences. For example,

This is not true. Try it.

> I left when he came
>The verb "left" can be either transitive or intransitive. Here, it is >clearly being used intransitively, with "when he came" acting as a >subordinate clause. But the Ergo Parser treats "when he came" as a >direct object.

Yes, again something we will add. As you will note, though we parse complex and compound sentences, we do not label parts of speech in those sentences. We will soon. Of course, you should note that we have no idea whether or not the link parser can label parts of the sentence at all.

>The program does not seem to analyze relative clauses at all. In >the sentence

> The dog I saw was black

How can you say we do not analyze relative clauses when we get them exactly correct? You can say "the dog I saw", "the dog that I saw", or "the dog which I saw" and receive a correct and complete analysis.

>the parser states that "I" is the subject of "saw", and that "The dog I >saw" is the subject of "was",

which is exactly correct.

>but does not state that "dog" is the >object of "saw". The program also accepts

We could, but we decided not to because we thought it would be confusing for the user. In a sentence like "the dog that I saw" or "the dog which I saw", we would label "THAT" or "WHICH" as the object of "saw", which is more accurate. We could also label "dog" as the object, which has some sense to it as well, but we decided against it. You might try the more difficult "the man who Mary likes is reading a book" or an example of your own.
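For concreteness, the kind of transparent, labeled output the challenge keeps asking for could be represented as a plain data structure. The sketch below is a hypothetical, hand-written analysis of the example sentence discussed here; it is not the actual output format of either the Ergo parser or the link parser, and the field names are my own invention:

```python
# Hypothetical hand-written analysis in the style the challenge demands.
# Neither parser's real output format is shown here.
analysis = {
    "sentence": "The man who Mary likes is reading a book",
    "parts_of_speech": {
        "The": "article", "man": "noun", "who": "relative pronoun",
        "Mary": "proper noun", "likes": "verb", "is": "auxiliary",
        "reading": "verb", "a": "article", "book": "noun",
    },
    "subject": "The man who Mary likes",
    "main_verb": "is reading",
    "embedded_clause": {
        "clause": "who Mary likes",
        "subject": "Mary",
        "verb": "likes",
        "direct_object": "who",
    },
    "sentence_type": "statement",
    "tense": "present progressive",
    "voice": "active",
}

def readable(a):
    """Render the labels in the plain-English style the challenge calls for."""
    lines = ['"%s" is the subject of "%s"' % (a["subject"], a["main_verb"])]
    lines.append('"%s" is the direct object of "%s"' %
                 (a["embedded_clause"]["direct_object"],
                  a["embedded_clause"]["verb"]))
    lines.append("This is a %s; it is %s %s." %
                 (a["sentence_type"], a["tense"], a["voice"]))
    return "\n".join(lines)

print(readable(analysis))
```

The point of the sketch is only that such labels, however a parser derives them internally, can be shown in a form any non-linguist can check.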
I know in the past we have had trouble with the subcategorization frame for "saw", and I am not sure if we have fixed it or not.

>"The dog I died was black" >(analyzing it in the same way), further indicating that it simply has no >understanding of relative clauses.

>In the sentence "How big is it", the program analyzes "how big" as the >subject of the sentence.

Yes, "how" questions are still on the programmer's desk. They will be there in about a week.

>We were able to identify all these problems with the Ergo parser without >knowing anything about how it works -- the formalism used is >proprietary. A plethora of new problems would probably emerge if we >knew how it worked. And all of these problems will probably be >exacerbated with longer sentences.

On the contrary: because the link parser gives just an obscure tree rather than user-friendly output, it is impossible to know whether his parser does much of anything at all.

>All of these problems with the Ergo Parser - constructions that it does >not accept, and things that it mis-analyzes - are things that our system >handles well. Indeed, the _original_ 1991 version of our parser could >handle all these things. In our version 2.0, released in 1995, we >incorporate many constructions which are less common. We should point >out that even the latest version of our parser is far from perfect. It >finds complete, correct parses for about 80% of Wall Street Journal >sentences.

This is merely assertion. You simply must prepare the functions that will show this to the user and put them on the web. Especially, you need to show that your parser can handle the manipulation of strings rather than just a bracketing. A bracketing is meaningless unless you can use it to do something with the language you are analyzing. No one really cares about statistical analyses of words or sentences; what people need and want are applications that will allow them to use real language in real time with computer applications.
There is nothing at your site that indicates your ability to do this.

>The reader can try both systems for himself or herself, and come to >his/her own conclusions. (The Ergo parser is at www.ergo-ling.com, ours >is at www.cs.cmu.edu/~sleator.)

Yes, please.

> Daniel Sleator > Davy Temperley

In sum, the purpose of our challenge is to allow the academic community and private sector an opportunity to see and judge for themselves what is possible in the area of the analysis of grammar. We proposed a set of minimum standards that are necessary to show that a parser is what we call "commercially viable." Until the link parser demonstrates its ability to meet these challenges in a way that anyone can see, we simply do not know that it is "commercially viable." Further, we did not claim that our parser was perfect, just the best, and that we are willing to put it to the test.

An important aspect of our parser is that it is easy to troubleshoot. Early next week we will go through all the sentences that were input during this challenge and then address the problems. This will take a few days to a couple of weeks, depending on the problem. Please try these parsers and then try them again in a couple of weeks. I am sure you will agree that we have redefined what parsing is and can be. I also suspect that the link parser will not be able to meet the challenge any more in two weeks than it can now; that is, to meet it with a user-friendly web site rather than with assertion and obfuscation.

Phil Bralich

Philip A. Bralich, Ph.D.
President and CEO Ergo Linguistic Technologies 2800 Woodlawn Drive, Suite 175 Honolulu, HI 96822 Tel: (808) 539-3920 Fax: (808) 539-3924

Sincerely, Anne Sing ERGO LINGUISTIC TECHNOLOGIES Manoa Innovation Center 2800 Woodlawn Drive, Suite 175 Honolulu, Hawaii 96822 TEL: (808) 539-3920 FAX: (808) 539-3924

From annes at HTDC.ORG Thu Jan 2 21:24:03 1997 From: annes at HTDC.ORG (Anne Sing) Date: Thu, 2 Jan 1997 11:24:03 -1000 Subject: Rapid Parser Repairs Message-ID:

I didn't realize it, but our head programmer was here last night (the holiday), and we fixed all but one or two of the sentences that Mr. Sleator said didn't work. Part of the problem was that the verb section of our dictionary on the web was corrupted. The important point is that it is easy for us to update and repair problems with our parser, something that most others cannot match; in most cases, even minor repairs take months.

Phil Bralich

Philip A. Bralich, Ph.D. President and CEO ERGO LINGUISTIC TECHNOLOGIES Manoa Innovation Center 2800 Woodlawn Drive, Suite 175 Honolulu, HI 96822 TEL: 808-539-3920 FAX: 808-539-3924

From kuzar at RESEARCH.HAIFA.AC.IL Fri Jan 3 06:38:18 1997 From: kuzar at RESEARCH.HAIFA.AC.IL (Ron Kuzar) Date: Fri, 3 Jan 1997 08:38:18 +0200 Subject: Tone of Discussion Message-ID:

Sorry to interfere, but the arrogant tone of discussion as initiated by Anne Sing is very annoying. I, for one, am very interested in trees (and other structural descriptions) and do not care at all if these trees can afterwards be utilized for commercial products. I do think that intellectual challenges may be launched and may benefit the thinking community, so could you please calm down and spare us the show. Roni

--------------------------------------------------------------- | Dr. Ron Kuzar | | Office address: Department of English Language and Literature | | Haifa University | | IL-31905, Haifa, Israel | | Office fax: +972-4-824-0128 (attention: Dept. of English) | | Home address: 17/6 Harakefet St.
| | IL-96505, Jerusalem, Israel | | Telephone: +972-2-641-4780 (Local 02-641-4780) | | E-mail: kuzar at research.haifa.ac.il | ---------------------------------------------------------------

From bralich at HAWAII.EDU Fri Jan 3 18:07:41 1997 From: bralich at HAWAII.EDU (Philip A. Bralich, Ph.D.) Date: Fri, 3 Jan 1997 08:07:41 -1000 Subject: Tone of Discussion Message-ID:

At 08:38 PM 1/2/97 -1000, Ron Kuzar wrote:

>Sorry to interfere, but the arrogant tone of discussion as initiated by >Anne Sing is very annoying. I, for one, am very interested in trees (and >other structural descriptions) and do not care at all if these trees can >be afterwards utilized for commercial products. I do think that >intellectual challenges may be launched and may benefit the thinking >community, so could you please calm down and spare us the show.

These issues are more important than your letter seems to indicate. Parsing technology is an important area for the future of computational linguistics, and a proper understanding of syntax is crucial to this endeavor. However, the creation of trees alone is not enough. The trees are meant to illustrate generalizations about language based on a particular theory of syntax. If the trees cannot be used to manipulate sentences or to label parts of the sentence such as subjects and verbs, then the theory is not adequate. The real test is not the creation of trees; the test is whether these trees allow you to analyze and manipulate language in a significant way, thereby demonstrating the efficacy of the theory.

As for the tone, I honestly don't see it to be much different from about 90% of what occurs on this or other lists. Certainly the tone of your message seems problematic. It is, in many cases, necessary to make a point in a world that is increasingly dominated by this sort of rhetoric. My participation in it is reluctant; however, if I did not participate, my arguments would be lost.

Phil Bralich

Philip A. Bralich, Ph.D.
President and CEO Ergo Linguistic Technologies 2800 Woodlawn Drive, Suite 175 Honolulu, HI 96822 Tel: (808) 539-3920 Fax: (808) 539-3924

From PDeane at DATAWARE.COM Fri Jan 3 18:11:00 1997 From: PDeane at DATAWARE.COM (Paul Deane) Date: Fri, 3 Jan 1997 13:11:00 -0500 Subject: FW: Parser Challenge Message-ID:

After reading the recent postings on FUNKNET about the parser challenge, I went to the Ergo parser site and tried it out. I was particularly interested since I have worked with the Link Grammar parser extensively, and other parsers, and so I have a pretty good idea what the state of the art looks like.

The functionality built into the Ergo interface is very nice: certainly it is an advantage, for the purposes of evaluating parsers, to be able to get the grammatical analysis output directly in a simple and easily understood format. And such functionality as getting transformational variants of sentences (especially question-answer pairs) is of obvious commercial benefit. (Though there are certainly other sites with such functionality. Usually, though, that is something built for a particular application on top of a parser engine, rather than being built into the parser. It would be nice as a standard parser feature, though.)

Leaving that aside, I found the performance of the Ergo parser substantially below the state of the art in the most important criterion: being able to parse sentences reliably - at least, judging by the web demo (though there are some risks in doing so, of course, since it is always possible that performance problems are the result of incidental bugs rather than the fundamental engine or its associated database). Quite frankly, though, the self-imposed limitation of 12-14 words concerned me right off the bat, since most of the nastiest problems with parsers compound exponentially with sentence length. But I decided to try it out within those limitations.
As a practical test, I took one of the emails sent out from Ergo, and tried variants of the sentences in it. By doing this, I avoided the trap of trying simple garden-variety "example sentences" (which just about any parser can handle) in favor of the variety of constructions you can actually get in natural language text. But I reworded it slightly where necessary to eliminate fragments and colloquialisms and to get it into the 12-14 word length limit. That meant in most cases I had to try a couple of variants involving parts of sentences, since most of the sentences in the email were over the 12-14 word limit. Here were the results:

I didn't realize it but our head programmer was here last night. -- did not parse

I fixed the sentences that Mr. Sleator said didn't work. -- failed to return a result at all within a reasonable time; I turned it off and tried another sentence after about ten minutes.

Our verb section of our dictionary on the web was corrupted. -- parsed in a reasonable time.

Part of the problem was that our dictionary was corrupted. -- took 74.7 seconds to parse.

It is easy for us to update and repair problems with our parser. -- again, it failed to return a result in a reasonable time.

This is something that most others cannot handle. -- did not parse.

Even minor repairs take months. -- again, it failed to return a result in a reasonable time.

I am not particularly surprised by these results. Actual normal use of language has thousands of particular constructions that have to be explicitly accounted for in the lexicon, so even if the parser engine Ergo uses is fine, the database could easily be missing a lot of the constructions necessary to handle unrestricted input robustly. Even the best parsers I have seen need significant work on minor constructions; but these sentences ought to parse. They are perfectly ordinary English text (and in fact all but one parses in less than a second on the parser I am currently using).
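The procedure above — submit a sentence, then classify the outcome as parsed, rejected, or timed out — amounts to a small informal evaluation harness. A minimal sketch of that protocol, with a hypothetical `parse` callable standing in for a call to either web demo (neither of which exposes a programmatic API):

```python
import time

def try_sentence(parse, sentence, timeout_seconds=60.0):
    """Submit one sentence to a parser callable and classify the outcome.
    `parse` is any function returning a parse result, or None for a
    rejected sentence (a hypothetical interface, for illustration only)."""
    start = time.monotonic()
    result = parse(sentence)
    elapsed = time.monotonic() - start
    if elapsed > timeout_seconds:
        return "failed to return a result in a reasonable time", elapsed
    if result is None:
        return "did not parse", elapsed
    return "parsed in %.1f seconds" % elapsed, elapsed

# Toy parser that accepts only sentences of at most 14 words, mimicking
# the 12-14 word limit under discussion (illustration only).
def toy_parse(sentence):
    words = sentence.split()
    return words if len(words) <= 14 else None

verdict, _ = try_sentence(toy_parse, "Even minor repairs take months.")
```

Running many such sentences through the same harness is what makes the "percentage that parsed" comparable across parsers.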
No doubt the particular problems causing trouble with these sentences can be fixed quickly (any parser which properly separates parse engine from rule base should be easy to modify quickly), but the percentage of sentences that parsed suggests that there's a fair bit of work left to be done here.

From bralich at HAWAII.EDU Fri Jan 3 20:32:09 1997 From: bralich at HAWAII.EDU (Philip A. Bralich, Ph.D.) Date: Fri, 3 Jan 1997 10:32:09 -1000 Subject: FW: Parser Challenge Message-ID:

At 08:11 AM 1/3/97 -1000, Paul Deane wrote:

>After reading the recent postings on FUNKNET about the parser challenge, >I went to the Ergo parser site and tried it out. I was particularly >interested since I have worked with the Link Grammar parser extensively, >and other parsers, and so I have a pretty good idea what the state of >the art looks like. > >The functionality built into the Ergo interface is very nice: certainly >it is an advantage, for the purposes of evaluating parsers, being able >to get the grammatical analysis output directly in a simple and >easily understood format. And such functionality as getting >transformational variants of sentences (especially question-answer >pairs) is of obvious commercial benefit. (Though there are certainly >other sites with such functionality. Usually, though, that is something >built for a particular application on top of a parser engine, rather >than being built into the parser. It would be nice as a standard parser >feature though.)

This is the main point of our challenge. We chose these criteria because they demonstrate to anyone the ability to do the basic tasks that underlie any real-world parsing job: name the part of speech, part of sentence, tense, sentence type, internal clauses, and so on. Merely claiming these abilities, or making them visible only to those who know the theory, is really not enough.
>Leaving that aside, I found the performance of the Ergo parser >As a practical test, I took one of the emails sent out from Ergo, and >tried variants of the sentences in it. By doing this, I avoided the trap >of trying simple garden-variety "example sentences" (which just about >any parser can handle) in favor of the variety of constructions you can >actually get in natural language text. But I reworded it slightly where >necessary to eliminate fragments and colloquialisms and to get it into >the 12-14 word length limit. That meant in most cases I had to try a >couple of variants involving parts of sentences, since most of the >sentences in the email were over the 12-14 word limit.

This is a somewhat odd set of sentences to begin with, though not completely unfair. We are suggesting that the problem in parsing is that most people are not handling anything properly. That is, most cannot handle the analysis of small or medium sentences properly. So while the sentences you put in may be at our current upper length limit (partially because our dictionary is only 60,000 words in size), we still have no idea whether any other parser can do a full parse of small and medium sentences. The point of the challenge is to establish very tough criteria and then work with them from smaller to medium to larger sentences. The sentences input in this test will be working in just a few weeks, but no other parser meets our challenge for small or medium size sentences. We need to look at all parsers for all these criteria, from small to large. By the way, our current development will allow us to take large steps forward every two months for the next year. After that we should level out.

The main points being this:

1. All parsers should be held to the task of labelling parts of speech, parts of the sentence, sentence type, and tense and voice, as well as being able to manipulate strings: change actives to passives, statements to questions, and so on. This, after all, is what parsing is.
Creating trees is a preliminary step toward formulating these generalizations about the syntax of the language you are analyzing.

2. These criteria should be held for small, medium, and large sentences.

3. As our parser improves, we will hold to these criteria for all size sentences.

4. As it is, only our parser can do all this for sentences of ANY size. The claims of other parsers are merely assertions until they provide these functions on a web site that all can see.

Phil Bralich

Philip A. Bralich, Ph.D. President and CEO Ergo Linguistic Technologies 2800 Woodlawn Drive, Suite 175 Honolulu, HI 96822 Tel: (808) 539-3920 Fax: (808) 539-3924

From cumming at HUMANITAS.UCSB.EDU Fri Jan 3 21:13:45 1997 From: cumming at HUMANITAS.UCSB.EDU (Susanna Cumming) Date: Fri, 3 Jan 1997 13:13:45 -0800 Subject: Parsing Message-ID:

Folks, I'm somewhat surprised that this parsing discussion is taking place on Funknet. Analysis of isolated "sentences" into labeled trees without reference to contextual factors is something that human beings never do (in fact we never even encounter such objects), so why should we be interested as functional linguists in whether a computer can do it? Paul Deane's point that the range of constructions encountered in everyday language use is still much wider than has yet been accommodated in any actual grammatical description, on or off-line, is well taken. Moreover, he takes his examples from written language; far greater yet is the range encountered in everyday informal speech -- the only universal form of language use. This is the real test if we are interested in language processing as scientists rather than as software engineers.
While I for one am certainly in favor of computer modelling of natural language production and understanding as a tool for testing our hypotheses about the way contextual factors and linguistic form interact, for me such attempts are only interesting to the extent that they reflect actual human behavior in natural communication situations. This is not, of course, to denigrate the commercial potential of sentence-level parsing, a matter the marketplace can decide. Susanna Cumming

From dryer at ACSU.BUFFALO.EDU Sat Jan 4 09:21:50 1997 From: dryer at ACSU.BUFFALO.EDU (Matthew S Dryer) Date: Sat, 4 Jan 1997 04:21:50 -0500 Subject: parsing Message-ID:

I second Susanna Cumming's surprise that this parsing discussion is taking place on Funknet, as well as her other comments, but want to add a few additional comments. Quite apart from the major issue of context, there are a number of other ways in which the discussion seems a number of steps removed from what people do when they parse sentences. The notion that the output of a parse is a syntactic tree is odd. The "real" output is some sort of meaning. It may very well be that parsing the sentence syntactically plays a major role in allowing people to determine the meaning, but that doesn't mean that an entire tree is produced in the process. Many parsers for computer programming languages parse computer programs syntactically, in that they identify the syntactic structure, but this is only because of the extent to which certain aspects of meaning are associated with syntactic structure, and identifying these aspects of syntactic structure is crucial in determining the intended meaning. But even when they do, they do not construct syntactic trees for the program; they construct a representation of the meaning. Parsing is only of interest if the output is some sort of meaning.
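The point about programming-language parsers can be made concrete: a parser can consume syntax and emit "meaning" directly, without ever materializing a tree. A minimal sketch for arithmetic expressions (my own illustration, unrelated to either parser under discussion), where recursive descent produces the value itself rather than a parse tree:

```python
import re

def evaluate(expression):
    """Parse and evaluate +, *, and parentheses by recursive descent,
    producing the value (the 'meaning') directly -- no tree is ever built."""
    tokens = re.findall(r"\d+|[()+*]", expression)
    pos = 0

    def peek():
        return tokens[pos] if pos < len(tokens) else None

    def eat():
        nonlocal pos
        tok = tokens[pos]
        pos += 1
        return tok

    def atom():
        if peek() == "(":
            eat()                # "("
            value = expr()
            eat()                # ")"
            return value
        return int(eat())

    def term():                  # products bind tighter than sums
        value = atom()
        while peek() == "*":
            eat()
            value *= atom()
        return value

    def expr():
        value = term()
        while peek() == "+":
            eat()
            value += term()
        return value

    return expr()
```

Here the syntactic structure is identified only in passing, as a means of computing the result, which is exactly the relationship between structure and meaning being argued for.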
I thus take issue with the claim

>>All parsers should be held to the task of labelling parts of
>>speech, parts of the sentence, sentence type, and tense and
>>voice as well as being able to manipulate strings: change
>>actives to passives and statements to questions and so on.
>>This after all is what parsing is.

This has little to do with what parsing is, if by parsing we are referring to something that people do. Speakers of any language can parse sentences in their language without being able to label parts of speech or other grammatical features. Nor do they need to be able to change actives into passives. What they need to be able to do is parse active and passive sentences and come up with the same denotative meaning. Being able to change actives to passives is not necessary for this. Nor is this just a terminological issue about what "parsing" is. If part of the test of a syntactic theory is its ability to parse sentences, then part of the test of a syntactic theory is how successful it is at assigning meanings in context, either as part of the theory itself, or in terms of its interaction with other systems. Thus the idea that parsing simply involves producing trees is reminiscent of the sort of modular view of syntax that most functionalists reject. When someone tells me about a web site at which one can have conversations with a computer program attached to a knowledge database, even if the area of knowledge is very limited, and the vocabulary very limited, and only simple syntactic structures permitted, then I'll be interested.

Matthew Dryer

From ellen at CENTRAL.CIS.UPENN.EDU Mon Jan 6 02:38:42 1997
From: ellen at CENTRAL.CIS.UPENN.EDU (Ellen F. Prince)
Date: Sun, 5 Jan 1997 21:38:42 EST
Subject: parsing
In-Reply-To: Your message of "Sat, 04 Jan 1997 04:21:50 EST."
Message-ID:

i'm quite perplexed by matthew dryer's claim that people don't parse they just interpret... how do you envision going from the phonological or graphological (?)
string to an interpretation without parsing? what about ambiguous sentences like:

they can fish
they saw the man with the telescope

or the garden path types like:

the horse raced past the barn fell

or in fact any other sentence you can think of... obviously, no one assumes that people need to be able to consciously label items with parts of speech etc -- but clearly they know what's what -- and they would have to come out with SOMETHING that would be formally equivalent to a syntactic parse, it would seem. no??? btw, i'm in no way offering this as support for the parser hype we've just seen!

From cleirig at SPEECH.SU.OZ.AU Mon Jan 6 03:19:26 1997
From: cleirig at SPEECH.SU.OZ.AU (Chris Cleirigh)
Date: Mon, 6 Jan 1997 14:19:26 +1100
Subject: parsing
Message-ID:

The recent discussion on parsing suggests the possibility that "new functional is but old formal writ large" in America.

chris

From dryer at ACSU.BUFFALO.EDU Mon Jan 6 04:57:57 1997
From: dryer at ACSU.BUFFALO.EDU (Matthew S Dryer)
Date: Sun, 5 Jan 1997 23:57:57 -0500
Subject: Response to Ellen Prince
Message-ID:

Ellen Prince says

>>i'm quite perplexed by matthew dryer's claim that people don't parse
>>they just interpret

I did not intend to imply this. I do believe that people parse, and that part of the process involves identifying syntactic structure. My point is that the process of parsing involves the identification of syntactic structure only as a means for determining the meaning, that the identification of syntactic structure per se is of no value as an end in itself, and that the real test of a parser is its ability to identify meaning in context, something that can only be tested by incorporating it in a system that can engage in conversation. In fact, not only do I believe that part of the process of parsing involves the identification of syntactic structure, but I believe that that is why languages have syntax.
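[The structural ambiguity in Prince's examples can be made concrete with a toy grammar. The grammar and the counting routine below are invented for illustration (they are not from any parser in this thread): a CYK-style dynamic program counts the distinct parse trees a binary grammar licenses, and the classic PP-attachment sentence comes out with exactly two analyses -- seeing with the telescope vs. the man with the telescope.]

```python
from collections import defaultdict

# Toy CNF grammar for the classic PP-attachment ambiguity (illustrative only).
LEXICON = {
    "they": {"NP"}, "saw": {"V"}, "the": {"Det"},
    "man": {"N"}, "telescope": {"N"}, "with": {"P"},
}
BINARY = [  # (parent, left child, right child)
    ("S", "NP", "VP"),
    ("VP", "V", "NP"),
    ("VP", "VP", "PP"),   # PP attaches to the verb phrase...
    ("PP", "P", "NP"),
    ("NP", "Det", "N"),
    ("NP", "NP", "PP"),   # ...or to the noun phrase: the ambiguity.
]

def count_parses(words, start="S"):
    """CYK-style dynamic program counting distinct parse trees."""
    n = len(words)
    # chart[(i, j)][A] = number of ways category A derives words[i:j]
    chart = defaultdict(lambda: defaultdict(int))
    for i, w in enumerate(words):
        for cat in LEXICON[w]:
            chart[(i, i + 1)][cat] = 1
    for span in range(2, n + 1):
        for i in range(n - span + 1):
            j = i + span
            for k in range(i + 1, j):
                for parent, left, right in BINARY:
                    l, r = chart[(i, k)][left], chart[(k, j)][right]
                    if l and r:
                        chart[(i, j)][parent] += l * r
    return chart[(0, n)][start]

print(count_parses("they saw the man with the telescope".split()))  # prints 2
```

[Whether a human interpreter builds anything like this chart is, of course, exactly what the thread is debating; the point here is only that the ambiguity is a formal property of the string plus grammar.]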
Some functionalists question the reality of syntax, largely, I suspect, because they don't see what function it would serve. Under my view, syntax makes the process of interpreting sentences easier than it otherwise would be, because the identification of syntactic structure assists in identifying the meaning. While I believe that pragmatic inference also plays a very significant role in interpretation, syntax allows some of the task of interpretation to be done in a simpler, more automated fashion, reducing what might otherwise be an overwhelming demand on pragmatic inference.

Matthew Dryer

From lakoff at COGSCI.BERKELEY.EDU Mon Jan 6 06:07:25 1997
From: lakoff at COGSCI.BERKELEY.EDU (George Lakoff)
Date: Sun, 5 Jan 1997 22:07:25 -0800
Subject: Response to Ellen Prince
Message-ID:

If this were taking place on cogling, the discussion would be rather different. Given advances in cognitive semantics, a cognitive grammar (according to Langacker, myself, and others, though not all cognitive linguists) uses no formal syntax at all. Rather it contains constructions that are direct pairings between aspects of cognitive semantics and their phonological expression in a given language. The old syntactic categories are semantic categories (in some cases radial categories). The old-fashioned syntactic hierarchical structure is semantic hierarchical structure, and so on, as discussed in the various cognitive grammar literature. Since not everything has been analyzed in these terms yet, one cannot claim that we know this can be done, and some folks are more optimistic than others. But the question would be how to handle recalcitrant cases. Most people would agree that a "parse" yields an embodied cognitive semantic characterization of meaning as an output.
In the evolving field of neural cognitive semantics (coming out of ICSI), we would have more stringent criteria for a "parser" -- that it be neurally realistic, done in structured connectionism, obey the hundred-step rule, be performable in real time, be learnable, have an embodied semantics, be able to deal with blends (a la Fauconnier and Turner) and with metaphor, use plausible neural binding techniques, be able to deal with garden path sentences, be able to derive correct contextually appropriate inferences, and on and on. The general point, of course, is that what a "parse" is depends on the field you're in and on what assumptions you're making about what linguistics is. For this reason, a parser challenge doesn't make much sense unless everybody agrees on their theoretical assumptions, which doesn't seem to be the case in the funknet group. This is not to denigrate anybody's "parsing" efforts given whatever assumptions about linguistics they happen to like. There are few enough linguists and few enough people working on serious parsing efforts from whatever theoretical perspective that we ought to welcome all efforts, however diverse in theoretical perspective they seem to be. However, since this is not happening on cogling, I'll leave it at my two cents and yield the discussion to core funknetters. Happy New Year To All!
George

From Carl.Mills at UC.EDU Mon Jan 6 13:50:30 1997
From: Carl.Mills at UC.EDU (Carl.Mills at UC.EDU)
Date: Mon, 6 Jan 1997 08:50:30 -0500
Subject: Reply to Matthew Dryer's reply to Ellen Prince
Message-ID:

In his reply to Ellen Prince Matthew Dryer says "My point is that the process of parsing involves the identification of syntactic structure only as a means for determining the meaning, that the identification of syntactic structure per se is of no value as an end in itself, and that the real test of a parser is its ability to identify meaning in context, something that can only be tested by incorporating it in a system that can engage in conversation." Whenever anyone uses the word *meaning* in a serious linguistic discussion, I put my hand on my wallet. Back in the 1920s, Ogden and Richards wrote an entire book called *The Meaning of Meaning* in which they concluded, if I remember correctly, that they thought maybe they didn't know. In other words, the word *meaning* has a meaning that is both so broad and so vague as to render *meaning* well nigh empty of empirical content. If we are going to use meaning to decide issues relating to syntax, including whether syntax exists, we need to agree on what we mean by *meaning*.

Carl Mills

From nuyts at UIA.UA.AC.BE Mon Jan 6 14:33:29 1997
From: nuyts at UIA.UA.AC.BE (Jan.Nuyts)
Date: Mon, 6 Jan 1997 15:33:29 +0100
Subject: updated announcement bookseries
Message-ID:

*** Call for unpublished manuscripts ***
- monographs or collected volumes -
*** for a new book series ***

HUMAN COGNITIVE PROCESSING
An interdisciplinary series on language and other mental faculties

Editors:
Marcelo Dascal (Tel Aviv University)
Raymond Gibbs (University of California at Santa Cruz)
Jan Nuyts (University of Antwerp)

Editorial address:
Jan Nuyts
University of Antwerp, Linguistics (GER)
Universiteitsplein 1
B-2610 Wilrijk, Belgium
e-mail: nuyts at uia.ua.ac.be

Editorial Advisory Board: Melissa Bowerman (Psychology, MPI f.
Psycholinguistics); Wallace Chafe (Linguistics, Univ. of California at Santa Barbara); Philip R. Cohen (AI, Oregon Grad. Inst. of Science & Techn.); Antonio Damasio (Neuroscience, Univ. of Iowa); Morton Ann Gernsbacher (Psychology, Univ. of Wisconsin); David McNeill (Psychology, Univ. of Chicago); Eric Pederson (Cogn. Anthropology, MPI f. Psycholinguistics); François Recanati (Philosophy, CREA); Sally Rice (Linguistics, Univ. of Alberta); Benny Shanon (Psychology, Hebrew Univ. of Jerusalem); Lokendra Shastri (AI, Univ. of California at Berkeley); Dan Slobin (Psychology, Univ. of California at Berkeley); Paul Thagard (Philosophy, Univ. of Waterloo).

Publisher: John Benjamins Publishing Company, Amsterdam/Philadelphia

Aim & Scope: HUMAN COGNITIVE PROCESSING aims to be a forum for interdisciplinary research on the cognitive structure and processing of language and its anchoring in the human cognitive or mental systems in general. It aims to publish high-quality manuscripts which address problems related to the nature and organization of the cognitive or mental systems and processes involved in speaking and understanding natural language (including sign language), and the relationship of these systems and processes to other domains of human cognition, including general conceptual or knowledge systems and processes (the language and thought issue), and other perceptual or behavioral systems such as vision and non-verbal behavior (e.g. gesture). `Cognition' and `Mind' should be taken in their broadest sense, not only including the domain of rationality, but also dimensions such as emotion and the unconscious.
The series is not bound to any theoretical paradigm or discipline: it is open to any type of approach to the above questions (methodologically and theoretically) and to research from any discipline concerned with them, including (but not restricted to) different branches of psychology, artificial intelligence and computer science, cognitive anthropology, linguistics, philosophy and neuroscience. HUMAN COGNITIVE PROCESSING especially welcomes research which makes an explicit attempt to cross the boundaries of these disciplines. PLEASE SEND IN A RESUME BEFORE SUBMITTING THE FULL MANUSCRIPT

*****
Jan Nuyts                    phone: 32/3/820.27.73
University of Antwerp        fax: 32/3/820.27.62
Linguistics                  email: nuyts at uia.ua.ac.be
Universiteitsplein 1
B-2610 Wilrijk - Belgium

From nick at STL.RESEARCH.PANASONIC.COM Mon Jan 6 19:20:27 1997
From: nick at STL.RESEARCH.PANASONIC.COM (Nicholas Kibre)
Date: Mon, 6 Jan 1997 11:20:27 -0800
Subject: collecting email
Message-ID:

Hi all; I'm working on a project which requires me to analyze text patterns in email, and am trying to put together a corpus. If anyone out there would be interested in contributing some old messages sitting in their mailbox, please forward them to: stltalk at stl.research.panasonic.com Somewhere a few months down the road, I should be able to report the results of this project. Anyway, any contributions would be much appreciated!
Happy new year,

Nick Kibre
UC Santa Barbara Linguistics and Panasonic Speech Tech Lab
nick at stl.research.panasonic.com

From TGIVON at OREGON.UOREGON.EDU Tue Jan 7 03:31:02 1997
From: TGIVON at OREGON.UOREGON.EDU (Tom Givon)
Date: Mon, 6 Jan 1997 19:31:02 -0800
Subject: Get real, George
Message-ID:

1-7-97

Dear FUNK people,

I was going to hold my peace on the parser issue, which I suspect has provided all of you with much merriment, having noted that Matt Dryer basically got it right: Neither the extremist position of the Syntax-for-the-love-of-syntax Chomskyites, nor the extremist position of the syntax-doesn't-exist it's-all-lexicon-and-discourse Functionalists is really consonant with the facts of language as we all know them. In other words: Yes, Virginia, there is syntax. But no, Virginia, it is not there as an autonomous, non-adaptive flower of a genetic megamutation. Honest, I was going to bite my tongue and leave be this time -- till I read George Lakoff's condescending, gratuitous intervention. Even then, I would have still preferred that someone else do the honors. But then it dawned on me that perhaps I am in a better position than most to call George's bluff. You see, I've been watching George in action for thirty years now (La Jolla, Spring of 1967, "Is deep structure necessary?" -- boy, the years sure fly). And over the years I have seen George get away with similar feats by remarkably similar means. That is, by insinuating that somehow the rest of y'all ignorant slobs better join his elect group of with-it cognoscenti or else you'll miss the (latest) boat. So let us see what it would mean to the many of us who have been looking at grammar/syntax from a variety of perspectives if it now turned out that syntax does not really exist. And that at any rate, it plays no functional role in mediating between the cognitive-lexical-communicative levels and the phonetic output. To wit:

1.
GRAMMATICALIZATION AND SYNTACTIC CHANGE: We have been describing how pre-grammar or non-grammar (lexicon cum parataxis) changes into grammar (syntax, morphology), how lexical items grammaticalize into morphology, how clauses that used to come under separate intonation contours somehow condense themselves under a single contour. We thought we were studying something real, poring over successions of older texts in arcane languages, wrestling with internal reconstruction in unwritten languages. But we've been deluding ourselves all along -- says George. Forget it, good non-with-it FUNK folks. It's not really there. And if it is, no matter, it DOES NOTHING.

2. ACQUISITION: We have been describing how children move gradually from pre-syntactic (pre-grammatical) communication to grammaticalized communication, adding a morpheme here, an embedded construction there; stabilizing rigid grammatical(ized) word-order where previously only semantically-based (AGT oriented) or pragmatically-based (TOPIC oriented) word-order could be found; gaining embedded constructions, gaining de-transitive clauses, gaining fancy subject-inversion with auxiliaries; etc. etc. Forget it, folks -- says George. It's been all for naught; what you've been studying is plainly a mirage. And just in case it did exist, it doesn't matter. Because -- it turns out -- it serves no function whatever. "We", the cognoscenti, have already "demonstrated" that we "can do it all" without grammar. Nice try, George. But how come the kids are still insisting on doing what they're doing? Are they, like us, deluded too?

3. PIDGINS AND CREOLES: We have been observing for a long time the peculiar consequences of having no morpho-syntax; that is, of pre-grammatical (pidgin) communication. The halting, repetitious, error-prone, frustrating communicative mode of pidgin is familiar to many of us from early childhood studies. Doesn't Sue Ervin-Tripp work at UC Berkeley? Doesn't Dan Slobin?
Isn't Ron Scollon's dissertation still available? Or Liz Bates? Or Eli Ochs' early works? Or Developmental Pragmatics (1979)? How about the extensive literature on Broca's aphasia communication? Lise Menn's/Loraine Obler's magnificent 3-volume collection? And isn't the data of second language pidgin available? What is it that makes grammaticalized language -- such as the Creoles of children of Pidgin-speaking parents -- so much more fluent, fast-flowing, streamlined? What has been subtracted between the Creole and the Pidgin? According to George, nothing. But just in case it was something, forget it too. It serves no purpose. "We" can do without it. Well, as a person who has gone through the agony of moving from pidgin to grammaticalized communication five distinct times (and hated it with passion every time...), I'd like to know why nobody had ever told me that I was wasting my precious time? That I was much better off mapping directly from cognitive to phonetic structure? Might have saved me years of toil and agony.

4. CROSS-LANGUAGE TYPOLOGICAL VARIATION: We have been observing how the very same cognitive-semantic-communicative function can be executed in different languages by a (relatively small, mind you) number of syntactically distinct constructions. We've also noted that those constructions represent distinct diachronic pathways of grammaticalization. We've seen this with complementation, with relative clauses, with passives, inverses, anti-passives, clause-chaining types, tense-aspect-modal systems, negation -- you bloody name it, we've been observing it. But, sorry, you simpleton FUNK folks. George says that -- it turns out -- what y'all been documenting so laboriously is -- right, folks -- nothing. And if indeed it is still something, never mind; because you see, it is there for no purpose whatever. "We" can do without it.

5. DISCOURSE: Here's the real bad news, folks.
You've been studying for 30 years now how syntactic/grammatical constructions have specific communicative functions paired with them, systematically, intimately. What a colossal waste of energy. You see -- it turns out, George says -- that morpho-syntax doesn't really exist. What you should have been really studying all along, it turns out (you hear this -- Wally? Liz? Sandy? Russ? Jack? Barbara? Matt? John? Bob? Anna?) is how communicative function maps directly onto phonetic structure. Directly, folks, directly.

6. COGNITIVE PSYCHOLINGUISTICS: Forget it, Russ Tomlin; sorry, Morti Gernsbacher; your loss, Brian MacWhinney; butt out, Walter Kintsch; you're past, Tony Sanford; get lost, Liz Bates. All your labor has been for naught. George has decreed your experiments null and void; whatever it is you're studying just doesn't exist. Or worse, you simpleton fukn-folks, it is there for no purpose.

6. NEUROLOGY: We all know localization is complicated, that grammar in adults is distributed across many "modules". Sure, the modules bear little resemblance to their Jerry Fodor namesakes. They interact, they talk to each other, they are NOT encapsulated, they collaborate with "cognitive" modules (attention, activation, memory, intention, pragmatic zooms, etc.). But however widely distributed, portions of this complex mechanism can be knocked out selectively by lesions. What is it, George, that aphasics have lost, exactly? You study their transcribed discourse (Menn and Obler, eds. 1990, e.g.), and you notice that (i) the lexicon is there: nouns, verbs, adjectives; (ii) the coherence of discourse is still there (referential coherence, temporal coherence, all the measurables). So what is it that is NOT there? We used to think it was morphology and syntactic constructions. But it turns out, George now says, no go. Whatever we thought it was is not really there, never really was to begin with. And if for some reason it turns out it was, still no matter; it performed no function.
So, one wonders, how come the poor slobs in the wards are having such a hard time stringing lexical items together into clauses and clauses into discourse? The most gratuitous insult, I must confess, is George's reference to "neural networks" and connectionism. This is a rather poor substitute for real neurology, which is vast, complex, frustrating, and cannot be practiced by ersatz experts in search of the latest fad. When I ask the real neurologists I know what they think of connectionism, I get a response of incomprehension. Never heard of it. Con what? So the exhortation to join the bandwagon before it leaves the station and we're stranded for good rings rather hollow. Especially this undignified business of "in real time". So, if nobody else has yet, I guess I must tell George that the "thing" that makes it possible for humans to process language at the rate of, roughly, 250 msecs per word and 1-3 seconds per idea (clause, proposition, event/state frame, intonation unit...) is called grammar/syntax.

7. EVOLUTION: I have saved this one for last, since in some funny way it remains the crux of the matter. Here's the real puzzle: Why should this manifestly existing, acquired, diachronically-changing, cognitively manipulable, neurologically-based entity called "grammar/syntax", with its complex, imperfect but nonetheless clearly manifest ICONICITY to semantic and pragmatic function(s) -- why should it ever evolve? With its marvelous hierarchic design, parts fitting into larger parts; why should this extravaganza ever evolve in the first place? According to George, the extravagance doesn't really exist. Presumably then, there's no difference between lexicon and morphology (forget what you've been observing, you grammaticalization hounds); no difference between parataxis and syntax (again, forget your puny facts, you poor unenlightened souls, you un-cogged simpletons). But, just in case it did exist -- never mind.
It's there for no reason, it just happened to evolve, somehow, for the love of God or Descartes. For those of us with ears that are yet undulled by the clamor to be with-it, the strange ghost of deja-vu is now creeping in: Hey, but this is what Chomsky has been saying all along about language evolution -- that it is a mysterious saltation, unguided by adaptive (communicative, cognitive) behaviour. I have probably said too much already. I always live to regret getting involved in these silly affairs. But I think if there is one lesson to be learned from this, it is perhaps that nothing comes cheap in real science. You can't do complex biologically-based science on the fly. If one intends to talk only to God or oneself or the Elect, that's certainly one's privilege. But if you want to be taken seriously, well, get serious first. Like, get real.

Happy New Year y'all,
TG

From bresnan at CSLI.STANFORD.EDU Tue Jan 7 04:16:04 1997
From: bresnan at CSLI.STANFORD.EDU (Joan Bresnan)
Date: Mon, 6 Jan 1997 20:16:04 -0800
Subject: Get real, George
In-Reply-To: Your message of Mon, 06 Jan 1997 19:31:02 PST. <01IDWK5CL2KI935AF8@OREGON.UOREGON.EDU>
Message-ID:

TG: Bravo! Normally I just lurk here, but I can't help observing how much of what TG says in defense of grammar/syntax and its rich functionality I agree with.

--Joan

From dever at VERB.LINGUIST.PITT.EDU Tue Jan 7 13:33:29 1997
From: dever at VERB.LINGUIST.PITT.EDU (Daniel L. Everett)
Date: Tue, 7 Jan 1997 08:33:29 -0500
Subject: Get real, George
In-Reply-To: <199701070416.UAA21681@Turing.Stanford.EDU>
Message-ID:

I must agree completely with Joan's evaluation of TG's posting. Absolutely a great posting. I find Tom's position one of the most attractive in linguistics -- one that Pike and Hockett and others argued for convincingly as well. Language is a form-function composite (to use a phrase of Pike's). Both the form and function parts are rich.
DLE

******************************
Dan Everett
Department of Linguistics
University of Pittsburgh
2816 CL
Pittsburgh, PA 15260
Phone: 412-624-8101; Fax: 412-624-6130
http://www.linguistics.pitt.edu/~dever
******************************

From PDeane at DATAWARE.COM Tue Jan 7 14:31:00 1997
From: PDeane at DATAWARE.COM (Paul Deane)
Date: Tue, 7 Jan 1997 09:31:00 -0500
Subject: Get real, George
Message-ID:

Whoa, folks! Why do we have to get polarized here? Call me naive if you like, but I didn't see George's post as denigrating functional work. And I certainly don't see putting words into his mouth as fair. He specifically said what things he would like to see. Granted, it's not what a lot of people on FUNKNET want to focus on, but does that turn it automatically into an attack? Do we have to base our discussions on personalities? More to the point, if people feel they have to take such a post as an attack, it would help if the discussion were moved to a level that would generate light and not heat. I have a certain stake in the matter, since I've written a book couched in a "cognitive" framework (Grammar in Mind and Brain: Explorations in Cognitive Syntax, Mouton de Gruyter, 1993) in which I refer to and attempt to incorporate practically everything on Tom Givon's list ... aphasia ... experimental psycholinguistics ... cross-linguistic typological hierarchies ... lots of things. I don't see a contradiction. And I don't see the point of making this into a zero-sum game. I've seen enough flame discussions elsewhere. Please, let's not have one here.

From TWRIGHT at ACCDVM.ACCD.EDU Tue Jan 7 15:39:20 1997
From: TWRIGHT at ACCDVM.ACCD.EDU (Tony A. Wright)
Date: Tue, 7 Jan 1997 09:39:20 CST
Subject: Get real, George
Message-ID:

Paul Deane wrote:

> I've seen enough flame discussions elsewhere. Please, let's not have one
> here.
I agree that flames per se are not helpful, but I was quite thrilled, to be honest, with the actual discussion of a linguistic issue on FUNKNET after months of nothing but conference announcements. The only thing that kept me from unsubscribing was that the low volume of mail made my FUNKNET subscription easy to forget. I have been trying to get a similar sort of discussion (but more civil) going on the GB2MP list for Chomskyan syntax, which I moderate. It has been an uphill battle, but we have managed to have some very interesting and substantive back-and-forth about linguistic issues (as opposed to calls for papers and conference announcements). Our list traffic remains fairly light, however. I was thrilled to see the merits of parsers (and more thrilled still the autonomy of syntax) discussed in this forum. I'd like to do anything I can to encourage an on-line scholarly dialogue between linguists (within norms of civility).

Tony Wright
Moderator, GB2MP, a list for issues in Chomskyan syntax
To subscribe, send the message: subscribe gb2mp
to the address: majordomo at colmex.mx

From jrubba at HARP.AIX.CALPOLY.EDU Tue Jan 7 18:43:32 1997
From: jrubba at HARP.AIX.CALPOLY.EDU (Johanna Rubba)
Date: Tue, 7 Jan 1997 10:43:32 -0800
Subject: Get real, George
In-Reply-To: Message-ID:

I've been enjoying the back-and-forth of this discussion, but could someone please enlighten me as to why I am getting all these messages twice?? As all you busy people know, every second counts -- including the 2-3 secs. it takes to delete that extraneous repeat message ;-) Happy New Year all; this list has awakened with a bang!

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Johanna Rubba
Assistant Professor, Linguistics
English Department, California Polytechnic State University
San Luis Obispo, CA 93407
Tel.
(805)-756-2184
E-mail: jrubba at oboe.aix.calpoly.edu
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

From lakoff at COGSCI.BERKELEY.EDU Tue Jan 7 20:00:40 1997
From: lakoff at COGSCI.BERKELEY.EDU (George Lakoff)
Date: Tue, 7 Jan 1997 12:00:40 -0800
Subject: Biological and cognitive realism
Message-ID:

Dear Tom,

I'll start with something I forgot to mention. Dan Jurafsky's parser (see his Berkeley thesis) is set up to accord with the psycholinguistic data on language processing. It uses constructions that include semantic and pragmatic information. Those not familiar with Dan's work should be. In addition, there's going to be a conference at ICSI at Berkeley after Cogsci next August on psycholinguistically-based language processing. Dan Jurafsky and Terry Regier are running it. The point is that there is a community of computational linguists who have higher standards than just getting tree parses.

Now back to your posting. Tom, I said grammar does exist. As has Langacker. What we say is that it exists as constructional pairings of cognitive semantics and phonological representations (word order included). The exact nature of grammar is an empirical question. We argue our position empirically. See, for instance, my CLS 86 paper on the frame-semantic control of the coordinate structure constraint, where I argue that there is no autonomous syntactic coordinate structure constraint. If you want some details, take a look at Ron's Foundations of Cognitive Grammar (about 1000 pages of details), as well as the rest of the literature in the field. If you want a real short intro, start with Ron's Concept, Image and Symbol. Also, I can't imagine how you could possibly have read the last 120 pages of Women, Fire and Dangerous Things -- the most extensive study of there-constructions ever done by far -- and still think I don't believe in grammatical constructions. Try reading it for evidence for constructions and against autonomous syntax.
What I said was that AUTONOMOUS syntax didn't exist. That's a very, very different claim from the claim that grammar does not exist, which would be ridiculous. It seems to be obviously true that syntax is not purely autonomous. If syntax were autonomous, it could take into account no input from anything else, such as meaning, general cognition, perception, processing considerations, etc. In short, it would be as Chomsky has always represented it: a grammar box with no input, only output. This is necessary if Chomsky's basic metaphor is to be accepted, namely, that a sentence is a sequence of formal symbols, a language is a set of such sequences, and grammars are devices for generating such sets. The theory of formal grammars requires that rules be stated only in terms of symbols in the grammar. In short, no external input that is not in the formal language can be looked at by the rules of a formal grammar. Such a view, if it is to be instantiated in the brain, would require a brain module (or some complex widely distributed neural subnetwork) WITH NO INPUT! But there is nothing in the brain with no input. Such a view is biologically impossible. What we have proposed instead is that grammar is based on other nonlinguistic cognitive abilities and that neural connections and bindings bring about grammar. We try to show exactly how. I suggest you read the discussion by Gerald Edelman in Bright Air, Brilliant Fire called "Language: Why The Formal Approach Fails", starting on p. 241, for a neuroscientist's view of the issue. Edelman is director of the Neuroscience Institute at Scripps in La Jolla.

Next, I was not attacking functionalists at all. I happen to like and teach much functionalist research. I think functionalism has contributed a great deal. Indeed, I was not attacking anybody. As I said, I think there are few enough linguists doing serious research in any school, and despite theoretical differences, lots of folks have lots of things to teach us.

>1.
GRAMMATICALIZATION AND SYNTACTIC CHANGE: Cognitive linguists have been working on grammaticalization and syntactic change for quite a while. But it requires a nonautonomous view of grammar. Traugott has written for decades on the need for pragmatics and metonymy in accounting for grammaticalization. Kemmer has argued overwhelmingly that semantics (of a cognitive rather than formal nature) is required for grammaticalization. Heine has been arguing that conceptual metaphor is necessary to understand grammaticalization. That has been confirmed in great detail by Sarah Taub, in one of the most extensive studies of grammaticalization to date -- on Uighur. Any serious look at the grammaticalization literature in recent years will see the role of cognitive semantics -- especially metonymy and metaphor -- in grammaticalization.

>2. ACQUISITION:

As Dan Slobin has shown, semantics precedes the acquisition of much of grammar. As for the acquisition of semantics, there I suggest you read Terry Regier's THE HUMAN SEMANTIC POTENTIAL, MIT Press, 1996. Regier's acquisition model shows that the learning of spatial relations concepts and terms (not counting 3D and force-dynamics) in a significant range of the world's languages can be accomplished on the basis of structured neural models -- with the structure given by models of neural structures known to exist in the brain: topographic maps of the visual field, orientation-sensitive cells, center-surround receptive fields, and so on. The point is that NONLINGUISTIC aspects of brain structure can be used to learn conceptual and linguistic elements. Regier also shows how this can be done with NO NEGATIVE INPUT. Regier did his work in our (Feldman's and my) group several years ago. For recent work on related topics from the ICSI Lzero group, check out the Lzero website at icsi.berkeley.edu/lzero. Other topics have included the learning of verbs of hand motion, aspect, and metaphor.
The point is to do this in a way that is biologically and cognitively realistic. > >3. PIDGINS AND CREOLES: We have been observing for a long time the >peculiar consequences of having no morpho-syntax. Ron and I of course recognize morphosyntax. We just give a nonautonomous theory of it. I happen to like most of the work you cited, much of which is not set in terms of autonomous syntax. >4. CROSS-LANGUAGE TYPOLOGICAL VARIATION: As Bill Croft, Suzanne Kemmer, Ron Langacker and others have been writing for years, a linguistics based in cognitive semantics does a better job at getting at cross-linguistic typological variation. Functionalist studies of classifier languages (Colette Craig) and speech act indicators (Nichols and Chafe) show that such typological work cannot be done with an autonomous syntax that does not admit semantic factors. > >5. DISCOURSE: Again, Ron and I support a nonautonomous theory of grammar and of constructions. Our work fits very well with the work you cite by Wally, Sandy, Jack, and others of our friends and colleagues. I can't imagine any of them supporting an autonomous syntax. Indeed, the whole idea of the CSDL (Conceptual Structure, Discourse and Language) conferences was to bring together the cognitive and functionalist approaches into a unified group. The first two conferences were enormously successful. I know you didn't attend either, Tom, but why not come to this year's conference at Boulder? It should be as good as the others. For those who haven't bought it yet, I recommend the proceedings of the first conference, CONCEPTUAL STRUCTURE, DISCOURSE, AND LANGUAGE (edited by Adele Goldberg and published by CSLI publications, distributed by Cambridge University Press). > >6. COGNITIVE PSYCHOLINGUISTICS: Same point again. The Nijmegen group has >been studying universals of spatial relations, which requires a form of >cognitive semantics (not formal semantics or formal syntax). 
Brian and Liz >have been arguing for years against autonomous syntax and the language >box. For an extensive study from cognitive psycholinguistics supporting the results of cognitive semantics, read Ray Gibbs' THE POETICS OF MIND, published by Cambridge University Press. If those experiments don't convince you, I don't know what will. >6. NEUROLOGY: We all know localization is complicated, that grammar >in adults is distributed across many "modules". Sure, the modules bear >little resemblance to their Jerry Fodor name-sakes. They interact, they >talk to each other, they are NOT encapsulated, they collaborate with >"cognitive" modules (attention, activation, memory, intention, pragmatic >zooms, etc.). Exactly the point. No syntax module without input from the rest of the brain, hence no autonomous syntax. But however widely distributed, portions of this complex >mechanism can be knocked out selectively by lesions. What is it, George, >that aphasics have lost, exactly? The Damasios have answered that: Connections. Not localized modules. You study their transcribed discourse >(Menn and Obler, eds 1990, e.g.), and you notice that > (i) the lexicon is there. nouns, verbs, adjectives. > (ii) the coherence of discourse is still there (referential coherence, > temporal coherence, all the measurables). >So what is it that is NOT there? Again, connections. Drop a note to Liz Bates at UCSD for all her many surveys of arguments against the syntax module. Agrammatic patients have been shown to be able to make grammaticality judgments (Linebarger et al.). Liz cites her own work with an agrammatic patient in Italy (a well-educated architect) who could not repeat a grammatical sentence, but could only say one word: the Greek grammatical term for the syntactic phenomenon! The point: It is now well established that agrammatism is not the wiping out of a supposed "syntax module". 
About connectionism, there is a big, big difference between PDP connectionism and structured connectionism. I was explicitly talking about the latter. The former cannot account for most linguistic phenomena. > >7. EVOLUTION: Your remarks about iconicity attest to the inadequacy of autonomous syntax. Iconic constructions require the pairing of form and meaning. That cannot be done in autonomous syntax. For a discussion in my work, see Chapter 20 of Metaphors We Live By. A magnificent study of the role of cognition, especially cognitive semantics, in iconicity is now in progress -- Sarah Taub's dissertation on ASL. If you have never heard Sarah talk on the subject, you should. Invite her up to Oregon as soon as possible! In the course of evolution, layers have been added to the brain. Higher cognition is done in the neocortex, which is furthest from direct bodily input and which takes input from layers closer to bodily input. The study of conceptual metaphor shows that that huge system is grounded in the perceptual and motor system and that abstract concepts tend to be conceptualized in bodily terms. This is just what one would predict given the evolutionary structure of the brain. Tom, your posting was useful because all the topics you mentioned are important, and indeed support the cognitive position (and other work on nonautonomous grammar, like most functionalist work). I'm sorry you didn't understand what I was saying. I've written a lot on the subject, so I thought it would be clear, but maybe it wasn't for those not into the cognitive literature. For that, I apologize. I hope this clarifies the position. I'm also glad that Joan and Dan are really into your work, Tom. I hope that they've been reading other functionalists, and maybe they'll get to the cognitive literature that way too. I agree with Tony Wright. Serious discussion is needed. It should be based on serious reading, of course. Gotta get back to book writing. Take care, Tom, and Happy New Year. 
You're invited for a beer next time you're in Berkeley or I'm in Eugene or if you decide to go to Boulder for CSDL. Let's try to talk things out calmly and in detail. Best wishes, George From lakoff at COGSCI.BERKELEY.EDU Tue Jan 7 20:16:55 1997 From: lakoff at COGSCI.BERKELEY.EDU (George Lakoff) Date: Tue, 7 Jan 1997 12:16:55 -0800 Subject: REMINDER: CSDL 97 Abstract Deadline Message-ID: >Date: Mon, 6 Jan 1997 17:33:33 -0700 (MST) >From: "Laura A. Michaelis" >To: cogling at ucsd.edu >Subject: REMINDER: CSDL 97 Abstract Deadline >Mime-Version: 1.0 >Status: > > >REMINDER REMINDER REMINDER REMINDER REMINDER REMINDER REMINDER REMINDER >*CSDL97*CSDL97*CSDL97*CSDL97*CSDL97*CSDL97*CSDL97*CSDL97*CSDL97*CSDL97* > >The abstract submission deadline for CSDL '97, to take place at the >University of Colorado, Boulder May 24-26, 1997, is: > >JANUARY 17, 1997 > >SEND ABSTRACTS TO: > >CSDL Abstracts >Department of Linguistics CB 295 >University of Colorado >Boulder, CO 80309 > >Email submission is strongly encouraged. SEND *EMAIL* ABSTRACTS TO: > >csdl at babel.colorado.edu > >For further information on the conference and on abstract submission, see >our website: > >http://stripe.colorado.edu/~linguist/CSDL.html > > *** >Laura Michaelis >Dan Jurafsky >Barbara Fox > >CSDL 97 Program Committee From TGIVON at OREGON.UOREGON.EDU Tue Jan 7 20:49:26 1997 From: TGIVON at OREGON.UOREGON.EDU (Tom Givon) Date: Tue, 7 Jan 1997 12:49:26 -0800 Subject: etc. Message-ID: Dear George, Thanks for your gracious response to my (rather) temperamental outburst. It seems like we have zero disagreement. You don't believe that syntax does not exist. I don't believe (never have, as you know) that syntax is autonomous. That is, after all, a criterial feature for functionalists. So if it was all a misunderstanding, let us kiss and make up. I do tend to let my temper fly on occasion. 
I think you're absolutely right, the most important thing is that people keep communicating and exchanging ideas. As you know, I've benefited enormously from your work, and I hope to continue to. I think part of the problem is the creation of "labeled" sub-cultures that then tend to talk primarily to themselves. To me, the label Cognitive Linguistics/Grammar somehow connotes that the rest of us are not cognitively oriented. This is what adjectives do: they tend to imply restriction. As you may have noticed, I have always refrained from putting a label on my work, or incorporating an official group. I continue to believe that what I (and we...) do is just plain linguistics, the unmarked case. I know this is not a politically popular attitude, but I find the proliferation of "labeled" groups (the alphabet soup...) rather undignified. Thanks again for your graciousness, George. And Happy New Year. TG From dquesada at CHASS.UTORONTO.CA Wed Jan 8 01:44:25 1997 From: dquesada at CHASS.UTORONTO.CA (Diego Quesada) Date: Tue, 7 Jan 1997 20:44:25 -0500 Subject: A non-issue of an important issue. In-Reply-To: <4D452461AF6@UCENGLISH.MCM.UC.EDU> Message-ID: On Mon, 6 Jan 1997 Carl.Mills at UC.EDU wrote: > Whenever anyone uses the word *meaning* in a serious linguistic > discussion, I put my hand on my wallet. That is nothing but (formalist?) prejudice. > Back in the 1930s, Ogden and Richards wrote an entire book called *The > Meaning of Meaning* in which they concluded, if I remember correctly, > that they thought maybe they didn't know. In other words, the word > *meaning* has a meaning that is both so broad and so vague as to render > *meaning* well nigh empty of empirical content. You are right, back in the 1930s. But this is the 1990's!!!, almost the 21st. C. A.D. Surely some progress must have been made during these 60-70 years, don't you think? That *meaning* is not reducible to something like: M = x + y/-r, etc. 
does not mean that our intuition (as linguists and speakers) and common sense cannot guide us when making analyses and claims about language. > If we are going to use meaning to decide issues relating to syntax, > including whether syntax exists, we need to agree on what we mean by > *meaning*. Ubi supra. The fact that we talk about the meaning of a lexeme that grammaticalizes or the meaning of a certain syntactic structure, etc. etc. is enough proof that we know what we mean by meaning. I cannot understand what the reason for complicating matters superfluously is. And though nobody seemed interested in responding to this, in my view, non-issue (maybe thereby showing that it is indeed a non-issue) I felt that just for the record a reply was in order. J. Diego Quesada University of Toronto From nuyts at UIA.UA.AC.BE Wed Jan 8 15:07:19 1997 From: nuyts at UIA.UA.AC.BE (Jan.Nuyts) Date: Wed, 8 Jan 1997 16:07:19 +0100 Subject: updated announcement bookseries Message-ID: This is an updated announcement of a new bookseries, first mailed on this list a few months ago. Please Post. *** Call for unpublished manuscripts *** - monographs or collected volumes - *** for a new book series *** HUMAN COGNITIVE PROCESSING An interdisciplinary series on language and other mental faculties Editors: Marcelo Dascal (Tel Aviv University) Raymond Gibbs (University of California at Santa Cruz) Jan Nuyts (University of Antwerp) Editorial address: Jan Nuyts University of Antwerp, Linguistics (GER) Universiteitsplein 1 B-2610 Wilrijk, Belgium e-mail: nuyts at uia.ua.ac.be Editorial Advisory Board: Melissa Bowerman (Psychology, MPI f. Psycholinguistics); Wallace Chafe (Linguistics, Univ. of California at Santa Barbara); Philip R. Cohen (AI, Oregon Grad. Inst. of Science & Techn.); Antonio Damasio (Neuroscience, Univ. of Iowa); Morton Ann Gernsbacher (Psychology, Univ. of Wisconsin); David McNeill (Psychology, Univ. of Chicago); Eric Pederson (Cogn. Anthropology, MPI f. 
Psycholinguistics); François Recanati (Philosophy, CREA); Sally Rice (Linguistics, Univ. of Alberta); Benny Shanon (Psychology, Hebrew Univ. of Jerusalem); Lokendra Shastri (AI, Univ. of California at Berkeley); Dan Slobin (Psychology, Univ. of California at Berkeley); Paul Thagard (Philosophy, Univ. of Waterloo). Publisher: John Benjamins Publishing Company, Amsterdam/Philadelphia Aim & Scope: HUMAN COGNITIVE PROCESSING aims to be a forum for interdisciplinary research on the cognitive structure and processing of language and its anchoring in the human cognitive or mental systems in general. It aims to publish high quality manuscripts which address problems related to the nature and organization of the cognitive or mental systems and processes involved in speaking and understanding natural language (including sign language), and the relationship of these systems and processes to other domains of human cognition, including general conceptual or knowledge systems and processes (the language and thought issue), and other perceptual or behavioral systems such as vision and non-verbal behavior (e.g. gesture). `Cognition' and `Mind' should be taken in their broadest sense, not only including the domain of rationality, but also dimensions such as emotion and the unconscious. The series is not bound to any theoretical paradigm or discipline: it is open to any type of approach to the above questions (methodologically and theoretically) and to research from any discipline concerned with them, including (but not restricted to) different branches of psychology, artificial intelligence and computer science, cognitive anthropology, linguistics, philosophy and neuroscience. HUMAN COGNITIVE PROCESSING especially welcomes research which makes an explicit attempt to cross the boundaries of these disciplines. 
PLEASE SEND IN A RESUME BEFORE SUBMITTING THE FULL MANUSCRIPT ***** Jan Nuyts phone: 32/3/820.27.73 University of Antwerp fax: 32/3/820.27.62 Linguistics email: nuyts at uia.ua.ac.be Universiteitsplein 1 B-2610 Wilrijk - Belgium From Carl.Mills at UC.EDU Wed Jan 8 15:29:05 1997 From: Carl.Mills at UC.EDU (Carl.Mills at UC.EDU) Date: Wed, 8 Jan 1997 10:29:05 -0500 Subject: A non-issue of an important issue Message-ID: J. Diego Quesada of the University of Toronto writes, in part, " You are right, back in the 1930s. But this is the 1990's!!!, almost the 21st. C. A.D. Surely some progress must have been made during these 60-70 years, don't you think? That *meaning* is not reducible to something like: M = x + y/-r, etc. does not mean that our intuition (as linguists and speakers) and common sense cannot guide us when making analyses and claims about language." and a bit later: "The fact that we talk about the meaning of a lexeme that grammaticalizes or the meaning of a certain syntactic structure, etc.etc. is enough proof that we know what we mean by meaning. I cannot understand what the reason for complicating matters superfluously is." I don't want to waste a lot of bandwidth on what is clearly a side issue, but it is statements like these that make communication difficult between functionalists and those of us who are not functionalists. The first passage quoted above seems to equate the passing of time with progress. And not very much progress has been made since the 1930s. As for "common sense," well, for a long time common sense had a lot of people convinced that the world was flat. The second passage contains an "argument" that is so vulnerable to a reductio ad absurdum that I hate to get into it. But late-19th-century physicists talked about the luminiferous ether without knowing what they meant by it--without knowing, in fact, that it didn't exist. We could add phlogiston, the philosopher's stone, and Bergson's elan vital. 
Carl From dick at LINGUISTICS.UCL.AC.UK Thu Jan 9 12:16:12 1997 From: dick at LINGUISTICS.UCL.AC.UK (Dick Hudson) Date: Thu, 9 Jan 1997 12:16:12 +0000 Subject: autonomous syntax Message-ID: Like Joan Bresnan I normally just `lurk' on this list, but it's been so interesting of late that I can't resist coming out of the shadows. I don't think I understand what George means by autonomous syntax (which he rejects), nor whether he is thereby rejecting syntax in general (as opposed to grammar, which he certainly does accept). Here's his first statement: >Tom, I said grammar does exist. As has Langacker. What we say is that it >exists as constructional pairings of cognitive semantics and phonological >representations (word order included). Taken at face value, this seems to say that there are just two linguistic levels, semantics and phonology (just like Chomsky's two interfaces, in fact!). Nothing between meanings and syllables, not even words. Take English verbs, for example. How do we say that `future meaning' maps onto /wil/ (or some such), whereas past maps onto /d/ (or some such), and that these bits of phonology are on opposite sides of the bits that express the lexical meaning? Or that "will" may be separated from the lexical bit by the subject etc etc etc? Notice that meaning may not be relevant; e.g. possessive HAVE has the same meaning whether it's used as an auxiliary verb or as a full verb (1 vs 2). (1) Have you a car? (2) Do you have a car? The generalisations that distinguish auxiliary and full verbs are `autonomous', in the sense that they refer to words, word-classes and syntactic relations, without mentioning meaning (or phonology). But I certainly believe that its function is to help hearers and speakers handle meaning (functionalism), and that the way in which we organise the information in our brains is in terms of prototype-like structures (cognitivism). Do I accept or reject autonomous syntax? 
Am I a formalist, a functionalist, a cognitivist, or just confused? Richard (=Dick) Hudson Department of Phonetics and Linguistics, University College London, Gower Street, London WC1E 6BT work phone: +171 419 3152; work fax: +171 383 4108 email: dick at ling.ucl.ac.uk web-sites: home page = http://www.phon.ucl.ac.uk/home/dick/home.htm unpublished papers available by ftp = ....uk/home/dick/papers.htm From lgarneau at HOTMAIL.COM Thu Jan 9 14:33:13 1997 From: lgarneau at HOTMAIL.COM (Luc Garneau) Date: Thu, 9 Jan 1997 14:33:13 -0000 Subject: deixis and demonstrative pronouns Message-ID: Hello All - I hate to interrupt the discussion of syntax, but I have been thinking and writing about deixis a lot, in particular regarding the different functions of the demonstrative pronouns "this" and "that". While I have had relatively little difficulty finding work on this concept in general (Halliday & Hasan - Cohesion in English had a nice section), I have had trouble finding much further work dealing specifically with these words...can anyone recommend anything? Thanks very much for any help! Luc Garneau --------------------------------------------------------- Get Your *Web-Based* Free Email at http://www.hotmail.com --------------------------------------------------------- From harder at COCO.IHI.KU.DK Thu Jan 9 15:32:42 1997 From: harder at COCO.IHI.KU.DK (harder at COCO.IHI.KU.DK) Date: Thu, 9 Jan 1997 16:32:42 +0100 Subject: syntax and form-meaning pairs (=signs) Message-ID: This is not the first time that a discussion about syntax in relation to cognitive linguistics creates problems of understanding, in spite of the fact that neither functionalists nor cognitive linguists believe in autonomous syntax, and both believe in the existence of grammar. 
In order to make it clear that one can want to talk about syntax and still be dealing with semantic phenomena, I have suggested the term 'content syntax' for those combinatorial relations that create larger meanings out of component meanings (e.g. the head-modifier relation, etc). When one is talking about content syntax, the issue is neither individual form-meaning pairs nor the chimera of autonomous syntax. This is important in relation to a point made in George Lakoff's second message. As I understand it, it seems to imply that if one agrees that syntax can be described in terms of form-meaning pairs, there is no need to talk about the special properties of syntax. But the ability to combine meaning fragments into larger wholes does, from an evolutionary as well as a neurological point of view, seem to be rather a special skill. Saying that it is distributed over the brain and that it depends on connections rather than solely on a specific brain area does not appear to capture its special nature very precisely. The best way to show the superiority of a non-autonomous approach to syntax must be to show how the special nature of the ability to create complex expressions can be captured in a framework where the combination of meanings is the essential part. But this requires recognizing that the combinatory skill has properties that are different from the ability to associate form and meaning in a holophrastic sign. Mechanisms of combination do not disappear as a special problem in their own right, even if they are non-autonomous and involve form-meaning pairing. Peter Harder, U of Copenhagen From fjn at U.WASHINGTON.EDU Thu Jan 9 21:50:57 1997 From: fjn at U.WASHINGTON.EDU (Frederick Newmeyer) Date: Thu, 9 Jan 1997 13:50:57 -0800 Subject: autonomous syntax In-Reply-To: <9701091059.AB23955@crow.phon.ucl.ac.uk> Message-ID: Along with some of the other contributors to the discussion of syntax, grammar, and autonomy, I've decided to stop lurking in the woodwork. 
I think that I can speak for a majority of 'orthodox' generative grammarians when I assert that the question of the autonomy of syntax (AS) has nothing whatever to do with the 'fit' between (surface) form and meaning. Thus George's and Talmy's critiques of autonomy are not to the point. The AS hypothesis is not one about the relationship between form and meaning, but rather one about the relationship between 'form and form'. AS holds that a central component of language is a *formal system*, that is, a system whose principles refer only to formal elements and that this system is responsible for capturing many (I stress 'many', not 'all') profound generalizations about grammatical patterning. Let's take the most extreme 'cognitive linguistics' position, held, I think, by Anna Wierzbicka. According to this position, any particular observable formal aspect of language (e.g. categories, constructions, morphemes, etc.) can be characterized by necessary and sufficient semantic conditions. Is this position compatible with AS? Certainly it is. One need only go on to show that grammatical patterning is also to a large degree governed by more abstract relationships among formal elements that are not replaceable by statements whose primitive terms are semantic. A brilliant demonstration to this effect (and hence support for AS) has been provided by Matthew Dryer in an article in LANGUAGE. Dryer shows that the underlying generalization governing the Greenbergian word order correlations is not a semantic one (e.g. head-dependent relations or whatever), but rather the *principal branching direction* of phrase structure in the language. The two often are in accord, of course; where they conflict it is the abstract structural relationships provided by formal grammar that win out. In an in-preparation work, I argue that this is the norm for language. Yes, the fit between surface form and meaning is quite close. 
But yes, also, formal patterning has a 'life of its own', as is asserted by AS. Since we have also heard it claimed that the apparent nonlocalizability of syntax in the brain refutes AS, I'd like to address that question too. What precisely is implied by the claim that we are endowed with an innate UG module? Among other things, presumably that there are innate neural structures dedicated to this cognitive faculty. However, nothing whatever is entailed about the *location* of these neural structures or their degree of 'encapsulation' with respect to other neural structures. Perhaps they are all localized in one contiguous area of the brain. On the other hand, they might be distributed throughout the brain. It simply does not matter. Yet any number of critiques of AS have attempted to refute the idea of an innate UG when, in fact, they have done no more than refute a localist basis for it. One might object that Chomsky has invoked the image of the 'language organ' on a number of occasions, which has the effect of implying that the neural seat of UG must be localizable in some part of the brain. But it seems clear that his use of that expression is based on his hypothesis that it is determined by a genetic blueprint, not on its physical isolability. For example, he asserts that 'language is to be thought of on the *analogy* of a physical organ' (REFLECTIONS ON LANG, p. 59) and that 'we may *usefully* think of the [language faculty as] *analogous* to the heart or the visual system or the system of motor coordination and planning' (RULES & REPS, p. 39). So Chomsky clearly is thinking of language as something like an organ in a physiological, but not narrowly anatomical, sense. Steve Pinker has provided an interesting argument why the language faculty is *not* confined to one area of the brain. He notes that hips and hearts as 'organs that move stuff around in the physical world' have to have cohesive shapes. 
But the brain, as 'an organ of computation', needs only connectivity of neural microcircuitry to perform its specific tasks -- there is no reason that evolution would have favored each task being confined to one specific center. So, to conclude, AS is an empirical hypothesis and one which, I am sure, can be productively debated on Funknet. But it is important to focus on those questions that bear on its adequacy and to put aside irrelevant or tangential issues. --Fritz Newmeyer From jaske at ABACUS.BATES.EDU Thu Jan 9 22:30:03 1997 From: jaske at ABACUS.BATES.EDU (Jon Aske) Date: Thu, 9 Jan 1997 17:30:03 -0500 Subject: autonomous syntax Message-ID: I guess I should let Matthew Dryer answer this, but my impulsive nature prevents me from doing that. Frederick Newmeyer wrote: > > A brilliant demonstration to this effect (and hence support for AS) has > been provided by Matthew Dryer in an article in LANGUAGE. Dryer shows that > the underlying generalization governing the Greenbergian word order > correlations is not a semantic one (e.g. head-dependent relations or > whatever), but rather the *principal branching direction* of phrase > structure in the language. The two often are in accord, of course; where > they conflict it is the abstract structural relationships provided by > formal grammar that win out. In an in-preparation work, I argue that this > is the norm for language. Yes, the fit between surface form and meaning is > quite close. But yes, also, formal patterning has a 'life of its own', as > is asserted by AS. I really fail to see how Dryer's generalization in any way shows anything about the autonomy of grammar. 
The relative degree to which languages display a consistent branching direction in a number of centripetal constructions can indeed be accounted for by at least the following two facts: (1) diachronically: from the fact that certain types of complements, such as genitives, are often the source of other types of complements, such as relative clauses and adpositional constructions. (2) A 'relator-in-the-middle' iconic principle: there may be a preference for "relators", i.e. function morphemes expressing the relation between heads and their complements, such as adpositions, to be placed between the two elements related. Hence, postpositional phrases tend to precede the noun they complement and prepositional ones to follow it. The former are left branching and the latter right branching. This *may* influence certain choices that speakers make diachronically, leading to certain structural preferences in the system of constructions of a language. If there were anything like autonomy in grammar and Dryer's principle were a formal, even innate principle (subject to parametric variation), I would expect languages to be much more consistent than they are. The fact is that the constructions of a language are not all cut out of the same pattern at a synchronic level but, rather, are all more or less independent of each other (although they do seem to form a *system* of sorts). Many of these constructions are not even centripetal, and thus, the notion of branching is irrelevant in them. But the real question when it comes to autonomy vs. non-autonomy, is whether the constructions of a language can, or should, be described independently of the semantic and pragmatic meanings which they are used to express, and independently, for instance, of the iconic and universal principles, such as topic-comment or comment-topic, on which they are sometimes based. I don't think they can and I don't think they should. 
The simple reason for this is that I do not think that that is how humans learn or store the constructions of a language. Form is always stored and intimately connected to function and that is how it should be described and analyzed. I hope I didn't forget anything. Jon -- Jon Aske jaske at abacus.bates.edu http://www.bates.edu/~jaske/ -- Zeinek bera nolako, besteak uste halako "Everyone believes that everyone else is like them." From nick at STL.RESEARCH.PANASONIC.COM Fri Jan 10 00:01:48 1997 From: nick at STL.RESEARCH.PANASONIC.COM (Nicholas Kibre) Date: Thu, 9 Jan 1997 16:01:48 -0800 Subject: autonomy, etc. Message-ID: Ultimately, it seems that the autonomous syntax and functionalist/cognitive positions are more ends of a continuum than strictly opposing viewpoints. Nearly everyone agrees that language is shaped both by innate cognitive mechanisms, at least partially specialized for linguistic function, and by the demands of usage; our only point of disagreement is how tightly the former constrains the range of possible systems, and how much regularity is due to the pressures of the latter. What is striking is that, although almost everyone in this field seems to have strong feelings about what precise point along this spectrum is optimal, fairly little research really seems to address this issue empirically. Neurology may eventually be able to answer this question, but it may be a long wait! Ultimately, unless the issue is addressed directly, no amount of discussion between different camps is likely to convince anyone to change their mind. Currently, different linguists use different types of explanations, I think largely out of preference. I think if we want to resolve things more concretely, we need to think more about what the implications of claiming that a certain regularity is innate or functionally motivated would be. This is not meant to claim that I know what these implications would be. Any thoughts? 
Nick Kibre Btw: Thanks to all who contributed to my email corpus! ---------o--- Nicholas Kibre /'nihkahlahs 'kayber/ / Research Linguist/Speech Programmer / Panasonic Speech Technology Laboratory __=========__ 805 687 0110 xt 230 | |_|_|_|_| | nick at stl.research.panasonic.com |_|_______|_| --o=o---o=o-- http://humanitas.ucsb.edu/depts/linguistics/grads.html#kibre From ellen at CENTRAL.CIS.UPENN.EDU Fri Jan 10 03:46:58 1997 From: ellen at CENTRAL.CIS.UPENN.EDU (Ellen F. Prince) Date: Thu, 9 Jan 1997 22:46:58 EST Subject: autonomous syntax In-Reply-To: Your message of "Thu, 09 Jan 1997 17:30:03 EST." <32D5716B.5322@abacus.bates.edu> Message-ID: jon aske wrote: >But the real question when it comes to autonomy vs. non-autonomy, is >whether the constructions of a language can, or should, be described >independently of the semantic and pragmatic meanings which they are used >to express, and independently, for instance, of the iconic and universal >principles, such as topic-comment or comment-topic, on which they are >sometimes based. > >I don't think they can and I don't think they should. The simple reason >for this is that I do not think that that is how humans learn or store >the constructions of a language. Form is always stored and intimately >connected to function and that is how it should be described and >analyzed. well, i guess it all depends on where you're looking. for quite a few years now i've been looking at cases of language contact where the discourse functions associated with a syntactic form in one language come to be associated with an 'analogous' syntactic form in a contact language (and where the analogy is statable in purely syntactic terms) and where the two forms in question may have originally had totally unrelated discourse functions. 
in fact, it is precisely by studying such cases that i have come to believe in autonomous syntax, since, if the form-function connection were permanent or driven by iconicity, i could simply not begin to explain the data. From bralich at HAWAII.EDU Fri Jan 10 06:34:15 1997 From: bralich at HAWAII.EDU (Philip A. Bralich, Ph.D.) Date: Thu, 9 Jan 1997 20:34:15 -1000 Subject: autonomous syntax Message-ID: At 05:46 PM 1/9/97 -1000, Ellen F. Prince wrote: >well, i guess it all depends on where you're looking. for quite a few >years now i've been looking at cases of language contact where the >discourse functions associated with a syntactic form in one language >come to be associated with an 'analogous' syntactic form in a contact >language (and where the analogy is statable in purely syntactic terms) >and where the two forms in question may have originally had totally >unrelated discourse functions. in fact, it is precisely by studying >such cases that i have come to believe in autonomous syntax, since, if >the form-function connection were permanent or driven by iconicity, i >could simply not begin to explain the data. This discussion of autonomous syntax has me somewhat baffled. For my thinking the discussion of whether or not syntax is autonomous is a little like asking if a skeleton is autonomous from the body it supports. Certainly, in some sense it is. But of course the body cannot survive without a skeleton and the skeleton cannot survive without the rest of the body. Thus, it is not at all autonomous. Given this rather ordinary observation, it strikes me as rather odd that there should be any discussion at all of the autonomy of syntax. It is just as autonomous to language as a skeleton is to the body. Taking either side of this issue is just missing the point and missing the reality of what language is. Phil Bralich Philip A. Bralich, Ph.D. 
President and CEO Ergo Linguistic Technologies 2800 Woodlawn Drive, Suite 175 Honolulu, HI 96822 Tel: (808)539-3920 Fax: (808)539-3924 From cumming at HUMANITAS.UCSB.EDU Fri Jan 10 07:20:01 1997 From: cumming at HUMANITAS.UCSB.EDU (Susanna Cumming) Date: Thu, 9 Jan 1997 23:20:01 -0800 Subject: No subject Message-ID: I'd like to reiterate here my original point, which seems to have gotten lost in the shuffle: is there any point in attempting sentence-level parsing? The issue is not so much the separation of syntactic analysis from semantic analysis (though I am certainly not in favor of that either, and I fully agree with Matthew's points on that topic), but the separation of linguistic analysis at any level from goal-driven, multi-functional, socially-enacted communicative context. In my view this is what crucially separates functionalists on the one hand from cognitivists on the other, or if you prefer discourse functionalists from cognitive functionalists: discourse folks believe that language removed from its communicative setting is sufficiently different from "real" communicative language that there's not much point in studying it, because you don't know what you've learned about real language when you're finished. If you take this point seriously there isn't much difference between the cognitivists and the formalists, since they are both (with some noble exceptions) content to base their analyses on "unnatural" data. In other words it's not "autonomy" that's the main problem, it's the "competence-performance" dichotomy. Susanna From dryer at ACSU.BUFFALO.EDU Fri Jan 10 08:17:41 1997 From: dryer at ACSU.BUFFALO.EDU (Matthew S Dryer) Date: Fri, 10 Jan 1997 03:17:41 -0500 Subject: Reply to Fritz Newmeyer Message-ID: Different people use the expression "autonomous syntax" in different ways, and these differences are often a source of confusion, as I think they have been in the current discussion. 
This medium is not very suitable for straightening these things out; but, very briefly, the expression is used for at least the following views: (1) people are born with innate syntactic knowledge that drives acquisition and explains universals (2) one can explain syntactic facts in terms of syntactic notions (3) syntax/grammar exists (although that too can mean different things) Arguments for the autonomy of syntax (such as some offered in print by Fritz Newmeyer) often involve no more than arguments for (3). For me (and I assume that this was what both George and Tom meant), rejecting autonomy of syntax involves rejecting (1) and (2). As for Fritz' claim that my evidence that the word order correlations involve branching direction rather than head position provides an argument for the autonomy of syntax, I would argue the opposite. Those who assume that the correlations reflect head position generally treat consistent head position as an explanation in itself. For example, a common position among formal linguists most closely aligned with Chomsky is that there is some sort of head-position parameter that is part of innate knowledge. Conversely, I have suggested that the tendency towards consistent branching direction reflects (in addition to grammaticization factors) parsing problems associated with mixed branching, i.e. "performance" problems extracting the intended meaning. If this view is correct, then the explanation lies in the nature of human working memory, and thus is inconsistent with notions (1) and (2) of autonomous syntax. Matthew Dryer From ellen at CENTRAL.CIS.UPENN.EDU Fri Jan 10 15:04:22 1997 From: ellen at CENTRAL.CIS.UPENN.EDU (Ellen F. Prince) Date: Fri, 10 Jan 1997 10:04:22 EST Subject: autonomous syntax In-Reply-To: Your message of "Thu, 09 Jan 1997 20:34:15 -1000." <2.2.16.19970108203621.4b674888@pop-server.hawaii.edu> Message-ID: "Philip A. Bralich, Ph.D." writes: >This discussion of autonomous syntax has me somewhat baffled. 
For my >thinking the discussion of whether or not syntax is autonomous is a little >like asking if >a skeleton is autonomous from the body it supports. Certainly, in some sense >it is. But of course the body cannot survive without a skeleton and the >skeleton cannot survive without the rest of the body. Thus, it is not at >all autonomous. Given this rather ordinary observation, it strikes me as >rather odd that there should be any discussion at all of the autonomy of >syntax. It is just as autonomous to language as a skeleton is to the body. ^^^^^^^^ >Taking either side of this issue is just missing the point and missing the >reality of what language is. gee, well now THAT really clears things up, doesn't it? ;) uh, to my knowledge, no one has ever claimed that syntax is autonomous from *language*. shame, because it would be a claim that everyone from san diego to cambridge could agree to reject... :) From ellen at CENTRAL.CIS.UPENN.EDU Fri Jan 10 15:17:09 1997 From: ellen at CENTRAL.CIS.UPENN.EDU (Ellen F. Prince) Date: Fri, 10 Jan 1997 10:17:09 EST Subject: No subject In-Reply-To: Your message of "Thu, 09 Jan 1997 23:20:01 PST." Message-ID: Susanna Cumming writes: >In my view this is what crucially separates functionalists on the one hand >from cognitivists on the other, or if you prefer discourse functionalists >from cognitive functionalists: discourse folks believe that language >removed from its communicative setting is sufficiently different from >"real" communicative language that there's not much point in studying it, >because you don't know what you've learned about real language when you're >finished. If you take this point seriously there isn't much difference >between the cognitivists and the formalists, since they are both (with >some noble exceptions) content to base their analyses on "unnatural" data. > >In other words it's not "autonomy" that's the main problem, it's the >"competence-performance" dichotomy. 
i think you're confusing what one takes to be the data and what one's ultimate theory looks like. there are those (incl me) that base their analyses on naturally-occurring data but that may wind up concluding that syntax is autonomous from meaning. in fact, i'd add that, if the choice of type of data locks one in to a particular conclusion, the actual research would seem pretty pointless... From lmenn at CLIPR.COLORADO.EDU Fri Jan 10 15:22:37 1997 From: lmenn at CLIPR.COLORADO.EDU (Lise Menn, Linguistics, CU Boulder) Date: Fri, 10 Jan 1997 08:22:37 -0700 Subject: your mail In-Reply-To: Message-ID: I'd like to take issue with the idea that syntax or lexicon can be studied only in natural contexts as much as with the idea that it can be properly studied without looking at those contexts. No biologist would argue either that test-tube studies are useless or that field studies are useless; the problems to be solved are so difficult that on the one hand, artificially-controlled situations are needed to get a handle on what the basic processes might be, but on the other, field studies are needed to decide which of the processes that take place in the lab are actually operating in the real world. And in between the test-tube and the field are all sorts of intermediate experimental levels. In linguistics we similarly need a full spectrum of approaches; for the last several years, I've been working with colleagues on one type of experimental functional linguistics, using descriptions of minimal-pair sets of pictures to look at effects of empathy and inferrability of information (posters at LSA 1995 and 1997, paper in press, Brain and Language); Russ Tomlin has a more-controlled experimental approach with his fish videos. In the other direction, less-controlled but by the same token more natural, is the major 'Frog-story' work on narratives (Berman & Slobin), using a pictured story, and of course Chafe's Pear Stories. 
Lise Menn From jaske at ABACUS.BATES.EDU Fri Jan 10 16:06:59 1997 From: jaske at ABACUS.BATES.EDU (Jon Aske) Date: Fri, 10 Jan 1997 11:06:59 -0500 Subject: autonomous syntax Message-ID: Ellen F. Prince wrote: > > well, i guess it all depends on where you're looking. for quite a few > years now i've been looking at cases of language contact where the > discourse functions associated with a syntactic form in one language > come to be associated with an 'analogous' syntactic form in a contact > language (and where the analogy is statable in purely syntactic terms) > and where the two forms in question may have originally had totally > unrelated discourse functions. in fact, it is precisely by studying > such cases that i have come to believe in autonomous syntax, since, if > the form-function connection were permanent or driven by iconicity, i > could simply not begin to explain the data. That is very interesting and I would like to know more about your specific cases, but my experience with language contact and grammatical change, though it sounds similar to yours, has led me to very different conclusions. I have found that Basque seems to be increasing the number of clauses with postverbal elements, and the way it seems to be happening is that minor, marked (and ‘optional’) constructions which, for a variety of reasons, place the verb in rheme-initial position, and thus superficially look like the unmarked constructions of Romance languages, are being used more and more by those speakers which are most "under the influence" of a Romance language. The ‘overuse’ of these constructions results in a change in the contexts in which these constructions are used, i.e. in the pragmatics of those constructions. In other words, the constructions are becoming relatively less marked than we would expect them to be. This, of course is the well known phenomenon of convergence, a type of transfer that is rather common in language contact situations. 
Thus syntactic change is really pragmatic change in the constructions. As I see it, these pragmatics are not grafted onto otherwise formal constructions as an afterthought, but are an intrinsic part of them. The constructions in question do not make sense without reference to functional categories and the ordering relations are extremely iconic. Anyway, this is probably more than what anybody wanted to know, but since I don’t have anyone to talk to about my work in my exile, I thought I’d share it with you. (My LSA 97 paper on this very topic can be seen at http://www.bates.edu/~jaske/askeling.html). Best wishes, Jon -- Jon Aske jaske at abacus.bates.edu http://www.bates.edu/~jaske/ -- Balantza duen aldera erortzen da arbola "The tree falls towards the side it's leaning." From dever at VERB.LINGUIST.PITT.EDU Fri Jan 10 16:26:42 1997 From: dever at VERB.LINGUIST.PITT.EDU (Daniel L. Everett) Date: Fri, 10 Jan 1997 11:26:42 -0500 Subject: Autonomous Syntax & Research In-Reply-To: <01IDWK5CL2KI935AF8@OREGON.UOREGON.EDU> Message-ID: Folks, I agree with Fritz on the idea of autonomous syntax, which won't surprise you, but of course it is a hypothesis, subject to empirical testing like any other. It has stood the test of time pretty well, though. But I am writing on something else, namely, what autonomous syntax has to do with research methodology, which is really how I interpret a lot of the comments on looking at texts. In my opinion the view one holds on the autonomous syntax thesis ought, in most cases at least, to have very little impact on methodology. Data should come from natural text *and* isolated sentences. In my fieldwork, I rely crucially on both. Usually, I take sentences from natural texts and study them in isolation (i.e. creating paradigms based on them and checking them with a variety of native speakers) after I have analyzed their role in the text from which they are extracted. But occasionally I need to look for aspectual (etc.) 
combinations that are rare or nonexistent (and whether or not they are nonexistent is often what I am trying to figure out). In these cases I put on what might look like little stage plays, working with various informants (I usually only do fieldwork in monolingual situations, having a bilingual informant is a luxury I have rarely had) to get at the examples (or not) that I am looking for. It is true that a lot of work in formal linguistics has been methodologically inferior to work done in functional linguistics, so if I were a functionalist, I might count that against formal approaches. But bad practice does not make bad theoretical assumptions, just, perhaps, bad theoretical results. There is no justification for this whatsoever and as a formal linguist, I am sorry that we have lagged behind (although notable exceptions come to mind, such as Ken Hale). -- DLE P.S. Carson Schutze has recently published a book on methodology (basically for formal linguists), which hopefully will contribute to change in formal theoretic work. ****************************** ****************************** Dan Everett Department of Linguistics University of Pittsburgh 2816 CL Pittsburgh, PA 15260 Phone: 412-624-8101; Fax: 412-624-6130 http://www.linguistics.pitt.edu/~dever From ellen at CENTRAL.CIS.UPENN.EDU Fri Jan 10 16:45:37 1997 From: ellen at CENTRAL.CIS.UPENN.EDU (Ellen F. Prince) Date: Fri, 10 Jan 1997 11:45:37 EST Subject: autonomous syntax In-Reply-To: Your message of "Fri, 10 Jan 1997 11:06:59 EST." <32D66923.ED1@abacus.bates.edu> Message-ID: Jon Aske wrote: >That is very interesting and I would like to know more about your >specific cases, but my experience with language contact and grammatical >change, though it sounds similar to yours, has led me to very different >conclusions. 
> >I have found that Basque seems to be increasing the number of clauses >with postverbal elements, and the way it seems to be happening is that >minor, marked (and ‘optional’) constructions which, for a variety of >reasons, place the verb in rheme-initial position, and thus >superficially look like the unmarked constructions of Romance languages, >are being used more and more by those speakers which are most "under the >influence" of a Romance language. > >The ‘overuse’ of these constructions results in a change in the contexts >in which these constructions are used, i.e. in the pragmatics of those >constructions. In other words, the constructions are becoming >relatively less marked than we would expect them to be. This, of course >is the well known phenomenon of convergence, a type of transfer that is >rather common in language contact situations. Thus syntactic change is >really pragmatic change in the constructions. As I see it, these >pragmatics are not grafted onto otherwise formal constructions as an >afterthought, but are an intrinsic part of them. The constructions in >question do not make sense without reference to functional categories >and the ordering relations are extremely iconic. wow, THAT is very interesting to me! i will d/l your paper and read it with interest -- but, from my understanding of what you say here, it sounds like what i've been finding: a particular form in one language (here the basque form with the verb in 'rheme-initial' position -- which i'm assuming is describable syntactically, without reference to 'rheme'?) is 'matched up' with an analogous form in another language (here canonical order in french), with the discourse function of the 'matchee' coming to be associated with the original form (here the LACK of discourse function of french canonical order bumping out the df of the basque form). 
i would venture that it is this lack of df that accounts for the increased frequency, not vice versa -- after all, a substantive discourse function constrains what contexts a form may felicitously occur in; an 'unmarked' form can occur in any context and should thus have a higher frequency. (this then would provide a different database to the children acquiring the language in terms of frequency, which may then result in their hypothesizing a somewhat different grammar.) if that is a reasonable description of what you've found, then i'd think you'd agree that form and function are NOT inextricably combined... otherwise how could a form ever change its function -- and how could analogous forms with different functions ever be seen as analogous by speakers? thanks for the post and i look forward to reading your paper! From fjn at U.WASHINGTON.EDU Fri Jan 10 17:04:27 1997 From: fjn at U.WASHINGTON.EDU (Frederick Newmeyer) Date: Fri, 10 Jan 1997 09:04:27 -0800 Subject: Reply to Fritz Newmeyer In-Reply-To: Message-ID: I find myself in the awkward and undesirable position of telling Matthew Dryer that he doesn't appreciate the implications of his own ground-breaking work. The sensitivity of speakers to the abstract (formal, structural) notion 'Phrase structure branching-direction', a notion that doesn't (as Dryer shows) correlate perfectly with semantic notions such as 'head-dependent', supports the idea that speakers mentally represent abstract phrase-structure relations INDEPENDENTLY of their semantic and functional 'implementations'. That is, it supports the autonomy of syntax. Matthew writes: > As for Fritz' claim that my evidence that the word order > correlations involve branching direction rather than head position > provides an argument for the autonomy of syntax, I would argue the > opposite. Those who assume that the correlations reflect head > position generally treat consistent head position as an explanation > in itself. 
For example, a common position among formal linguists > most closely aligned with Chomsky is that there is some sort of > head-position parameter that is part of innate knowledge. Yeah, but those people are wrong. As Dryer has shown, it's branching direction, not head position, that is relevant. My view is that branching direction is a MORE hard core 'formal' notion than head position, the determination of which always seemed to involve semantic considerations. And note that I never said anything about 'innateness' in this context. The question of the autonomy of syntax and that of innate syntactic principles have to be kept separate. The 'autonomy of syntax' is a fact about mature grammars. What the language learner draws upon to construct that grammar is a separate issue. So while innate syntactic principles entail (I think) autonomous syntax, autonomous syntax does not entail innate syntactic principles. Matthew goes on to write: > Conversely, I have suggested that the tendency towards consistent > branching direction reflects (in addition to grammaticization > factors) parsing problems associated with mixed branching, i.e. > "performance" problems extracting the intended meaning. If this > view is correct, then the explanation lies in the nature of human > working memory, and thus is inconsistent with notions (1) and (2) of > autonomous syntax. I agree that there is a functional explanation (parsing-based) for why speakers prefer consistent branching direction. That doesn't challenge the autonomy of syntax, since AS is a claim about what speakers mentally represent, not what they 'prefer'. In other words, functional explanation is perfectly compatible with AS. Let me give a chess analogy. Nobody could deny that the rules of chess (pieces, possible moves, etc.) form an autonomous system. But functional factors could have (indeed, surely did) enter into the design of the system. 
A ruling from the International Chess Authority could change the rules (resulting in a different, but still autonomous, system). Furthermore, when playing a game we have a choice as to which pieces to play, which moves to make. Syntax, then, is autonomous in very much the same way that chess is autonomous. We mentally represent an autonomous system. Why that system has the properties that it has is another question. One answer, as Matthew points out, is pressure from the parser. Another is probably pressure for iconic representations (see my paper in LANGUAGE of a few years ago.) Another may be innate syntactic principles. By the way, the most compelling, in my view, parsing explanation for Dryer's generalization is Jack Hawkins' principle of 'Early Immediate Constituents' (see his book A PERFORMANCE THEORY OF ORDER AND CONSTITUENCY). Hawkins is absolutely explicit that parsing explanations are compatible with autonomy; indeed, he sees the former as a partial explanation for the latter. Hawkins goes on to write: "More generally, the very autonomy of syntax may owe its existence to the need to make semantics processable for both the speaker and the hearer, and it remains to be seen whether any precision can be given to the formula: semantics + processing = syntax." (Hawkins 1994: 439) While his formula (as he would I am sure agree) is too simple, I basically agree with him. --fritz newmeyer From dquesada at CHASS.UTORONTO.CA Fri Jan 10 18:02:23 1997 From: dquesada at CHASS.UTORONTO.CA (Diego Quesada) Date: Fri, 10 Jan 1997 13:02:23 -0500 Subject: Chess and Syntax In-Reply-To: Message-ID: On Fri, 10 Jan 1997, Frederick Newmeyer wrote: > Nobody could > deny that the rules of chess (pieces, possible moves, etc.) form an > autonomous system. But functional factors could have (indeed, surely did) > enter into the design of the system. A ruling from the International Chess > Authority could change the rules (resulting in a different, but still > autonomous, system). 
Furthermore, when playing a game we have a choice as to > which pieces to play, which moves to make. > > Syntax, then, is autonomous in very much the same way that chess is > autonomous. We mentally represent an autonomous system. The analogy is not that felicitous. Fritz overlooks a crucial aspect of these two putatively autonomous games [it's revealing that an analogy was drawn from a game; indeed formal linguistics sometimes seems no more than an intellectual exercise for the sake of entertainment, but that's another disk], namely that in order to move any piece, and to know how to move it, one needs to know what it is; in linguistic terms this means that we need to know what the MEANING of the combining element is; otherwise one could have the horse move diagonally, the tower jump in all directions and so on, just as constituents could be shifted around irrespective of what they mean. As G. Lakoff (if I remember correctly) put it and as -I assume- all Funknetters think, as long as any combination is determined by semantic content there can be no autonomy. As for innateness (of either the so-called UG, or autonomous syntax), I don't think we are explaining much by clinging to that, which is, at its best, a truism: it all boils down to saying that language is exclusive of humans. That might have been perceived as a revolutionary time bomb back in 1957 in Skinner-influenced linguistics. Nowadays it is simply as trivial a fact as saying that all languages have vowels. It happens that many of the so-called "hypotheses" in generative grammar turn out, on a chronological account, to have been "patches" to objections; for instance, the so-called mentalism and innateness. Givon (1984: 7) has already pointed this out when he says that "Whether the particular mix and its coherence or lack thereof were the product of design or accident is still a matter of debate". I guess he was trying to be diplomatic... J. 
Diego Quesada University of Toronto From chafe at HUMANITAS.UCSB.EDU Fri Jan 10 18:38:04 1997 From: chafe at HUMANITAS.UCSB.EDU (Wallace Chafe) Date: Fri, 10 Jan 1997 10:38:04 -0800 Subject: Another view Message-ID: Try looking at it this way. A language is fundamentally a way of associating meanings with sounds (and/or some other symbolic medium). Meanings (let's not get hung up on the term) are mixtures of cognitive, emotive, interactive, and intratextual information, covering all facets of human experience. A language imposes on experience a huge, complicated set of meaning elements and ways of combining them, just as it imposes sound elements and ways of combining them. Much of this is language-particular, but some is universal for various reasons, only one of which may be innateness. One might be able to imagine a language in which meanings were associated with sounds in a direct, unmediated way, but no real language is like that, and the reason is that languages change. In all languages grammatic(al)ization, lexicalization, and the analogic extension of patterns have produced situations in which functionally active meanings are often symbolized in the first instance by partially or wholly fossilized formerly functional elements and combinations, whose own associations with sounds may nevertheless remain intact. (I say partially fossilized because sometimes there is leakage back into at least semiactive consciousness, as with awareness of the literal meanings of idioms and metaphors.) One linguist may look at this situation and say, "Aha, there's a lot here that is arbitrary and nonfunctional. Hurrah for autonomous syntax!" Another linguist may look at the same situation and say, "There's a lot here that is motivated, and when it seems not to be, and when we're lucky enough to know something about how it got to be this way, we can see that there once was a motivation that has now been obscured by grammaticalization etc." 
My own opinion is that we ought to be looking for functional motivations wherever we can find them, and that an autonomous syntax based on elements that have never had anything but a formal, otherwise unmotivated status provides nothing more than a way of feeling happy about a failure to probe toward a deeper understanding, including in many cases a historical one. There seem to be three major ways in which spinners of theories connect with reality. One is through observing how people actually talk, one is through doing experiments, and one is through inventing isolated sentences and judging their grammaticality. Each has its advantages and disadvantages, and improved understanding ought to come from a judicious mixture of the three (as just emphasized by Lise Menn), though unfortunately we are all biased by training and experience to do mainly one, and sometimes even sneer at the others. It may have relevance to this debate that there has indeed been a correlation between the autonomous syntax approach and the use of grammaticality judgments. It looks, too, as if those who observe how people actually talk tend on the whole to be the least enchanted by the autonomous approach. Dan Everett's remarks on this score are particularly welcome, however. As for parsing, where this discussion began, it's useful in illuminating some of the patterns that exist in the intermediate area between meanings and sounds. But whether those patterns form an intact skeleton that can be studied apart from the meat attached to it has always seemed to me, at least, quite dubious. In any case, if a machine were ever truly to understand something that was said to it, its understanding would have to be in cognitive, affective, and social terms--in terms of all facets of human experience--which lie quite beyond anything presently available in the computer world. 
Wally Chafe From jaske at ABACUS.BATES.EDU Fri Jan 10 18:57:53 1997 From: jaske at ABACUS.BATES.EDU (Jon Aske) Date: Fri, 10 Jan 1997 13:57:53 -0500 Subject: Reply to Fritz Newmeyer Message-ID: First of all, I agree with Matthew's interpretation of his facts, and disagree with Fritz's reinterpretation. But I don't think either one will convince the other. About parsing and branching preferences, I think Jack Hawkins' study is somewhat artificial and doesn't reflect realistically what actual language in use is like, and that it ignores many other aspects of language, such as the use of intonation to disambiguate structures and the average length of clauses, which is actually quite short in 99.99% of clauses in actual speech. If branching was such an important motivation in determining the form of constructions, we would expect a majority of languages, if not all, to be right branching for example. I think a major problem here involves what we take syntax/grammar to be, not just whether it is autonomous from other things. In my book, grammar is a set of relatively independent, and relatively interdependent constructions (ie a "leaky system" of constructions). Semantics, pragmatics, and processing and other cognitive constraints, have a lot to do with the form of those constructions, particularly how they come about diachronically. I am not sure, however, that all of these diachronic motivations are equally relevant synchronically, that is in the interpretation that speakers make of those constructions. 
A major problem with autonomous approaches, as I see it, is that (1) they attempt to explain synchronically (ie as part of the internalized grammar) formal correlations which are not synchronically real (such as branching direction, active and passive constructions, order of affixes, etc.), but which stem from diachronic sources, thus "recapitulating diachrony", and (2) that they do not attempt to explain actual iconic correlations between form and function as found in many constructions, particularly speech act constructions. The pragmatic motivations which are often grammaticalized into constructions do not simply vanish once they have left their imprint on those constructions, as Fritz has argued elsewhere, but I think they are a very important part of how speakers interpret, store, and use those constructions. Anyway, that's more than enough for today. I'll be delighted to hear any responses anyone may have to these thoughts. Jon Frederick Newmeyer wrote: > > and it remains to be seen whether any precision can be given to the > formula: semantics + processing = syntax." (Hawkins 1994: 439) > > While his formula (as he would I am sure agree) is too simple, I basically > agree with him. -- Jon Aske jaske at abacus.bates.edu http://www.bates.edu/~jaske/ -- Balantza duen aldera erortzen da arbola "The tree falls towards the side it's leaning." From bralich at HAWAII.EDU Fri Jan 10 20:08:47 1997 From: bralich at HAWAII.EDU (Philip A. Bralich, Ph.D.) Date: Fri, 10 Jan 1997 10:08:47 -1000 Subject: Another view Message-ID: At 08:38 AM 1/10/97 -1000, Wallace Chafe wrote: > >As for parsing, where this discussion began, it's useful in illuminating >some of the patterns that exist in the intermediate area between >meanings and sounds. But whether those patterns form an intact >skeleton that can be studied apart from the meat attached to it has >always seemed to me, at least, quite dubious. 
I am sure this is true for medicine as well: studying a skeleton removed from a body tells you very little about its interaction with the body, but we are missing a pretty significant area of study if we do not have orthopedics. The usefulness and meaningfulness of a skeleton that has been removed from its body is similar to the usefulness and meaningfulness of a syntax that has been removed from its larger functional setting. We can no more get rid of autonomous syntax or functional syntax than we can wholistic medicine or orthopedics. There is simply no debate here. We as linguists cannot ignore either one of these realities, whether we choose to specialize in syntax (orthopedics) or wholistic medicine (functional grammar). >In any case, if a machine >were ever truly to understand something that was said to it, its >understanding would have to be in cognitive, affective, and social >terms--in terms of all facets of human experience--which lie quite >beyond anything presently available in the computer world. But from the point of view of machines understanding language, we are 50-100 years away from that. Let's not insist on jet engines when we still haven't worked the bugs out of hot air balloons. For now, we can get computers to respond to significantly more language by coupling a completely worked-out syntactic analysis with the orthography. This will take us a step toward machine understanding. But first, let's do a full and proper analysis of small and medium sentences before we proceed to other areas. Everyone looking at parsing seems to want to begin with machines that are as fully capable of language as are humans. This is a mistake. Let's take the state of the art as it is and begin there. Let's not insist that medicine cure AIDS before we work on cuts and scratches, and let's not insist on full understanding before we work on parsers. Let's also not be fooled by the state of the art.
Insist on clear demonstrations of what is and is not possible with a parser, and then let's see where we can go from there. Phil Bralich Philip A. Bralich, Ph.D. President and CEO Ergo Linguistic Technologies 2800 Woodlawn Drive, Suite 175 Honolulu, HI 96822 Tel: (808)539-3920 Fax: (808)539-3924 From TGIVON at OREGON.UOREGON.EDU Fri Jan 10 20:24:02 1997 From: TGIVON at OREGON.UOREGON.EDU (Tom Givon) Date: Fri, 10 Jan 1997 12:24:02 -0800 Subject: back and forth Message-ID: Seems to me I hear another argument between the partly deaf. Everybody concedes that a correlation exists between grammatical structures and semantic and/or pragmatic functions. But two extremist groups seem to draw rather stark conclusions from the fact that the correlation is less-than-fully-perfect. The "autonomy" people seem to reason: "If less than 100% perfect correlation, therefore no correlation (i.e. 'liberated' structure)" The grammar-denial folks, all of them card-carrying functionalists, seem to reason: "If less than 100% generative and autonomous, therefore 100% functionally motivated" Now notice that both use a very similar Platonic reasoning: STRUCTURALISTS: "Something is functionally motivated only if it is 100% so" ANTI-GRAMMFUNK: "Something cannot have independently-manifested existence unless it is 100% so" This is of course a silly argument, and that's why people keep repeating the same reductionist extreme assertions again and again and again. The facts suggest that neither extreme position could be right. Grammar is heavily motivated by semantic/communicative functions. But -- because of grammaticalization and what Haiman calls 'ritualization' -- it never is 100% so. It acquires a CERTAIN DEGREE of its own independent life. This, however, does not mean 100% autonomy. And by the way, along the diachronic cycle of grammaticalization, you can see a construction changing its degree of iconicity.
So overall, in the aggregate of a synchronic grammar at any diachronic point, you can show constructions that are much better correlated to their functions and some that are much less so. To those of you who know something about biology (and sorry Fritz and Ellen, I don't count you among those...), this story of course looks rather familiar. It also looks familiar in respect to another topic Ellen raised (without calling it by its rightful name) -- that of cross-language typological variation. Yes, we do see the same communicative function coded in different languages by different grammatical constructions. That is because there is not only one way to perform a function; there are alternative choices. Think about the function of AMBULATION. There are four major ways bio-organisms seem to perform it: walking, slithering, flying and swimming. And among each of those there are minor sub-types. And each one of those is associated with its own -- highly adapted, rather specific -- correlated structures. Now, does that mean that structure is independent from function? Get real. Bloomfield and his cohorts thought that cross-linguistic typological variability suggested structures were 100% unconstrained by meaning. But most serious typologists are keenly aware of how CONSTRAINED is the range of syntactic types (major ones) that can perform the same communicative functions. In REL-clauses, for example, I have not been able to find more than 5-6 major types. In passivization (impersonal agent) maybe 4-5, etc. etc. This extreme paucity should be appreciated against the vast number of mathematically-possible types. The idea that 'universality' means 100% universality is another version of reductionist Platonic thinking. Species, biological populations, are defined by evolutionary biologists (see Futuyma's text, e.g.) as A CURVE OF DISTRIBUTION OF VARIANTS. These guys will tell you that VARIATION GUARANTEES EVOLUTION.
So the pernicious idea that somehow universality demands 100% uniformity is really a bit primitive. As is the idea that because there is more than one way of skinning a cat, ways of skinning are not closely dependent upon the task at hand -- skinning a cat. But of course, Aristotle in his brilliant rejection of structuralism in biology already said all that. One sometimes wonders why after 2300 years it seems nobody is listening. Y'all be good y'hear. TG From fjn at U.WASHINGTON.EDU Fri Jan 10 21:47:24 1997 From: fjn at U.WASHINGTON.EDU (F. Newmeyer) Date: Fri, 10 Jan 1997 13:47:24 -0800 Subject: Chess and Syntax In-Reply-To: Your message of Fri, 10 Jan 1997 13:02:23 -0500 (EST) Message-ID: Diego Quesada writes: > The analogy is not that felicitous. Fritz overlooks a > crucial aspect of these two putatively autonomous games > [it's revealing that an analogy was drawn from a game; > indeed formal linguistics sometimes seems no more than > an intellectual exercise for the sake of entertainment, > but that's another disk], namely that in order for every > piece to move, and how to move it, one needs to know > what it is; in linguistic terms this means that we need > to know what the MEANING of the combining element is; > otherwise one could have the horse move diagonally, the > tower jump in all directions and so on, just as > constituents could be shifted around irrespective of > what they mean. As G. Lakoff (if I remember correctly) > put it and as -I assume- all Funknetters think, as long > as any combination is determined by semantic content > there can be no autonomy. Saying that the 'meaning' of a rook is the ability to move in a straight line harkens back to the crudest use/instrumentalist theories of meaning that were rejected by virtually all linguists and philosophers of language decades ago and are *surely* rejected by 'cognitive linguists'.
It reminds me of things that people used to say long ago like 'the meaning of stops in German is to devoice finally' or 'the meaning of the English auxiliary is to front in questions'. So, as far as I can see, my chess analogy still holds (at the level of discussion). --fritz newmeyer From lmenn at CLIPR.COLORADO.EDU Fri Jan 10 21:32:57 1997 From: lmenn at CLIPR.COLORADO.EDU (Lise Menn, Linguistics, CU Boulder) Date: Fri, 10 Jan 1997 14:32:57 -0700 Subject: autonomous syntax In-Reply-To: <32D66923.ED1@abacus.bates.edu> Message-ID: Ann Peters gave me a very nice analogy for the diachronic relationship between form and function: it's like two legs loosely hobbled together - they have a certain amount of independence, so can go off in separate directions, but not very far before one pulls on the other. Lise Menn From jaske at ABACUS.BATES.EDU Fri Jan 10 23:12:31 1997 From: jaske at ABACUS.BATES.EDU (Jon Aske) Date: Fri, 10 Jan 1997 18:12:31 -0500 Subject: back and forth Message-ID: I think Tom is right on target. All this reminds me of an interesting quote in a recent interview with Ray Jackendoff, which suggests that the pursuit of autonomy was not really inevitable. It resulted really from not being able to come up with a way of making the form-function connection work right. He said: "If you look back at Syntactic Structures, Chomsky said, 'Semantics is semi-systematically connected with syntax - systematically enough that we want to account for it, but not systematically enough that we can use it as a key to determine how the syntax works. We have to do the syntax autonomously.' His program was to show we have to (and can) do the syntax autonomously. And he never really worried about a systematic connection to semantics. I think it was really Katz and Postal who forced him to it." (Ray Jackendoff, in conversation with John Goldsmith; in Huck and Goldsmith 1995:98-99).
-- Jon Aske jaske at abacus.bates.edu http://www.bates.edu/~jaske/ -- Balantza duen aldera erortzen da arbola "The tree falls towards the side it's leaning." From dquesada at CHASS.UTORONTO.CA Sat Jan 11 01:08:54 1997 From: dquesada at CHASS.UTORONTO.CA (Diego Quesada) Date: Fri, 10 Jan 1997 20:08:54 -0500 Subject: Chess and Syntax In-Reply-To: Message-ID: On Fri, 10 Jan 1997, F. Newmeyer wrote: > Saying that the 'meaning' of a rook is the ability to move in a straight line > harkens back to the crudest use / instrumentalist theories of meaning that were > rejected by virtually all linguists and philosophers of language decades ago and > are *surely* rejected by 'cognitive linguists'. It reminds me of things that > people used to say long ago like 'the meaning of stops in German is to devoice > finally' or 'the meaning of the English auxiliary is to front in questions'. Indeed, but who is saying that? I reiterate that the meaning of a constituent DETERMINES (in the strong version) or CONDITIONS (in the weak version) both its combinatorial properties and the functions it can perform in a language L. We thus come back to the present where, after decades of mechanicism, the essentials of language are given their deserved place. > So, as far as I can see, my chess analogy still holds (at the level of > discussion). Ubi supra. Diego From pesetsk at MIT.EDU Sat Jan 11 03:11:41 1997 From: pesetsk at MIT.EDU (David Pesetsky) Date: Fri, 10 Jan 1997 22:11:41 -0500 Subject: back and forth Message-ID: At 12:24 PM -0800 1/10/97, Tom Givon wrote: > Seems to me I hear another argument between the partly deaf. Everybody > concedes that a correlation exists between grammatical structures and > semantic and/or pragmatic functions. But two extremist groups seem to > draw rather stark conclusions from the fact that the correlation is > less-than-fully-perfect. The "autonomy" people seem to reason: > "If less than 100% perfect correlation, > therefore no correlation (i.e.
'liberated' structure)" Do "autonomy people" really reason like this? I don't think so. In fact, I think it's just the opposite. Isn't most of the research by "autonomy people" actually devoted to the hunch that there is a nearly *perfect* 100% correlation between grammatical structure and semantic/pragmatic function -- and that "less than 100%" correlations are actually 100% correlations obscured by other factors? - What, after all, is the functional category boom about, if not a (possibly overenthusiastic) attempt to investigate a 100% correlation hypothesis for properties like tense, agreement, topic, focus, and so on? - What was the motivation for the hypothesis of "covert movement" (LF movement), if not the hunch that the correlation between grammatical and semantic/pragmatic structure is tighter than it appears? - Why all the effort expended on the unaccusative hypothesis, the Universal Alignment Hypothesis, and Baker's UTAH, if not in service of the hypothesis that non-correlations between semantic function and grammatical form are only superficial? I think one might make the case that formalist "autonomy people" are among the most faithful functionalists. What divides linguists in this debate is not, I suspect, their faith in robust form-function correlations, but rather their hunches about the repertoire of factors that *obscure* these correlations. That's where many of us really do disagree with each other. -David Pesetsky ************************************************************************* Prof. David Pesetsky, Dept. 
of Linguistics and Philosophy 20D-219 MIT, Cambridge, MA 02139 USA (617) 253-0957 office (617) 253-5017 fax http://web.mit.edu/linguistics/www/pesetsky.html From M.Durie at LINGUISTICS.UNIMELB.EDU.AU Sat Jan 11 07:02:57 1997 From: M.Durie at LINGUISTICS.UNIMELB.EDU.AU (Mark Durie) Date: Sat, 11 Jan 1997 18:02:57 +1100 Subject: back and forth In-Reply-To: <9701110311.AA13894@MIT.MIT.EDU> Message-ID: Like others, I cannot resist throwing in my two bits' worth. 1. The 'form-meaning' terminology has become extremely confusing, because many 'formal' approaches treat (either explicitly or implicitly) at least some kinds of meaning as a kind of form (e.g. Jackendoff's conceptual structure, HPSG's treatment of meaning, the examples Pesetsky refers to in his recent posting, etc. etc.), in which semantic elements become yet another part of the system of grammar for which the linguist is seeking to explicate principles of 'well-formedness'. 2. In a related vein, it is also confusing to treat form-meaning and form-function as somehow equivalent wordings. The discussions have swung backwards and forwards between talking about 'function' and 'meaning'. Surely some kinds of meaning are pretty good candidates for being inside the structural system of language (i.e. having the character of 'form'), and others are obviously not. My first point reflects this. 3. Isn't Fritz's chess analogy precisely Saussure's? Saussure used the chess analogy to illustrate what he saw to be a clear-cut difference between what is 'in' langue (='grammar' in generativist terminology, I suppose) and what is out of it, and the separate nature of the internal system as such. Even Sapir described language as an 'arbitrary system'. Of course Saussure included (a certain kind of) meaning in his 'system', but then so do many formal approaches today, as I noted above. So the autonomy hypothesis in Fritz's sense (divorced from any consideration of innateness) is structuralism. Or have I completely misunderstood?
Mark. ------------------------------------ From: Mark Durie Department of Linguistics and Applied Linguistics University of Melbourne Parkville 3052 Hm (03) 9380-5247 Wk (03) 9344-5191 Fax (03) 9349-4326 M.Durie at linguistics.unimelb.edu.au http://www.arts.unimelb.edu.au/Dept/LALX/staff/durie.html From s_mjhall at EDUSERV.ITS.UNIMELB.EDU.AU Sat Jan 11 10:59:45 1997 From: s_mjhall at EDUSERV.ITS.UNIMELB.EDU.AU (michael hall) Date: Sat, 11 Jan 1997 21:59:45 +1100 Subject: Arabic Message-ID: I'd like to contact anyone with an interest in Arabic linguistics. My own interests lie in systemic functional linguistics and discourse analysis, but let's face it, Arabic linguists can afford to be fussy! Whatever your angle, drop me a line. Michael Hall. From dryer at ACSU.BUFFALO.EDU Sat Jan 11 14:06:42 1997 From: dryer at ACSU.BUFFALO.EDU (Matthew S Dryer) Date: Sat, 11 Jan 1997 09:06:42 -0500 Subject: Response to Newmeyer's response Message-ID: While I think that some of the recent discussion is terminological (different people are using the term "autonomy" in different ways), Fritz' response to my comments reflects a substantive difference that is fundamental to differences between functionalist and "formalist" approaches. Fritz says >>The sensitivity of speakers to the abstract (formal, structural) >>notion 'Phrase structure branching-direction', a notion that >>doesn't (as Dryer shows) correlate perfectly with semantic >>notions such as 'head-dependent' supports the idea that speakers >>mentally represent abstract phrase-structure relations >>INDEPENDENTLY of their semantic and functional >>'implementations'. If by this Fritz means that speakers represent the fact that different structures in their language involve the same branching direction, then this doesn't follow at all. 
My hypothesis is that languages with both left and right branching result in structures with mixed branching that in language USE are slightly more difficult to process, and that over the millennia this has influenced language change so that one finds crosslinguistic patterns reflecting a tendency toward a more consistent direction of branching. But this does not entail that the fact that different structures in the language employ the same direction of branching is itself represented by speakers. Rather, it simply means that speaking a language in which structures branch in the same direction will result in slightly fewer instances of individuals' failing to extract the intended meaning of an utterance. A common assumption of much formal work is that the explanations are built into speakers' representations of their knowledge of their language. But since functionalist explanations obtain at the level of language USE, they are not in general part of the representations themselves. Consider the following analogy from biology. Selection of combinations of features that are advantageous to survival leads to individuals with those features surviving more often. But the fact that that combination of features is advantageous to survival is not itself represented in the genetic code or in the structures that result from their being advantageous. Matthew From dever at VERB.LINGUIST.PITT.EDU Sat Jan 11 15:20:45 1997 From: dever at VERB.LINGUIST.PITT.EDU (Daniel L. Everett) Date: Sat, 11 Jan 1997 10:20:45 -0500 Subject: Intoning biology In-Reply-To: Message-ID: Folks, I would like to find a biological basis for human language/grammar as much as the next person, but we are going to get no closer to this goal if we talk aprioristically, nor if we make pronouncements on who does and who does not have a license to practice biological reasoning (as Tom Givon is wont to do). Matthew Dryer asks us to: > > Consider the following analogy from biology.
Selection of > combinations of features that are advantageous to survival leads to > individuals with those features surviving more often. But the fact > that that combination of features is advantageous to survival is not > itself represented in the genetic code or in the structures that > result from their being advantageous. > > Matthew Are you sure that the genetic code has no syntax? But we are getting ahead of ourselves here. It is true that our research on human language must be constrained by well-established parameters of biologically relevant research, if that is what we want to relate to after all is said and done (works like the new _Evolution of Communication_ by Marc Hauser are thus a service to us all). But the starting point within these parameters must be to first offer an account of the phenomena that one thinks to be in need of explanation in the specific domain, e.g. grammar. To the degree that such an account predicts other phenomena and leads to a rich network of explananda and explanantia that light our path in new exploration (melodramatic metaphor, but it is Saturday morning after all) we have something worth considering. Only after we have reached this point can we begin to discuss meaningfully the biological significance or implementation of our account. None of us, not even Tom G, knows what a biological account of the facts will look like until we agree on the facts to be explained, construct accounts for them, and evaluate the relative worth of the competing accounts. The point of my previous posting on methodology was to focus on how we do research and evaluate it within our domains of interest. If we cannot agree on the empirical success of competing accounts (or even what are competing accounts) within grammar, we certainly cannot argue about which one is biologically more plausible. Autonomy of syntax has to be evaluated wrt grammatical explanations.
-- DLE From edith at CSD.UWM.EDU Sat Jan 11 20:28:59 1997 From: edith at CSD.UWM.EDU (Edith A Moravcsik) Date: Sat, 11 Jan 1997 14:28:59 -0600 Subject: form versus meaning Message-ID: Here is another two-bit-worth of contribution. On Thursday, January 9, Jon Aske wrote the following: "But the real question when it comes to autonomy vs. non-autonomy, is whether the constructions of a language can, or should, be described independently of the semantic and pragmatic meanings which they are used to express, and independently, for instance, of the iconic and universal principles, such as topic-comment or comment-topic, on which they are sometimes based." "I don't think they can and I don't think they should. The simple reason for this is that I do not think that that is how humans learn or store the constructions of a language. Form is always stored and intimately connected to function and that is how it should be described and analyzed." I detect a slight non-sequitur here unless a hidden assumption is made explicit. If the goal is to describe how people learn and store constructions - and provided that it is indeed true that this happens always in terms of form and meaning being inseparable - then, indeed, the forms of constructions cannot and should not be described without their meanings. If, however, the goal is not to describe HOW people learn and store constructions but, rather, WHAT it is that people learn and store, then there is nothing wrong with trying to describe form in separation from meaning. It seems, in fact, that the description of syntactic form without regard to meaning is both possible and necessary. This is shown as follows. 
a/ THE POSSIBILITY OF DESCRIBING FORM AND MEANING SEPARATELY That describing syntactic form only is possible is shown by the fact that, given a set of sentences from a language with word boundaries indicated but no glosses provided, one can write a syntax by specifying the distribution (i.e., cooccurrence and order patterns) of the words. In order to make the description general, one will want to lump words into classes. These classes will, by definition, be syntactic categories (rather than semantic ones) in the sense that they were arrived at on the basis of purely syntactic (=distributional) information. Once the meanings of the sentences are also considered, some of the syntactic categories utilized in the description may turn out to be congruent with semantic classes while others are likely not to. I think this is what the autonomous versus non-autonomous syntax debate is all about: whether there are any syntactic classes that are not also semantic ones (which is what autonomous syntax claims) or whether all syntactically arrived-at categories coincide with classes of meaning elements. - However, the existence of "purely syntactic categories" in the above sense (i.e., in the sense that they are discoverable solely on the basis of syntactic evidence) is independent of whether they also happen to be meaningful or not. The whole thing is analogous to a proverbial Martian coming to Earth and undertaking to describe traffic signs. He will be able to give an account of the signs without knowing what meanings they stand for, by simply delimiting the basic graphic symbols and stating rules of their cooccurrence and arrangement on the sign boards. Once he learns what each sign means, he will discover that some of his classes arrived at on the basis of form patterns are meaningful while others (such as a line forming a frame around the signs) are not.
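[Editor's aside: the distributional procedure described above -- delimiting word classes purely from cooccurrence and order, with no glosses consulted -- can be sketched in a few lines. This is a toy illustration, not from the original posting; the corpus and the identical-distribution criterion for class membership are invented for the example.]

```python
# A toy sketch of the Martian's procedure: inducing word classes
# from distribution alone, without consulting meanings.
from collections import defaultdict

corpus = [
    "the cat sleeps", "the dog sleeps",
    "the cat runs", "the dog runs",
    "a cat sleeps", "a dog runs",
]

# Record each word's set of left and right neighbours (its "distribution").
lefts, rights = defaultdict(set), defaultdict(set)
for sentence in corpus:
    words = ["<s>"] + sentence.split() + ["</s>"]
    for prev, word, nxt in zip(words, words[1:], words[2:]):
        lefts[word].add(prev)
        rights[word].add(nxt)

# Words with identical distributions form purely syntactic classes:
# only cooccurrence and order were used, never glosses.
classes = defaultdict(list)
for word in lefts:
    classes[(frozenset(lefts[word]), frozenset(rights[word]))].append(word)

for members in classes.values():
    print(sorted(members))   # ['a', 'the'] / ['cat', 'dog'] / ['runs', 'sleeps']
```

Whether the three induced classes then turn out to line up with semantic classes (determiners, animate nouns, intransitive verbs) is, on Moravcsik's account, a separate empirical question.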
b/ THE NECESSITY OF DESCRIBING FORM AND MEANING SEPARATELY Apart from the fact that one syntactic form can go with more than one alternative meaning and the same meaning can be expressed by alternative syntactic forms (and apart from other strictly structural evidence regarding mismatches between meaning structures and syntactic form), it seems that a description of syntactic constructions with form and meaning separately represented is necessary also as a supplement to performance-oriented descriptions of the sort Jon Aske referred to (where what is shown is that people learn and store forms along with meanings). This is because in order for a fact to become an explanandum, we must be able to see alternatives to it - ways in which things COULD be but are not. Thus, in order for us to be able to ask "WHY do people learn and store forms with meanings?", we have to realize that form and meaning are separate things and, in principle, they could be learned and stored separately. This logically possible but empirically non-occurrent option is what the description of syntactic constructions (as opposed to the description of how such constructions are processed by people) supplies when it shows meaning and syntactic form as separate entities. ************************************************************************ Edith A. Moravcsik Department of Linguistics University of Wisconsin-Milwaukee Milwaukee, WI 53201-0413 USA E-mail: edith at csd.uwm.edu Telephone: (414) 229-6794 /office/ (414) 332-0141 /home/ Fax: (414) 229-6258 From bates at CRL.UCSD.EDU Sat Jan 11 21:10:21 1997 From: bates at CRL.UCSD.EDU (Elizabeth Bates) Date: Sat, 11 Jan 1997 13:10:21 -0800 Subject: form versus meaning Message-ID: In response to Edith Moravcsik's message, I think there is one more hidden assumption: that items either are, or are not, members of a syntactic class, and if they are members, they are members to the same degree.
This is a classic approach to category membership, but its psychological validity is highly questionable, after more than two decades of research on prototypicality effects, fuzzy boundaries, ad hoc categories, context-dependent categorizations, and so on. As it turns out (vis-a-vis our Martian visitor), native speakers give highly variable judgments of syntactic well-formedness depending on the relative degree of "verbiness" of a verb, "nouniness" of a noun, "really-good-subjects" vs. "not-so-great subjects", and so on. It has proven very difficult to explain these variations without invoking something about the semantic content of the item in question, and/or its pragmatic history, frequency of use, etc. Now, one can always make the classic move (since 1957) of ascribing all those variations to "performance," salvaging one's faith in the purity of the underlying competence. But any research program that sets out to describe competence "first" and deal with performance "later" is going to run into trouble, because pure data that give us direct insights into competence are simply not available. That is precisely why we are all still having this argument. And while we are at it, I am puzzled by the suggestion that we should describe language "first" before any investigation of its biology can be carried out. Should physics be complete before we attempt chemistry? Must biological research stop until we have discovered all the relevant facts from chemistry? Why does linguistic description (field linguistics or self-induced grammaticality judgments) have priority over any other approach to the study of language? Is psycholinguistics a secondary science? Should research on aphasia come to a halt until we know exactly what it is that the aphasic patient has lost? It seems to me that we need all the constraints that we can get, and that all levels of inquiry into the nature of language are valid and mutually informative. 
The key is to be sure we know which level we are working on. For example, I believe that claims about innateness are biological claims, that require biological evidence. Proof that a given structure is (or is not) universal may be quite interesting and useful to someone who is investigating the biological underpinnings of human language abilities (genetic, or neural), but proofs of universality do not constitute ipso facto evidence for the innateness of some domain-specific linguistic structure, because that structure may be the inevitable but indirect by-product of some constraint (e.g. from information-processing) that is not, in itself, linguistic (e.g. as Matt Dryer points out, some kind of memory constraint). To untangle such problems, many different kinds of evidence will be required, and none of them should be granted priority over the others. -liz bates From kilroe at CSD.UWM.EDU Sat Jan 11 21:55:54 1997 From: kilroe at CSD.UWM.EDU (patricia kilroe) Date: Sat, 11 Jan 1997 15:55:54 -0600 Subject: meaning without form Message-ID: I am more interested in meaning than in form. Granted that form can be studied in disassociation from meaning. How, though, to describe formless meaning, or to make reference to meaning without getting snared by notions that presuppose form (prototypes, categories, attributes)? P. Kilroe From cleirig at SPEECH.SU.OZ.AU Sat Jan 11 22:56:59 1997 From: cleirig at SPEECH.SU.OZ.AU (Chris Cleirigh) Date: Sun, 12 Jan 1997 09:56:59 +1100 Subject: amendment to liz bates' comment Message-ID: liz bates wrote: >And while we are at it, I am puzzled by the suggestion that we should >describe language "first" before any investigation of its biology can >be carried out. Should physics be complete before we attempt chemistry? A more congruent question would be: Should chemistry be complete before we attempt physics? On a hierarchy of emergent complexity, chemistry sits above physics just as linguistics sits above biology.
A lot of "chemistry" was described before the discipline established its relations with physics. I believe it was called "alchemy". chris From dever at VERB.LINGUIST.PITT.EDU Sat Jan 11 23:02:59 1997 From: dever at VERB.LINGUIST.PITT.EDU (Daniel L. Everett) Date: Sat, 11 Jan 1997 18:02:59 -0500 Subject: form versus meaning In-Reply-To: <199701112110.NAA11937@crl.UCSD.EDU> Message-ID: On Sat, 11 Jan 1997, Elizabeth Bates wrote: > In response to Edith Moravcsik's message, I think there is one more > hidden assumption: that items either are, or are not, members of a > syntactic class, and if they are members, they are members to the > same degree. This is a classic approach to category membership, but > its psychological validity is highly questionable, after more than > two decades of research on prototypicality effects, fuzzy boundaries, > ad hoc categories, context-dependent categorizations, and so on. The concern here is misplaced. The fact that some words, as pointed out long ago by Ross, can behave as nouns or verbs, or even as more or less 'nouny' or 'verby', is neither something that formal linguists are unaware of nor something that they account for via performance. The question is whether their behavior in specific constructions can be accounted for by discrete, explicit constraints. The answer is, yes. No appeal to performance is needed. > > And while we are at it, I am puzzled by the suggestion that we should > describe language "first" before any investigation of its biology can > be carried out. Would you want to theorize on the evolution of the hand before you understood how hands work? This is a bizarre statement, Liz. Should physics be complete before we attempt chemistry? No, and why do you ask? Oh, I know, you believe that real science must reduce: chemistry to physics, biology to chemistry to physics, etc. That is an empirical hypothesis, not a self-evident fact, as seems to be so commonly believed in San Diego.
>Why does linguistic description (field linguistics > or self-induced grammaticality judgments) have priority over any other > approach to the study of language? Because core linguistics is after understanding of the subject matter - other approaches assume it (and if they neither understand it nor assume it, they are pointless). > Is psycholinguistics a secondary science? It is certainly derivative and not a primary field of inquiry. This is why it has linguistics built into it - it can only work to the degree that it understands language, at least in most cases I have looked at (processing, acquisition, reading theory, and discourse). > Should research on aphasia come to a halt until we know > exactly what it is that the aphasic patient has lost? The issue is not knowing what has been lost so much as being able to tell eventually what has been lost. Without a clear understanding of morphosyntax and phonology, yes, it would be premature to say much about aphasia. But we do know enough about language/grammar for aphasia research to take place and for mutual growth in both fields to take place. > It seems to > me that we need all the constraints that we can get, and that all > levels of inquiry into the nature of language are valid and > mutually informative. The key is to be sure we know which level > we are working on. You cannot know a level by fiat in science, only by having its constraints, structures, and units worked out. Without that, you will not know which level you are on. That said, I agree with you. > For example, I believe that claims about innateness > are biological claims, that require biological evidence. 
Proof that > a given structure is (or is not) universal may be quite interesting and > useful to someone who is investigating the biological underpinnings > of human language abilities (genetic, or neural), but proofs of > universality do not constitute ipso facto evidence for the innateness > of some domain-specific linguistic structure, because that > structure may be the inevitable but indirect by-product of > some constraint (e.g. from information-processing) that is not, > in itself, linguistic (e.g. as Matt Dryer points out, some kind > of memory constraint). To untangle such problems, many different > kinds of evidence will be required, and none of them should be > granted priority over the others. -liz bates But you are already at the biological level. At that level, yes, all we know about biology and language should be used. But prior to that level (conceptually, not chronologically) we need to understand language/grammar first (remember, Chomskian linguistics does not study language, it studies grammar). -- DLE From bralich at HAWAII.EDU Sat Jan 11 23:10:26 1997 From: bralich at HAWAII.EDU (Philip A. Bralich, Ph.D.) Date: Sat, 11 Jan 1997 13:10:26 -1000 Subject: form versus meaning Message-ID: At 11:10 AM 1/11/97 -1000, Elizabeth Bates wrote: > ... any research program that sets out >to describe competence "first" and deal with performance "later" is >going to run into trouble, because pure data that give us direct >insights into competence are simply not available. That is precisely >why we are all still having this argument. There will never be any pure data that give direct insights into competence any more than there will ever be a tool by which we can measure nounness or verbness. The nature of linguistics (except for phonology), like the nature of psychology (when it refers to the study of the psyche--mental phenomena), is indirectly observable only. This, however, does not mean we cannot do research on linguistics or make statements about competence. 
I realize I am begging the rather obvious comment that empirical science must deal with directly observable phenomena only, and that therefore the study of competence, and most of syntax for that matter, is not science. But this is only if we follow the belief that the only relevant objects of scientific study are those which are directly observable, and that those of the indirectly observable variety, i.e. syntax and nounness, verbness, competence and so on, are not science. However, if we accept that any observable phenomenon is a proper object of empirical investigation, then competence, syntax, nounness and verbness are all back on the table in their full splendor and we are once again able to discuss competence without embarrassment. These statements may seem somewhat barbarous in the light of current thinking, but it seems to me that if we really want to throw out items that are only indirectly observable (such as competence and syntax and nounness and verbness) then we must also throw out science's favorite tool of measurement, mathematics, because there is nothing in it that is directly observable. So, whether or not there is direct evidence for competence, it is still a valid object of scientific investigation, whether we deal with it first or later. Phil Bralich From TGIVON at OREGON.UOREGON.EDU Sun Jan 12 00:05:34 1997 From: TGIVON at OREGON.UOREGON.EDU (Tom Givon) Date: Sat, 11 Jan 1997 16:05:34 -0800 Subject: response to Dan E. Message-ID: Actually, the DNA code has a very rich syntax, since there is the "lexical node" level, where triplets of nucleotides map directly to amino acids in the protein chain. Then there are -- in the sequence -- nucleotides (or segments of several nucleotides) that "govern" segments of the latter, by blocking, turning on/off, etc. And there are a variety of "filler" nucleotide segments whose function is much less clear and certainly more global. 
I am not an expert on the details, but I cited several papers on this issue in my discussion of the degree of abstractness of grammar. Broadly speaking, this corresponds to the increase of abstractness going "upward" from lexical nodes, to phrasal nodes, to clausal nodes, to complex-clause nodes. But of course, the analogy is far from complete. But broadly speaking, DNA is just as rhythmic-hierarchic (while given in a linear sequence) as music or language. One of the lousiest things about both structuralists & functionalists in linguistics is that they don't understand the correlation between degree of abstraction of functional nodes & degree of abstraction of structural nodes that correlate with them. The distinction between "more local" and "more global" functions is precisely what is involved (as in DNA structure...). And in my work on local vs. global coherence in discourse I have tried to point this out at the functional level. As for "licensing" people to practice biology, Dan, all I can do is observe the PROFOUND ignorance linguists seem to exhibit on the subject. While I lost my license when I quit molecular biology in 1964, I keep reading in order to try & understand what is going on. It behooves others who want to "practice" on a regular basis to maybe do the same. Best, TG From bates at CRL.UCSD.EDU Sun Jan 12 02:19:00 1997 From: bates at CRL.UCSD.EDU (Elizabeth Bates) Date: Sat, 11 Jan 1997 18:19:00 -0800 Subject: form versus meaning Message-ID: The question was not whether competence should be studied -- it should. And many sciences have to deal with indirect evidence. My complaint had to do with the idea that we can somehow study competence "first" and get to the other (what Dan Everett unabashedly calls) "derivative" phenomena later. I don't think we can. We have to do them together, or it will be impossible to understand the data that are supposed to provide insights into competence. 
Grammaticality judgments, for example, are a kind of performance, but we understand remarkably little about grammaticality judgment as a psychological process, and for that reason, the primary data base of generative linguistics is sometimes pretty shaky. It certainly isn't sound enough to stand alone, and it is not sound enough to serve as the core for any other avenue of study. It is one kind of evidence, but only one, and I am really amazed that anyone thinks that we should pursue that avenue before anything else is done. This is territorial imperialism. -liz bates From john at RESEARCH.HAIFA.AC.IL Sun Jan 12 08:51:11 1997 From: john at RESEARCH.HAIFA.AC.IL (John Myhill) Date: Sun, 12 Jan 1997 10:51:11 +0200 Subject: What is this dispute anyway? Message-ID: I don't know about anyone else, but what I find most striking about the current discussion is that, after what seems like an almost interminable silence on funknet (several times in the past year I have thought that maybe I unsubscribed by accident), people have been aroused to involvement not by anything directly related to language but by an abstract ideological dispute where even the basic terms are interpreted in such a variety of ways that there is no hope of even achieving a mutual understanding, let alone a resolution. Shouldn't we be a little bit concerned that people with different leanings interpret the same data in opposite ways according to what they regard as their view of language (e.g. Newmeyer vs. Dryer, Prince vs. Aske in the current debate)? What this suggests to me is that maybe these views are essentially untestable and a matter of faith. If they were testable and people agreed on the data, assuming (as I do) that they are intelligent people and can follow logical arguments, how could they so consistently come to opposite conclusions about the theoretical implications of these data? 
If people's views on (however they construe) the `autonomy debate' are a matter of faith, I think this discussion is a waste of time, and if they are not a matter of faith, why doesn't anyone seem to be convinced by any data to change their position? Does anyone know of a single linguist who has changed his/her position either way (after completing graduate school, let's say) on the `autonomy debate'? Has anyone seen any data which has made them think 'aha, I used to hold position X on this debate, but now I think position Y is correct'? Will anyone publicly admit to this? If so, I would like to hear what data caused such a conversion; at least we would have some evidence which *someone* found convincing enough at some point to admit they had been wrong. If there are few or no linguists who have experienced such a conversion (again, after, say, graduate school), I would like to suggest that we should consider the possibility that maybe this is because both sides of this argument have defined their hypotheses in such a way that no data can falsify them. The argument looks basically like a matter of faith (or a 'hunch', which I believe was David Pesetsky's word), with the typical characteristics of such a dispute, in particular reference to poorly understood Higher Authorities (the Hard Sciences, in this case, Mathematics on one side, Biology on the other) to which some participants are claiming to have a Direct Line. 
40 years ago, to the eternal discredit of linguistics, Chomsky managed to fool enough linguists into believing that he was a `Mathematician' that he made a research space for himself (he prudently gave up this claim after he had made this space and began to come into contact with real mathematicians who might publicly call his bluff, in the unlikely event that they paid any attention to him at all), but the cost was that his type of linguists have been third-rate `Mathematicians' ever since then, complete outsiders in the humanities and social sciences (where they are institutionally located everywhere but MIT) and not taken seriously by the Real Sciences, and as a result they will be fighting for their institutional lives during the inevitable coming budget cuts. If the interest on funknet in the current autonomy debate as opposed to actual analysis of language is any indication, I am enormously concerned that some functional linguists are doing the same thing now, parading vague ideology/`theory' instead of doing real analysis, using Biology, Evolution, and The Brain instead of Mathematics. I personally am in this field because I like analyzing language, and I think it is pathetic to substitute vague speculations based upon third-hand and/or 30-year-old knowledge of other disciplines for doing actual analysis of language. We're linguists, guys, this is what we know about, this is our livelihood, and if we don't start acting like linguists, we aren't going to be anything at all soon. John Myhill From dryer at ACSU.BUFFALO.EDU Sun Jan 12 13:11:17 1997 From: dryer at ACSU.BUFFALO.EDU (Matthew S Dryer) Date: Sun, 12 Jan 1997 08:11:17 -0500 Subject: More on autonomy Message-ID: Although the discussion of "autonomy" has faded somewhat the past 24 hours, there is a serious terminological confusion that I think remains unclarified. 
The term "autonomy" has been used in two different ways, and this has led to apparent disagreement when in fact there has just been misunderstanding. In an earlier message, I distinguished three senses, but since innateness is not an issue in the current discussion, let me narrow it down to the crucial two senses: (1) (strong) sense: One can explain syntactic facts in terms of syntactic notions (2) (weak) sense: Syntax/grammar exists. Syntax has, at least to a certain extent, a life of its own. What makes me acutely aware of these two senses is that I sometimes use the term "autonomy" one way, and sometimes the other way, largely as a function of how the person I'm talking to uses it. In particular, in various discussions with Tom Givon in recent years, I've used it the way he uses it, in the strong sense. And, in various discussions with Fritz Newmeyer in recent years, I've used it the way he uses it, in the weak sense. The real irony is that in recent years, both Tom and Fritz have been giving arguments in print for autonomy in the weak sense, except that Fritz calls it autonomy and Tom doesn't. But whatever you call it, it is clear when you look closely at what they've each written about it, that they are arguing for the same thing. They're both arguing against those functionalists who deny, or who seem to deny, that syntax/grammar even exists. Unfortunately, as I have pointed out to Fritz on occasion, he sometimes strays back and forth between the two senses (presumably because he believes in autonomy in both senses), and while what he is usually arguing for is simply autonomy in the weak sense, he sometimes either argues for autonomy in the strong sense (as in his recent response to my message), or treats an argument for autonomy in the weak sense as if it were an argument for autonomy in the strong sense. But this should not obscure the extent to which Fritz and Tom have been arguing for the same thing. 
Furthermore, since Fritz invoked Jack Hawkins, saying "Hawkins is absolutely explicit that parsing explanations are compatible with autonomy", it must be stressed that Hawkins (like Fritz) is using "autonomy" in the WEAK sense, and, as far as I know from Hawkins' writings and my own discussions with him, Hawkins does not believe in autonomy in the strong sense. The general moral is: let's not get hung up on the form (the expression "autonomy of syntax") but let's look at the function (i.e. meaning) to which this expression is being put. Matthew From dever at VERB.LINGUIST.PITT.EDU Sun Jan 12 14:16:03 1997 From: dever at VERB.LINGUIST.PITT.EDU (Daniel L. Everett) Date: Sun, 12 Jan 1997 09:16:03 -0500 Subject: form versus meaning In-Reply-To: <199701120219.SAA12991@crl.UCSD.EDU> Message-ID: Territorial imperialism? I like the ring of that. That pretty well describes what goes on at most universities across the country as Chairs argue for resources from Deans. But that is another matter. Here is all I am saying: To make the statement in (i) presupposes (ii): (i) 'I have discovered the historical/biological/psychological/sociological basis for X'; (ii) I (or somebody I am talking to) understands X. If that is territorial imperialism, then my mental lexicon will have to open new files for those terms. -- Dan P.S. Tom is right when he says that linguists are profoundly ignorant about biology. So are biologists when they are honest. But as George Miller said in Lg. a few years back, we would all benefit from knowing more about everything. So that is a truism. And I know that Tom had graduate training in microbiology. Linguists are also ignorant of theology (my other training), but I am not going to ask them to consider theological arguments - but cf. Mark Baker's chapter in The Polysynthesis Parameter. 
****************************** ****************************** Dan Everett Department of Linguistics University of Pittsburgh 2816 CL Pittsburgh, PA 15260 Phone: 412-624-8101; Fax: 412-624-6130 http://www.linguistics.pitt.edu/~dever From jaske at ABACUS.BATES.EDU Sun Jan 12 15:47:30 1997 From: jaske at ABACUS.BATES.EDU (Jon Aske) Date: Sun, 12 Jan 1997 10:47:30 -0500 Subject: What is this dispute anyway? Message-ID: I was distracted for one day and now there is so much I would like to reply to. I will keep it brief though. Or you can just press the D key right now. To Patricia K.: I don't see how you can study "meaning" in the abstract any more than you can study "form" in the abstract, other than in very limited ways. Through typological comparison we can get a sense of what meanings and functions are important for humans to communicate and get expressed in language, more or less frequently, both lexically and grammatically, and how the different instantiations vary. But I don't believe those meanings and functions exist in an abstract (least-common denominator) form. So when you study meaning you have to deal with form-meaning units, i.e. lexemes and constructions. There is just no other way about it. To Edith M.: I just don't see why we would want to restrict our linguistic analysis to the more formal aspects of constructions and ignore their function/meaning pole, their history, and so on. I haven't seen the point since my undergraduate days 15 years ago. I remember going through all those transformations and wondering why things transform themselves into other things, e.g. what was the point of a passive or a dative-shift construction, and why some languages had those alternative constructions and others didn't. To me, studying the formal aspects of such constructions without looking at what they are made for, how they are made, etc. *as a matter of principle*, just doesn't make sense. 
I came to the early realization that these constructions should not be studied as merely formal operations. These constructions exist for a purpose, and their form reflects the function that they arose for in the first place, even if they have picked up additional baggage along the way. And to me that is the most interesting part of analyzing language/grammar. I realize that our present inability to predict the use of, for example, even the English passive, makes a lot of people skeptical that the study of meaning/function along with form is a realizable goal. And I realize that until we get a firm hold of the meaning/function of particular constructions, we may need to play around with formal constraints (e.g. constraints on 'extraction') and correlations found on those constructions. But along with formal constraints we need to analyze semantic and other functional constraints and correlations. We just can't separate the two. Separating the two as a matter of principle, to see how far we can go, or some other such reason, to me is unconscionable. David P. tells us that now they use "functional" categories in their grammatical theory. Well, I'm glad you finally figured out that functions are important to linguistic analysis, even if it is just a handful of the wrong kind of functions. But, as you said, your functions are nothing like our functions. Your functions are abstract, pristine, pure, and perhaps even innate. Your language systems are clockwork mechanisms that have a pure form where things move up and down, branching direction is fully consistent, etc. Then, for some strange reason, things get muddled on the surface, maybe a particular construction decides to branch in an inconsistent way and "scrambling rules" mess up an otherwise neat and underlyingly perfect system. Well, I just do not believe that that is what languages are like. Languages are not underlyingly pristine and then they get messy. They are just plain messy to begin with. 
And that makes sense once you realize how it is they got that way. The functions of language instantiated in grammar, which you reduce to a handful of abstract formal-functional categories, do not come to us in abstract and pristine form, and they are a lot more than that handful, and they interact with each other in much more complex and richer ways than you can imagine, ... Well, I'll leave it at that. To John M.: I understand your frustration and your points are well taken. Still, I don't see any harm done in these periodic outbursts on linguistics lists. Anything which means contact between members of different linguistics schools, even if it is in a dark room, is welcome, as I see it. I too would like to see more data-oriented discussions on this list (any takers?), and I would like to figure out why they don't take place, but that is no reason to stop the other type of discussion. If they come up, it must be for a reason. It sounds to me like you have it all figured out, but there are a lot of people out there who don't (students, for instance). And if the problems are political, that needs to be aired out too. After all, you also seem to have very strong feelings about some of these things and decided to air them out, rather than stop your posting somewhere around the middle. Anyway, I've used up my allotted space for about a week, so I'm stopping here. Do keep it coming. And let's keep it civil. Best, Jon -- Jon Aske jaske at abacus.bates.edu http://www.bates.edu/~jaske/ -- Nolako egurra, halako sua "Such as is the wood, thus will be the fire." From pesetsk at MIT.EDU Sun Jan 12 17:05:17 1997 From: pesetsk at MIT.EDU (David Pesetsky) Date: Sun, 12 Jan 1997 12:05:17 EST Subject: What is this dispute anyway? In-Reply-To: <32D90792.563C@abacus.bates.edu> Message-ID: At 3:47 PM -0000 1/12/97, Jon Aske wrote: > David P. tells us that now they use "functional" categories in their > grammatical theory. 
Well, I'm glad you finally figured out that > functions are important to linguistic analysis, even if it is just a > handful of the wrong kind of functions. But, as you said, your functions > are nothing like our functions. Your functions are abstract, pristine, > pure, and perhaps even innate. Your language systems are clockwork > mechanisms that have a pure form where things move up and down, > branching direction is fully consistent, etc. Then, for some strange > reason, things get muddled on the surface, maybe a particular > construction decides to branch in an inconsistent way and "scrambling > rules" mess up an otherwise neat and underlying perfect system. > Well, I just do not believe that that is what languages are like. > Languages are not underlyingly pristine and then they get messy. Do you really think that's what I said? -David Pesetsky ************************************************************************* Prof. David Pesetsky, Dept. of Linguistics and Philosophy 20D-219 MIT, Cambridge, MA 02139 USA (617) 253-0957 office (617) 253-5017 fax http://web.mit.edu/linguistics/www/pesetsky.html From lmenn at CLIPR.COLORADO.EDU Sun Jan 12 17:30:18 1997 From: lmenn at CLIPR.COLORADO.EDU (Lise Menn, Linguistics, CU Boulder) Date: Sun, 12 Jan 1997 10:30:18 -0700 Subject: amendment to liz bates' comment In-Reply-To: <199701112256.JAA21823@fortis.speech.su.oz.au> Message-ID: That's a totally unwarranted slap. You think Lavoisier used Galileo's results to discover oxygen? Yes, it's true that we don't know what the primitives of brain function are yet; but we still have to study that function with all available tools. No matter how much a neurophysiologist knows about the brain, she can't design an experiment to look at on-line sentence processing without the collaboration of linguists and psycholinguists. 
On Sun, 12 Jan 1997, Chris Cleirigh wrote: > liz bates wrote: > > >And while we are at it, I am puzzled by the suggestion that we should > >describe language "first" before any investigation of its biology can > >be carried out. Should physics be complete before we attempt chemistry? > > A more congruent question would be: > > Should chemistry be complete before we attempt physics? > > On a hierarchy of emergent complexity, chemistry sits above physics > just as linguistics sits above biology. > > A lot of "chemistry" was described before the discipline established > its relations with physics. I believe it was called "alchemy". > > chris > From lmenn at CLIPR.COLORADO.EDU Sun Jan 12 17:48:37 1997 From: lmenn at CLIPR.COLORADO.EDU (Lise Menn, Linguistics, CU Boulder) Date: Sun, 12 Jan 1997 10:48:37 -0700 Subject: form versus meaning In-Reply-To: <2.2.16.19970110131232.2df73be6@pop-server.hawaii.edu> Message-ID: The way to treat concepts that are not directly accessible (e.g. competence, nouniness, topic, empathy, foreground...) is as 'intermediate constructs'; one tests them by varying some factor(s) that can be manipulated or measured (e.g. how do people rate the attractiveness or humanness of the proposed empathic focus?) and looking at some measurable outcome (e.g. how often do people make the proposed empathic focus the first referent mentioned in a description of an event when that referent is the undergoer?). An intermediate construct gets validated if it helps in making such predictions; it quietly vanishes away, like the Boojum or ether or phlogiston, if it stops being useful. And concepts may be useful for one purpose long after they have stopped being useful for others. I'm very skeptical about 'competence' existing apart from the performances in which it is demonstrated, but it would be absurd to write a reference grammar or a dictionary in terms of 'performance' alone. Lise Menn On Sat, 11 Jan 1997, Philip A. Bralich, Ph.D. 
wrote: > At 11:10 AM 1/11/97 -1000, Elizabeth Bates wrote: > > ... any research program that sets out > >to describe competence "first" and deal with performance "later" is > >going to run into trouble, because pure data that give us direct > >insights into competence are simply not available. That is precisely > >why we are all still having this argument. > > There will never be any pure data that give direct insights into competence > any more than there will ever be a tool by which we can measure nounness or > verbness. The nature of linguistics (except for phonology), like the nature > of psychology (when it refers to the study of psyche--mental phenomena) is > indirectly observable only. This, however, does not mean we cannot do > research on linguistics or make statements about competence. > > I realize I am begging the rather obvious comment that empirical science > must deal with directly observable phenomena only and therefore the study of > competence and most of syntax for that matter is not science. But this is > only if we follow the belief that the only relevant objects of scientific > study are those which are directly observable and those of the indirectly > observable variety, i.e. syntax and nounness, verbness, competence and so on > are not science. However, if we accept that any observable phenomenon is a > proper object of empirical investigation, then competence, syntax, nounness > and verbness are all back on the table in their full splendor and we are > once again able to discuss competence without embarrassment. > > These statements may seem somewhat barbarous in the light of current > thinking, but it seems to me if we really want to throw out items that are > only indirectly observable > (such as competence and syntax and nounness and verbness) then we must also > throw out science's favorite tool of measurement, mathematics, because there is > nothing in it that is directly observable. 
So, whether or not there is > direct evidence for competence, it is still a valid object of scientific > investigation whether or not we > deal with it first or later. > > Phil Bralich > From pesetsk at MIT.EDU Sun Jan 12 18:02:34 1997 From: pesetsk at MIT.EDU (David Pesetsky) Date: Sun, 12 Jan 1997 13:02:34 EST Subject: What is this dispute anyway? In-Reply-To: Message-ID: I think one can improve on Myhill's diagnosis. There is widespread agreement in principle on the range of possible explanations a linguistic phenomenon might receive. Everyone agrees that facts about language might be due to a specific property of language, to properties that language shares with other functions, to historical factors, and to interactions among any of the above. The central disagreement seems to concern the default explanations we accord to specific linguistic phenomena that are not well understood (i.e. most of them). Suppose we find a fact of language which we can characterize fairly well (but not completely) in language-internal terms. Do we assume that the fact as a whole is language-specific until proven guilty of non-specificity? Or should it be the other way around? Our personal answers to these questions do indeed reflect our hunches and wishes. But we also recognize, presumably, that our goal is to move beyond hunches and wishes to discover the truth of the matter. Being human beings, however, we are easily distracted. We shift all too easily from research-generating propositions like: 1. "I think the explanation lies (partly, entirely) in area X." 2. "As an expert in area X, I can best contribute to research by investigating possible explanations in area X." to research-stifling propositions like "I think the explanation lies in area X because..." 3. "...everything really interesting is in area X." 4. "...area X contains more stuff than any other area, and therefore is better." 5. "...area X is the essence of language." 6. 
"...no one would ever work outside area X unless they had a major character flaw." 7. "... [New Yorker (latest issue, 1/13/97), cartoon on p.52]." But anyone with a will can separate these distractions from our real business. I agree that there is no general, falsifiable "autonomy thesis" that separates us. But, at the same time, we're not just floating in a sea of nonsense either. Our hunches and prejudices (though they are not in themselves testable hypotheses) can and do suggest competing explanations for actual facts. Discussions of these alternatives can and do change minds (mine, for instance). Myhill writes: > Shouldn't we be a little bit concerned that people with > different leanings interpret the same data in opposite ways according to > what they regard as their view of language.[...] I say "No". I think this situation is quite fine. The problems arise *after* we've offered our varying interpretations of the data. Do we defend our interpretation with specious propositions like 3-7? Or do we try to discover the truth? Since every now and then the second path is taken, I have more hope for the field than Myhill does. -David Pesetsky P.S. I'll try not to bother this list any more. ************************************************************************* Prof. David Pesetsky, Dept. of Linguistics and Philosophy 20D-219 MIT, Cambridge, MA 02139 USA (617) 253-0957 office (617) 253-5017 fax http://web.mit.edu/linguistics/www/pesetsky.html From spikeg at OWLNET.RICE.EDU Sun Jan 12 21:20:06 1997 From: spikeg at OWLNET.RICE.EDU (Spike Gildea) Date: Sun, 12 Jan 1997 15:20:06 -0600 Subject: (fwd) Re: back and forth Message-ID: >---------- Forwarded message ---------- >Date: Fri, 10 Jan 1997 23:14:34 +0200 (IST) >From: ariel mira >To: Tom Givon >Cc: Multiple recipients of list FUNKNET >Subject: Re: back and forth > >Dear Tom, > >I very much agree with your position and Wally Chafe's. 
Chomskyans are >keen on knocking down functionalism by attacking some imaginary strawman, >such as: If functionally motivated, then 100% motivated. And I think >they've done a very good job among us functionalists, so that we've come >to believe that there are some among us (never me, of course) who actually >believe that 100% of language is transparently and synchronically >motivated. Yes, there was Garcia in the 1970's. But 20 years have passed >since then, and in my attempt to find this extreme position in writing, I >have not come up with anything. All I can find is people like you and Paul >Hopper saying they are NOT adopting the extreme position. > >So, if there is ANYBODY holding that extreme position, please speak >up/give references. If there's nobody holding that position, could we stop >arguing against it? > >Shabbat Shalom, > >Mira From jaske at ABACUS.BATES.EDU Sun Jan 12 21:41:59 1997 From: jaske at ABACUS.BATES.EDU (Jon Aske) Date: Sun, 12 Jan 1997 16:41:59 -0500 Subject: What is this dispute anyway? Message-ID: David Pesetsky wrote: > > Do you really think that's what I said? David, I'm sorry if I put words into your mouth. I was going by my interpretation (corroborated by many others) of what people in your school, not necessarily you yourself, have been saying for the last few decades, at least until the last time I checked. Perhaps my interpretation was erroneous. If so, I am quite willing to stand corrected. I think that that is what this discussion (I don't dare call it "dispute") is all about. I feel, and I'm sure many others do too, that we need a lot more communication in our field. It may turn out that we agree on more things than we ever thought we did. Although I am personally a bit skeptical about this, I too am after the Truth and not after winning partisan battles. So let's talk. Jon -- Jon Aske jaske at abacus.bates.edu http://www.bates.edu/~jaske/ -- Nolako egurra, halako sua "Such as is the wood, thus will be the fire." 
From jaske at ABACUS.BATES.EDU Sun Jan 12 22:31:22 1997 From: jaske at ABACUS.BATES.EDU (Jon Aske) Date: Sun, 12 Jan 1997 17:31:22 -0500 Subject: What is this dispute anyway? Message-ID: Well, I think that we can improve on David's diagnosis. I would say that the general belief among functionalists is that formalists adhere to proposition #3, namely that "...everything really interesting is in area X", as opposed to the proposition "...everything really interesting is in areas A-Z." Most functionalists probably also believe (correct me if I'm wrong) that many formalists believe in a modified version of proposition #4 ("...area X contains *LESS* stuff than any other area, and therefore is better."). Same thing about proposition #5 (see original quote below). (I'll skip proposition #6, although things could be said about that too. Unfortunately personal attacks have come from both sides at different times). Perhaps we're just wrong. If so, there is a very big misunderstanding here and we definitely should get it corrected as soon as possible. That's what we're here for. Best, Jon David Pesetsky wrote: ... > 1. "I think the explanation lies (partly, entirely) in area X." > > 2. "As an expert in area X, I can best contribute to research by > investigating possible explanations in area X." > > to research-stifling propositions like "I think the explanation lies in > area X because..." > > 3. "...everything really interesting is in area X." > > 4. "...area X contains more stuff than any other area, and therefore > is better." > > 5. "...area X is the essence of language." > > 6. "...no one would ever work outside area X unless they > had a major character flaw." > > 7. "... [New Yorker (latest issue, 1/13/97), > cartoon on p.52]." > > But anyone with a will can separate these distractions from our real business. > > I agree that there is no general, falsifiable "autonomy thesis" that > separates us. 
But, at the same time, we're not just floating in a sea of > nonsense either. Our hunches and prejudices (though they are not in > themselves testable hypotheses) can and do suggest competing explanations > for actual facts. Discussions of these alternatives can and do change minds > (mine, for instance). Myhill writes: > > > Shouldn't we be a little bit concerned that people with > > different leanings interpret the same data in opposite ways according to > > what they regard as their view of language.[...] > > I say "No". I think this situation is quite fine. The problems arise > *after* we've offered our varying interpretations of the data. Do we defend > our interpretation with specious propositions like 3-7? Or do we try to > discover the truth? Since every now and then the second path is taken, I > have more hope for the field than Myhill does. > > -David Pesetsky > > P.S. I'll try not to bother this list any more. > > ************************************************************************* > Prof. David Pesetsky, Dept. of Linguistics and Philosophy > 20D-219 MIT, Cambridge, MA 02139 USA > (617) 253-0957 office (617) 253-5017 fax > http://web.mit.edu/linguistics/www/pesetsky.html -- Jon Aske jaske at abacus.bates.edu http://www.bates.edu/~jaske/ -- Nolako egurra, halako sua "Such as is the wood, thus will be the fire." From bates at CRL.UCSD.EDU Mon Jan 13 02:24:28 1997 From: bates at CRL.UCSD.EDU (Elizabeth Bates) Date: Sun, 12 Jan 1997 18:24:28 -0800 Subject: form versus meaning Message-ID: Okay, instead of "territorial imperialism", how about "disciplinary hegemony"? In his discussion of the "derivative" status of some fields and the "core" status of linguistics, Dan suggests that the very name of fields with titles like "The sociology of X" imply that the core field is the one called "X". There may be sociological/historical reasons why such terms evolve, but I don't think the argument works (or should work) for the many fields that study language. 
Linguists study language. So do psycholinguists, neurolinguists, sociolinguists, etc. In this case, "X" is Language. Field linguists and theoretical linguists who use naturally occurring text or grammaticality judgments are studying language, by one set of methods. Psycholinguists are using yet another set of methods to study language. And so on. No one, in my view, is any closer to "the thing itself". What we are talking about here are simply different perspectives on the same problem. Of course there ARE branches of psycholinguistics, aphasiology, etc., in which investigators start with the products of theoretical linguistics and then set out to test them against a particular kind of data. That's one approach (e.g. the search for the "psychological reality of transformations" in the 1960's). It's not the only approach. It is certainly not the DEFINITION of psycholinguistics, neurolinguistics, etc. By insisting that linguistics has priority over other fields that study language, Dan is doing all of us (including the linguists) a disservice. What he really means is that certain METHODS have priority -- because that is all that really, at base, separates linguistics from psycholinguistics, neurolinguistics, etc. -liz bates From fjn at U.WASHINGTON.EDU Mon Jan 13 03:33:10 1997 From: fjn at U.WASHINGTON.EDU (Frederick Newmeyer) Date: Sun, 12 Jan 1997 19:33:10 -0800 Subject: A couple final remarks In-Reply-To: Message-ID: Just a couple final comments, before I return to lurking on the list. First, Mark Durie writes and asks: >So the autonomy hypothesis in Fritz's sense (divorced from any >consideration of innateness) is structuralism. Or have I completely >misunderstood? I obviously have not made myself clear enough. Saussure's position ('structuralism') is that the set of form-meaning pairings (the set of signs) is autonomous. I do accept that, but also something more, namely, the autonomy of syntax.
In that hypothesis, the set of form-form interrelationships *also* forms a discrete system, independently of the meanings of those forms or the uses to which they are put. Many accept the former, but not the latter. And Matthew Dryer remarks: > Fritz' response to my comments reflects a substantive difference > that is fundamental to differences between functionalist and > "formalist" approaches. Fritz says > > >>The sensitivity of speakers to the abstract (formal, structural) > >>notion 'Phrase structure branching-direction', a notion that > >>doesn't (as Dryer shows) correlate perfectly with semantic > >>notions such as 'head-dependent' supports the idea that speakers > >>mentally represent abstract phrase-structure relations > >>INDEPENDENTLY of their semantic and functional > >>'implementations'. > > If by this Fritz means that speakers represent the fact that > different structures in their language involve the same branching > direction, then this doesn't follow at all. My hypothesis is that > languages with both left and right branching result in structures > with mixed branching that in language USE are slightly more > difficult to process, and that over the millenia, this has > influenced language change so that one finds crosslinguistic > patterns reflecting a tendency toward more consistent direction of > branching. But this does not entail that the fact that different > structures in the language employ the same direction of branching is > itself represented by speakers. Rather, it simply means that > speaking a language in which structures branch in the same direction > will result in slightly fewer instances of individuals' failing to > extract the intended meaning of an utterance. No, in fact I don't assume that speakers mentally represent the fact that different structures in their language involve the same branching direction (though I don't reject a priori the possibility that they might do so). 
It's the mere fact of representing branching phrase structure at all INDEPENDENTLY OF THE MEANINGS / FUNCTIONS encoded / carried out by that structure that supports the autonomy of syntax. That's all from me. Best wishes to all, --fritz From dick at LINGUISTICS.UCL.AC.UK Mon Jan 13 09:42:35 1997 From: dick at LINGUISTICS.UCL.AC.UK (Dick Hudson) Date: Mon, 13 Jan 1997 09:42:35 +0000 Subject: back and forth Message-ID: I think David Pesetsky is right. So-called `formalists' have been extremely enthusiastic recently about trying to predict syntactic form from semantic function. An even clearer example of this, which he doesn't mention, is in the area of `argument structure' (or, alternatively, `theta roles' or some other kind of semi-semantic structure) as a predictor of syntactic roles (subject, object, etc). A lot of `formalists' have suggested an extremely close connection between the two. I think Chomsky has rejected total predictability (and I think I'd agree with him), but I also think that, like many of us on this list, he thinks it's a good idea to look for tight correlations between semantic `function' and syntactic `form'. So what's the argument about? At 22:11 10/01/97 -0500, you wrote: >At 12:24 PM -0800 1/10/97, Tom Givon wrote: > > >> Seems to me I hear another argument between the partly deaf. Everybody >> concede that a correlation exists between grammatical structures and >> semantic and/or pragmatic functions. But two extremist groups seem to >> draw rather stark conclusions from the fact that the correlation is >> less-than-fully-perfect. The "autonomy" people seem to reason: >> "If less than 100% perfect correlation, >> therefore no correlation (i.e. 'liberated' structure)" > >Do "autonomy people" really reason like this? I don't think so. In fact, >I think it's just the opposite.
> >Isn't most of the research by "autonomy people" actually devoted to the >hunch that there is a nearly *perfect* 100% correlation between grammatical >structure and semantic/pragmatic function -- and that "less than 100%" >correlations are actually 100% correlations obscured by other factors? > >- What, after all, is the functional category boom about, if not a >(possibly overenthusiastic) attempt to investigate a 100% correlation >hypothesis for properties like tense, agreement, topic, focus, and so on? > >- What was the motivation for the hypothesis of "covert movement" (LF >movement), if not the hunch that the correlation between grammatical and >semantic/pragmatic structure is tighter than it appears? > >- Why all the effort expended on the unaccusative hypothesis, the >Universal Alignment Hypothesis, and Baker's UTAH, if not in service of the >hypothesis that non-correlations between semantic function and grammatical >form are only superficial? > >I think one might make the case that formalist "autonomy people" are among >the most faithful functionalists. > >What divides linguists in this debate is not, I suspect, their faith in >robust form-function correlations, but rather their hunches about the >repertoire of factors that *obscure* these correlations. That's where many >of us really do disagree with each other. > >-David Pesetsky > > >************************************************************************* >Prof. David Pesetsky, Dept. 
of Linguistics and Philosophy >20D-219 MIT, Cambridge, MA 02139 USA >(617) 253-0957 office (617) 253-5017 fax >http://web.mit.edu/linguistics/www/pesetsky.html > > Richard (=Dick) Hudson Department of Phonetics and Linguistics, University College London, Gower Street, London WC1E 6BT work phone: +171 419 3152; work fax: +171 383 4108 email: dick at ling.ucl.ac.uk web-sites: home page = http://www.phon.ucl.ac.uk/home/dick/home.htm unpublished papers available by ftp = ....uk/home/dick/papers.htm From john at RESEARCH.HAIFA.AC.IL Mon Jan 13 08:42:05 1997 From: john at RESEARCH.HAIFA.AC.IL (John Myhill) Date: Mon, 13 Jan 1997 10:42:05 +0200 Subject: What is this dispute anyway? Message-ID: Oh well, I guess I better say something else. To Talmy G.: I am not advocating security through ignorance. For more than 10 years I wrote functionally oriented articles with very large databases, detailed text counts, statistical significance tests for all my claims, multivariate statistical analysis, etc. I tried my best to bring normal scientific methodology to functional linguistics. I finally gave up on the statistics (to functionalist audiences) because I realized that not only did no one evidently care, but when I even brought up data of this sort, my audiences would get glassy-eyed and lose interest in the whole paper. I consider this a more serious effort to incorporate scientific methods into functional linguistics than reading popular interpretations of research in the hard sciences and imagining how they might apply to linguistics. I've been reading your articles for more than 15 years now, Talmy, I've seen a lot of numbers, but I have yet to see you do even a single simple statistical significance test, even a chi-square, let alone a regression analysis; your arguments about knowledge and ignorance of science would be more convincing if you yourself actively showed a little scientific knowledge here.
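[Ed.: the "even a chi-square" test Myhill mentions is simple enough to sketch. The sketch below runs a Pearson chi-square on an invented 2x2 table of text counts; the row/column labels and every number are hypothetical, not data from anyone's corpus.]

```python
# Hypothetical text counts (all numbers invented for illustration):
# rows = discourse status of the subject, columns = word order.
#            SV   VS
# new        30   70
# given      60   40

def chi_square_2x2(table):
    """Pearson chi-square statistic for a 2x2 contingency table."""
    (a, b), (c, d) = table
    n = a + b + c + d
    row_totals = [a + b, c + d]
    col_totals = [a + c, b + d]
    stat = 0.0
    for i, obs_row in enumerate(table):
        for j, obs in enumerate(obs_row):
            expected = row_totals[i] * col_totals[j] / n
            stat += (obs - expected) ** 2 / expected
    return stat

counts = [(30, 70), (60, 40)]
stat = chi_square_2x2(counts)
# Critical value for df=1 at p=.05 is 3.841.
print(f"chi-square = {stat:.2f}; significant: {stat > 3.841}")
```

With one degree of freedom, a statistic above 3.841 is significant at p < .05; this is the kind of minimal check Myhill is asking functionalist text counts to pass.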
Incidentally, it's 'yosif da`at yosif max'ov', not 'mosif da`at mosif max'ov'. To David P.: You write: Discussions of these alternatives can and do change minds (mine, for instance). You've changed your mind about the autonomy thesis? You used to not believe (didn't used to believe?) in autonomous syntax? After you got out of graduate school? Did you put this in print anywhere? And you got a job at MIT? Am I understanding you correctly? Please clarify. (I'm not being facetious, I really am interested in this) In reply to my question: Shouldn't we be a little bit concerned that people with different leanings interpret the same data in opposite ways according to what they regard as their view of language? You write: I say "No". I think this situation is quite fine. The problems arise *after* we've offered our varying interpretations of the data. Do we defend our interpretation with specious propositions like 3-7? Or do we try to discover the truth? I agree with you in principle, but unfortunately that is not the tone the discussion (such as it is) has taken. To take the most blatant example, Chomsky's favorite 'defense' of whatever approach he feels like pursuing at the moment has always been that it is 'interesting,' (your specious proposition #3), e.g. 'Knowledge of language' pg. 5: 'During the past 5-6 years, these efforts have converged in a somewhat unexpected way, yielding a rather different conception of the nature of language and its mental representation, one that offers interesting answers to a range of empirical questions and opens a variety of new ones to inquiry while suggesting a rethinking of the character of others. This is what accounts for an unmistakeable sense of energy and anticipation...' Similarly pg. 4: `This (research program) should not be particularly controversial, since it merely expresses an interest in certain problems...' Such examples could be multiplied many times over (anyone have Chomsky's writings in a text base? 
search for 'interest'). Having justified choosing a particular approach because it is 'interesting' and gives 'an unmistakeable sense of energy and anticipation,' while other approaches evidently do not, NC can now devote the rest of his book to working out the fine points of this approach. This is particularly significant, and worrying, because the great majority of Chomsky's followers appear to be similarly basing their choice of approach on what Chomsky finds 'interesting' as well, to judge by the general lack of serious effort to give more convincing arguments for this approach. I assume that you (David) yourself are thinking something similar about functionalists, so this appears to be a general property of the field (though I applaud Matthew Dryer's sincere efforts to try to get things straightened out). This is what I am concerned about. John Myhill From dever at VERB.LINGUIST.PITT.EDU Mon Jan 13 12:07:27 1997 From: dever at VERB.LINGUIST.PITT.EDU (Daniel L. Everett) Date: Mon, 13 Jan 1997 07:07:27 -0500 Subject: form versus meaning In-Reply-To: <199701130224.SAA18908@crl.UCSD.EDU> Message-ID: I do not believe that a field is defined by its methodology but by the questions that it asks. The core questions of linguistics have to do with phonology, syntax, and morphology (and that will get about 1% agreement on this list). Semantic, discourse, diachronic and other questions are crucial, of course, but all rely to a greater or lesser degree on the answers about morphology, syntax, and phonology, as do questions asked by psycholinguistics. Psycholinguistics is secondary for that reason. -- DLE ****************************** ****************************** Dan Everett Department of Linguistics University of Pittsburgh 2816 CL Pittsburgh, PA 15260 Phone: 412-624-8101; Fax: 412-624-6130 http://www.linguistics.pitt.edu/~dever From dever at VERB.LINGUIST.PITT.EDU Mon Jan 13 13:34:47 1997 From: dever at VERB.LINGUIST.PITT.EDU (Daniel L.
Everett) Date: Mon, 13 Jan 1997 08:34:47 -0500 Subject: What is this dispute anyway? In-Reply-To: Message-ID: John, You are quite right to be upset with linguists who want to wear the 'mantle of science' but do not have the same rigor in their research that other scientists would expect. Many of us have probably been guilty of this. And I think that you are right that Chomsky can be manipulative in his use of words like 'interest' and that his writings are often unquestioningly followed by many syntacticians to the detriment of the field. But discussions like those on this list can change minds. I have read more functional linguistics as a result of being impressed with answers and comments I have read on this list and I have felt the need to answer functional counteranalyses in my own research because I have realized that such analyses are indeed intuitively appealing and well thought out, so that if a particular formal analysis I am proposing is going to be fully convincing to me I must grapple with the functional issues. I spend some time doing this at different places in my new book, _Why there are no clitics_. I also have changed my attitudes positively towards a number of researchers whose work I might have ignored in the past, because of the reasonableness of their replies on this list. Maybe I should have been able to figure out the reasonableness and relevance of functionalist alternatives on my own, without this list, but this list has helped quite a bit. There are still a number of theoretical positions that I hold even more strongly as a result of reading this list (because I am more convinced than ever that the alternatives proposed are weak), but that too has been worthwhile. Lists like this provide a forum for discussing our basic assumptions in ways that refereed publications do not, for good reasons. So don't get too upset with us for discussing these issues of 'ideology' instead of empirical work. It can be beneficial. 
-- DLE ****************************** ****************************** Dan Everett Department of Linguistics University of Pittsburgh 2816 CL Pittsburgh, PA 15260 Phone: 412-624-8101; Fax: 412-624-6130 http://www.linguistics.pitt.edu/~dever From dquesada at CHASS.UTORONTO.CA Mon Jan 13 13:45:53 1997 From: dquesada at CHASS.UTORONTO.CA (Diego Quesada) Date: Mon, 13 Jan 1997 08:45:53 -0500 Subject: form versus meaning In-Reply-To: Message-ID: On Mon, 13 Jan 1997, Daniel L. Everett wrote: > The core questions of linguistics have to do with > phonology, syntax, and morphology ***(and that will get about 1% agreement*** on this list). Semantic, discourse, diachronic and other questions are > crucial, of course, but all rely to a greater or lesser degree on the > answers about morphology, syntax, and phonology, as do questions asked by > psycholinguistics. [stars mine, DQ] Can you lay the egg? Are you implying that the linguists in this list do not do syntax? Better, are you implying that syntax is just the sort of closed set game formalists practice? To J. Myhill: I "was linguistically raised a Chomskyan". During the passage from M.A. to Ph.D. it simply lost its attraction to work on something that by just a matter of faith had to be understood as being real. But I won't go into details about my "heresy". Suffice it to say that there are many like me. Is it also the other way? Diego From dever at VERB.LINGUIST.PITT.EDU Mon Jan 13 14:08:32 1997 From: dever at VERB.LINGUIST.PITT.EDU (Daniel L. Everett) Date: Mon, 13 Jan 1997 09:08:32 -0500 Subject: form versus meaning In-Reply-To: Message-ID: On Mon, 13 Jan 1997, Diego Quesada wrote: > > > On Mon, 13 Jan 1997, Daniel L. Everett wrote: > Can you lay the egg? I am afraid that the literary allusion here escapes me. > Are you implying that the linguists in this list do > not do syntax? Better, are you implying that syntax is just the sort of > closed set game formalists practice? Nope. I am not implying that at all.
-DLE From Carl.Mills at UC.EDU Mon Jan 13 17:51:08 1997 From: Carl.Mills at UC.EDU (Carl.Mills at UC.EDU) Date: Mon, 13 Jan 1997 12:51:08 -0500 Subject: Carson Schutze's book Message-ID: Dan Everett writes: "Carson Schutze has recently published a book on methodology (basically for formal linguists), which hopefully will contribute to change in formal theoretic work." Aside from reinventing the wheel MIT fashion and ignoring some earlier work that some of us semi-generative/not-quite-functionalist folks have done on methodology, Schutze's book is pretty good. While it covers some matters that others, e.g., Fritz Newmeyer and Tom Givon, have treated, Schutze's book is encouraging to those of us who have a sneaking hunch that looking closely at what we study and what we actually look at when we are studying it are not wasted activities. Best Carl Mills From bates at CRL.UCSD.EDU Mon Jan 13 18:41:54 1997 From: bates at CRL.UCSD.EDU (Elizabeth Bates) Date: Mon, 13 Jan 1997 10:41:54 -0800 Subject: form versus meaning Message-ID: My computer went down this morning in the middle of a message, which seems to have gone out anyway. On the questionable assumption that Half a Harangue is not better than none, I append the full text, with apologies for troubling everyone 1.5 times to make the same points. -liz In response to Dan Everett's message about the priority of linguistics, my research as a psycholinguist and developmentalist focuses primarily on morphology, syntax and the lexicon. In the psycholinguistic work, I am asking about how these form-meaning mappings (and form-form mappings) are processed in real time, and the results have what I believe to be crucial implications for our understanding of HOW THEY ARE REPRESENTED IN THE MIND. How is that different, other than by methodology, from the work that is conducted in "linguistics proper"? I simply reject Dan's premise that linguists are looking directly at language while the rest of us are squinting sideways.
And now let's talk for just a moment more about the use of grammaticality judgment as a way of staring directly at language....I'd like to make four quick points. The first two are NOT examples of psychology trying to have hegemony of linguistics. Rather, they pertain to strictures on methodology that hold (or should hold) in every social science, as well as agriculture, industry, anywhere where the investigator wants to draw inferences that generalize beyond the sample in question. The second two are more psychological in nature, but I believe that they have implications for the most central questions about structure. 1. Representativeness of the data base. If you want to know how your corn crop is going to fare, it is widely acknowledged in agriculture that it would be unwise to look at the four plants right outside your window (assuming this isn't your whole crop....). A truism of all empirical science is that the data base from which we draw our generalizations should be representative (either through random sampling, or careful construction of the data base a priori) of the population to which we hope to generalize. In research on language, this constraint holds at two levels: in the human subjects that we select (e.g. the people who are giving the grammaticality judgments) and in the linguistic materials we choose to look at (e.g. the sentences selected/constructed to elicit grammaticality judgments). These strictures are typically ignored, as best I can tell, in the day-to-day work of theoretical linguists who rely on grammaticality judgments as their primary data base. In fact, we have known since the 1970's that the grammaticality judgments made by linguists do not correlate very well with the judgments made by naive native speakers. 
These differences have been explained away by stating that naive native speakers don't really know what they are doing; only linguists know how to strip away the irrelevant semantic, pragmatic or performance facts and focus their judgments on the structures they really care about. Which, in turn, presupposes a theory of the boundary conditions on those facts -- introducing, I should think, a certain circularity into the relationship between theory and data. In any case, by using a very restricted set of judges, the assumption that one can generalize to "native speaker competence" may be at risk. Instead of a theory of grammar, we may have a theory of grammar in Building 10. At this point I should stress that ALL the sciences studying language have problems of generalizability. In psycholinguistics, we want to generalize to all normal adult native speakers of the language in question, but most of our data come from middle class college sophomores. In developmental psycholinguistics, we want to generalize to all normal children who are acquiring this language, but are usually stuck with data from those middle class children willing to sit through our experiments, which means that we may have a theory of language in the docile child....In short, I am not proposing that only linguists have this problem, but I think the problem of generalizability may be more severe if grammaticality judgments come from only a handful of experts. 2. Reliability of the data base. If you weigh your child on your bathroom scales twice in a row, and get a reading of 50 pounds on one measurement and 55 pounds on the next, you need to worry about the reliability of your instrument, i.e. the extent to which it correlates with itself in repeated measurements. Reliability is a serious problem in every science, and is often the culprit when results don't replicate from one laboratory to another (or from one experiment to another in the same laboratory).
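[Ed.: the bathroom-scale notion of reliability, an instrument's correlation with itself over repeated measurements, is simply a test-retest correlation. A minimal sketch: the acceptability ratings below (1-7 scale, same six sentences judged in two sessions) are invented for illustration.]

```python
# Test-retest reliability as a Pearson correlation over two sessions
# of invented acceptability ratings from the same informant.
from math import sqrt

session1 = [7, 6, 2, 5, 1, 4]
session2 = [6, 7, 3, 4, 2, 4]

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

r = pearson_r(session1, session2)
print(f"test-retest r = {r:.2f}")  # r near 1 = a reliable instrument
```

An r near 1 would mean the judgments behave like a well-calibrated scale; the worry Bates raises is that, for the subtle sentence types that drive theory change, the real-world r may be much lower.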
My experience in graduate courses in syntax and other limited exposure to theoretical linguistics suggests to me that there may be a reliability problem in the use of grammaticality judgments. Even with the same restricted set of judges, with similar sentence materials (see above), a sentence that is ungrammatical at 4 p.m. may become grammatical by 6 o'clock, at the end of a hard day. To be sure, there are many kinds of errors that EVERYONE agrees about, EVERY time they are presented. But these clear cases are not the ones that drive the differences between formal theories, as best I can tell. Theoretical shifts often seem to depend on the more subtle cases -- the very ones that are most subject to the reliability problem. And of course, reliability interacts extensively with the representativeness problem described above (i.e. performance on one half of the target structures in a given category may not correlate very highly with performance on the other half, even though they are all supposed to be about the same thing...). 3. Timing. In a recent paper in Language and Cognitive Processes, Blackwell and Bates looked at the time course of grammaticality judgment, i.e. the point at which a sentence BECOMES ungrammatical for naive native speakers. The punchline is that there is tremendous variability over sentences and over subjects in the point at which a sentence becomes "bad", even for sentences on which (eventually) everyone agrees that a violation exists. For some error types, it is more appropriate to talk about a "region" in which decisions are made, a region that may span a lot of accumulating structure. This is relevant not only to our understanding of grammaticality judgment as a psychological phenomenon, but also to our understanding of the representations that support such judgments: if two individuals decide that a sentence is "bad" at completely different points (early vs.
late), then it follows that they are using very different information to make their decision, a fact that is surely relevant for anyone's theory of well-formedness. 4. Context effects. Finally, there are multiple studies showing that violations interact, with each other and with the rest of the sentence and discourse context. A sentence that is "bad" in one context may be "good" in another, and a sentence that is "bad" with one set of lexical items may become "good" with a slightly different set, even though those two sets do not differ along what are supposed to be grammatically relevant conditions (i.e. we substitute a transitive verb for a transitive verb, an animate noun for an animate noun, and so forth). My point is NOT to denigrate linguistic methodology, because I have nothing to offer that is better. But I think the above problems should make us worry a lot about a "core" theory that is built exclusively out of one kind of data. To go back to my first point, in my first volley during this discussion (this should probably be my last, to round things out): we need all the constraints we can get, all the data we can get, all the methods we can find, and it is not yet the moment to declare that any of these methods or fields have priority over the others. -liz bates From dever at VERB.LINGUIST.PITT.EDU Mon Jan 13 18:47:57 1997 From: dever at VERB.LINGUIST.PITT.EDU (Daniel L. Everett) Date: Mon, 13 Jan 1997 13:47:57 -0500 Subject: form versus meaning In-Reply-To: <199701131841.KAA24899@crl.UCSD.EDU> Message-ID: Liz and all, I have already said that the methodology of theoretical linguistics 'needs fixed' (Pittsburghese). So I have no quarrel with anyone who wants to see improvement there.
But, dear Liz, if you really think that your studies are explicating the nature of morphological, phonological, or syntactic structure in anything like the detail or degree (or even quality) of morphology, phonology, or syntax proper, then the problem is deeper than I feared and is not going to be resolved on this list. Maybe next time you are in Pittsburgh we can talk about it. Or maybe you could read some morphology, syntax, or phonology with a view to asking whether you are discovering the basic structures and constructs anew in your studies or finding replacements for the basics. I am at a loss. I did not say that you were squinting at anything sideways. Nor do I mean to denigrate your field of study or results. But you simply are not studying the core nature of x when you study its implementations, acquisition, processing, etc. You are assuming it. I am sorry to have to be the one to break this news to you. -- Dan ****************************** ****************************** Dan Everett Department of Linguistics University of Pittsburgh 2816 CL Pittsburgh, PA 15260 Phone: 412-624-8101; Fax: 412-624-6130 http://www.linguistics.pitt.edu/~dever From ellen at CENTRAL.CIS.UPENN.EDU Mon Jan 13 20:39:10 1997 From: ellen at CENTRAL.CIS.UPENN.EDU (Ellen F. Prince) Date: Mon, 13 Jan 1997 15:39:10 EST Subject: form versus meaning In-Reply-To: Your message of "Mon, 13 Jan 1997 10:41:54 PST." <199701131841.KAA24899@crl.UCSD.EDU> Message-ID: I'd sworn I'd go back into lurkitude, but I can't help it -- these are very important issues to me. Have I missed something or are we talking about different things? I understood Dan to say that syntax, phonology, ... were 'core' and Liz to say there's no such distinction and that psycholinguists were as 'core' as syntacticians. And now we get a long argument from Liz entirely in terms of methodology. What is the relevance? Methodologies can vary and objects of study can vary and they can vary independently... 
I for one agree with everything Liz says about methodology and everything Dan says about 'coreness', which is why this troubles me. P.S.: >Instead of a theory of grammar, we >may have a theory of grammar in Building 10. Would that we did! I believe we'd only have a theory of Building 10's linguistic META-intuitions (conscious, accessible intuitions about their unconscious, inaccessible linguistic intuitions). If we actually had a theory of even ONE person's real grammar (i.e. real, unconscious, inaccessible linguistic intuitions), that would be just fine, as far as I'm concerned. From jaske at ABACUS.BATES.EDU Tue Jan 14 01:28:21 1997 From: jaske at ABACUS.BATES.EDU (Jon Aske) Date: Mon, 13 Jan 1997 20:28:21 -0500 Subject: methodologies {was Re: form versus meaning} Message-ID: This is related to the methodology strand of this schizophrenic conversation. I think that ALL of us need to learn to look at and analyze language in a number of different ways and using different methodologies, not just each one doing their own separate thing. For many years I was a typical syntactician. I looked at sentences in isolation and got real good at parsing and devising fancy trees. Then I started to look at a phenomenon, word order in Basque, which just didn't make sense in those terms. So I started making recordings and spending months and months transcribing just a few hours of tape, paying attention to how people actually speak, intonation, intonation units, pragmatic factors, etc, etc. I can sincerely tell you that that opened my mind. Now I look at language very differently. My conclusion: everybody should learn to look at language in as many ways as possible. Introspecting about it, devising experiments, etc. But I think that the first and primary way should be to look at language as it is actually used. It is different. Believe me. 
And something else, you would probably have to go through dozens of hours of transcripts to come up with one example of some of the phenomena that fill many theoretical journals these days. The core stuff, you know. Anyway, I tried to keep it short. I sense that some people are starting to get tired. Jon -- Jon Aske jaske at abacus.bates.edu http://www.bates.edu/~jaske/ -- Lagun onak, ondu; gaiztoak, gaiztotu "A good friend makes one a better person, a bad one a worse one." From ellen at CENTRAL.CIS.UPENN.EDU Tue Jan 14 06:06:53 1997 From: ellen at CENTRAL.CIS.UPENN.EDU (Ellen F. Prince) Date: Tue, 14 Jan 1997 01:06:53 EST Subject: methodologies {was Re: form versus meaning} In-Reply-To: Your message of "Mon, 13 Jan 1997 20:28:21 EST." <32DAE135.319E@abacus.bates.edu> Message-ID: jon, i was nodding vigorously in agreement all thru your post on methodologies -- until i did a doubletake at this line: >And something >else, you would probably have to go through dozens of hours of >transcripts to come up with one example of some of the phenomena that >fill many theoretical journals these days. The core stuff, you know. i did a doubletake because i first read it as an argument for NOT using naturally-occurring data, which didn't gibe with the preceding paragraphs. then it hit me that you might have meant something quite different... are you perhaps saying that low frequency phenomena are any less crucial to the whole story than high frequency ones?!? presumably you wouldn't think much of a theory of hematology that left out type o blood (or whichever is the least frequent)? in fact, in syntax at least, it's mainly in the rarer forms that you see what is actually going on, structurally speaking... in any event, my real response to your line above is: thank goodness for the humongous online corpora we have today! and let's hope they're all fully parsed and tagged real soon. 
we are really the first 'generation' of linguists that CAN work on low frequency phenomena using naturally-occurring data (and without having to have the perseverance and energy of a jespersen). i actually find this the most exciting thing to have happened in my professional lifetime -- it has made my own methodology of choice actually feasible, plus it has enabled historical syntacticians to do some really sound work (given the total impossibility of intuitionistic or experimental data for dead lgs). From bralich at HAWAII.EDU Tue Jan 14 07:04:53 1997 From: bralich at HAWAII.EDU (Philip A. Bralich, Ph.D.) Date: Mon, 13 Jan 1997 21:04:53 -1000 Subject: methodologies {was Re: form versus meaning} Message-ID: At 03:28 PM 1/13/97 -1000, Jon Aske wrote: >This is related to the methodology strand of this schizophrenic >conversation. >My conclusion: everybody should learn to look at language in as many >ways as possible. Introspecting about it, devising experiments, etc. >But I think that the first and primary way should be to look at language >as it is actually used. It is different. Believe me. And something >else, you would probably have to go through dozens of hours of >transcripts to come up with one example of some of the phenomena that >fill many theoretical journals these days. The core stuff, you know. Well, I hate to sound a somewhat controversial note, especially when I agree with the majority of what is being said, but I think it should be borne in mind that, at some level, syntax represents organizational principles that may not be completely visible through usage, and because of this, it might be necessary to do a lot of the work in this area based on the judgements of the experts. If we limited ourselves to the mathematics that is used in daily life, we would have very primitive mathematics, and if linguists limit themselves to what occurs in very daily life, we would (and do) have very primitive linguistics. 
Linguists might have a good description of daily life language, but they do not have a very good picture of the organizational principles that are represented in language. Now at some point we as linguists are going to have to find some measure of the value of our research to justify our existence in these days of budget cuts. If we as a discipline can present nothing to the outside world besides squabbling that is meaningless except to a few experts, our days are numbered. Especially if this squabbling looks as though we have yet to agree on who we are and what we do. So whether or not we resolve any of these disputes, it seems to me the onus should be on the discipline to bring forth some tangible result of the 30 years of research that has given us linguistics departments and jobs. I realize that we as insiders to this field can point to many contributions that have been made by linguists and others in this area; however, I doubt that there are many others outside the field (even among those who decide the future of departments) who have any idea what it is we do, even after 30 years. And if we cannot say who or what we are, how can they be expected to continue to fund us? Phil Bralich Philip A. Bralich, Ph.D. President and CEO Ergo Linguistic Technologies 2800 Woodlawn Drive, Suite 175 Honolulu, HI 96822 Tel: (808)539-3920 Fax: (808)539-3924 From cumming at HUMANITAS.UCSB.EDU Tue Jan 14 18:53:50 1997 From: cumming at HUMANITAS.UCSB.EDU (Susanna Cumming) Date: Tue, 14 Jan 1997 10:53:50 -0800 Subject: Language in daily life Message-ID: Bralich says, "if linguists limit themselves to what occurs in very daily life, we would (and do) have very primitive linguistics." We may have a very primitive linguistics -- I hope so, because that would imply we're going to know a lot more someday -- but that would certainly not be because we know about language that occurs in daily life. 
Indeed, this is precisely the kind of language we know least about, because it is only very recently that linguists have had the tools they needed to look at it seriously. As Aske has pointed out, if you take everyday, interactional, spoken language seriously on its own terms -- that is, without editing it first into something that resembles written language -- you have to start by abandoning or at least fundamentally re-examining many of the basic concepts that underlie "traditional" linguistics, for instance "sentence". This is why some of us feel strongly that no matter what tools the linguist has in their tool-bag -- and sure, I agree that the more we have the better -- one of them is in fact a sine qua non: access to natural, interactional, spoken discourse. Experience shows that such data tends to lead to radically different conclusions at the levels both of description and of explanation. As far as the impact of linguistics on the outside world is concerned, surely it is by knowing something about what people really do with language that is going to have practical applications that will impress non-linguists. A computational linguist in particular should appreciate this -- effective interfaces which use natural language need to be able to deal with actual speaker-hearers, not idealized ones, and they need to be able to take into account the dynamics of interaction, as much exciting work in computational linguistics is doing these days. Susanna From jaske at ABACUS.BATES.EDU Tue Jan 14 19:14:36 1997 From: jaske at ABACUS.BATES.EDU (Jon Aske) Date: Tue, 14 Jan 1997 14:14:36 -0500 Subject: methodologies {was Re: form versus meaning} Message-ID: I didn't mean to imply that we should restrict our analysis to that which is commonest or most trite, but rather that we should base our 'theoretical edifice' on it. 
If we don't have a solid grasp on the most basic stuff, we will never understand the nature of that which is more complex and more rare (or why it is more rare). If we base our edifice on that which is unusual and relatively complex we are likely to misunderstand that which is basic (to speakers and to language/s). Jon -- Jon Aske jaske at abacus.bates.edu http://www.bates.edu/~jaske/ -- Munduan nahi duenak luzaroan bizi, oiloekin ohera eta txoriekin jagi "If you want to live long, go to bed with the chickens and get up with the birds." From bralich at HAWAII.EDU Tue Jan 14 20:05:47 1997 From: bralich at HAWAII.EDU (Philip A. Bralich, Ph.D.) Date: Tue, 14 Jan 1997 10:05:47 -1000 Subject: Language in daily life Message-ID: At 08:53 AM 1/14/97 -1000, Susanna Cumming wrote: >We may have a very primitive linguistics ... description and of explanation. Good point. >As far as the impact of linguistics on the outside world is concerned, >surely it is by knowing something about what people really do with >language that is going to have practical applications that will impress >non-linguists. A computational linguist in particular should appreciate >this -- effective interfaces which use natural language need to be able to >deal with actual speaker-hearers, not idealized ones, and they need to be >able to take into account the dynamics of interaction, as much exciting >work in computational linguistics is doing these days. Things are beginning to show up on computers and in the marketplace, but I am skeptical about the state of the art. There are more patches and fixes than there are real tools, to my way of thinking. Phil Bralich Philip A. Bralich, Ph.D. President and CEO Ergo Linguistic Technologies 2800 Woodlawn Drive, Suite 175 Honolulu, HI 96822 Tel: (808)539-3920 Fax: (808)539-3924 From ellen at CENTRAL.CIS.UPENN.EDU Tue Jan 14 20:27:16 1997 From: ellen at CENTRAL.CIS.UPENN.EDU (Ellen F. 
Prince) Date: Tue, 14 Jan 1997 15:27:16 EST Subject: Language in daily life In-Reply-To: Your message of "Tue, 14 Jan 1997 10:53:50 PST." Message-ID: huh? speak for yourself, please. i certainly have NOT found that spoken language is very different from what one would think. the main 'surprises' i have found are: 1. certain forms that have been claimed to be ungrammatical do in fact seem to be well-formed but in very constrained contexts (e.g. for english: resumptive pronoun relative clauses where no island violations are involved, topicalized indefinites, nonrestrictive _that_...). 2. certain claims about genre/style/... distribution are unfounded (e.g. the claim that left-dislocation is characteristic of 'unplanned' speech, the 20-yr-ago claim that only yinglish speakers could topicalize certain things...). 3. most claims about topichood and focus. i certainly have found no evidence to question the notion 'sentence' -- au contraire -- just try to account for how entities that are introduced by quantified expressions are referred to subsequently without a notion of 'clause', much less sentence! and none of the computational linguists whose work i find interesting have abandoned the notion 'sentence' either... i hope people are more careful making generalizations about their data than some people seem to be when making generalizations about 'what funknetters believe' or what 'people who work on interactional, spoken language find'... Susanna Cumming wrote: >Bralich says, > >"if linguists limit themselves to what occurs in very daily life, we would >(and do) have very primitive linguistics." > >We may have a very primitive linguistics -- I hope so, because that would >imply we're going to know a lot more someday -- but that would certainly >not be because we know about language that occurs in daily life. 
Indeed, >this is precisely the kind of language we know least about, because it is >only very recently that linguists have had the tools they needed to look >at it seriously. As Aske has pointed out, if you take everyday, >interactional, spoken language seriously on its own terms -- that is, >without editing it first into something that resembles written language -- >you have to start by abandoning or at least fundamentally re-examining >many of the basic concepts that underlie "traditional" linguistics, for >instance "sentence". This is why some of us feel strongly that no matter >what tools the linguist has in their tool-bag -- and sure, I agree that >the more we have the better -- one of them is in fact a sine qua non: >access to natural, interactional, spoken discourse. Experience shows that >such data tends to lead to radically different conclusions at the levels >both of description and of explanation. > >As far as the impact of linguistics on the outside world is concerned, >surely it is by knowing something about what people really do with >language that is going to have practical applications that will impress >non-linguists. A computational linguist in particular should appreciate >this -- effective interfaces which use natural language need to be able to >deal with actual speaker-hearers, not idealized ones, and they need to be >able to take into account the dynamics of interaction, as much exciting >work in computational linguistics is doing these days. > >Susanna From pesetsk at MIT.EDU Tue Jan 14 21:52:41 1997 From: pesetsk at MIT.EDU (David Pesetsky) Date: Tue, 14 Jan 1997 16:52:41 -0500 Subject: three final replies In-Reply-To: Message-ID: A CLARIFICATION: At 10:42 AM +0200 1/13/97, John Myhill wrote: > To David P.: You write: > > Discussions of these alternatives can and do change minds > (mine, for instance). > > You've changed your mind about the autonomy thesis? You used to not believe > (didn't used to believe?) in autonomous syntax? 
After you got out of > graduate school? Did you put this in print anywhere? > And you got a job at MIT? Am I understanding you correctly? Please clarify. > (I'm not being facetious, I really am interested in > this) Sorry, nothing that exciting. What I said was the following. The disagreement that has occupied us the most here is really a difference over hunches, interests and research strategies. So there can't be much question of true and false, nor is the notion "changing one's mind" well-defined for hunches and interests. On the other hand, this "hunch-level" disagreement does produce analyses and discussions of particular phenomena which, not surprisingly, can be true and false, and can be matters on which one changes one's mind. I have been in countless discussions about whether some phenomenon is properly attributable to a discourse factor, to a property of sentence-internal syntax, or some mixture of these. On such matters, people's opinions should, can, and do change in response to reasoned discussion and argument. That's what I meant. ***************** ON THE NOTION "INTERESTING": I wrote: > The problems > arise *after* we've offered our varying interpretations of the data. > Do we defend our interpretation with specious propositions like 3-7? > Or do we try to discover the truth? To which John Myhill replied: > I agree with you in principle, but unfortunately that is not the tone the > discussion (such as it is) has taken. To take the most blatant example, > Chomsky's favorite 'defense' of whatever approach he feels like pursuing at > the moment has always been that it is 'interesting,' (your specious > proposition #3), [...] [Chomsky quotes omitted] > > Such examples could be multiplied many times over [...] 
> This is particularly significant, and > worrying, because the great majority of Chomsky's followers appear to be > similarly basing their choice of approach on what Chomsky finds > 'interesting' as well, to judge by the general lack of serious effort to > give more convincing arguments for this approach. I assume that you (David) > yourself are thinking something similar about functionalists, so this > appears to be a general property of the field. I think it's a general property of *people*. If we're given the opportunity, we do what we find most interesting. Then we act as though "interesting" is an argument for something. Sure Chomsky's guilty of this. Who isn't? It's even argued that the false argument serves the useful purpose of focusing research, though that, of course, is a two-edged sword (to mix a metaphor). The trick is to learn how to see where "interesting" is being used as an argument, discard that non-argument without rancor, and examine what's left in a serious fashion. On a related issue, do consider the possibility that at least some of the people you call "Chomsky's followers" look like followers because they share (some of) his interests -- rather than sharing his interests because they're followers. ***************** FINALLY: Jon Aske wrote (two days ago, sorry for the delay): > David, I'm sorry if I put words into your mouth. I was going by my > interpretation (corroborated by many others) of what people in your > school, not necessarily you yourself, have been saying for the last few > decades, at least until the last time I checked. Thanks for your remarks. It's an easy but unproductive shortcut to criticize X for what Y says Z (who went to graduate school with X) thinks. But it's usually also unfair. The issue at hand was a certain characterization of work on functional categories. I'd like to address that further, but I don't think I can do that here and now. 
A book currently being written by Guglielmo Cinque may soon be the best place to look for good work (in my linguistic neck of the woods) on the topic. But that's not fully written yet. He cites lots of the typology literature, by the way. > Perhaps my > interpretation was erroneous. If so, I am quite willing to stand > corrected. I think that that is what this discussion (I don't dare call > it "dispute") is all about. I feel, and I'm sure many others do too, > that we need a lot more communication in our field. It may turn out > that we agree on more things than we ever thought we did. I suspect the opposite. I suspect that we *disagree* on more things than we ever thought we did. But what's wrong with that, so long as discussions address the real disagreements -- not specious ones rooted in primeval animosities or based on logic like proposition 7 of my previous message? (Anyone look it up?) Thanks for the discussion, David Pesetsky ************************************************************************* Prof. David Pesetsky, Dept. of Linguistics and Philosophy 20D-219 MIT, Cambridge, MA 02139 USA (617) 253-0957 office (617) 253-5017 fax http://web.mit.edu/linguistics/www/pesetsky.html From ward at PG-13.LING.NWU.EDU Wed Jan 15 18:44:36 1997 From: ward at PG-13.LING.NWU.EDU (Gregory Ward) Date: Wed, 15 Jan 1997 12:44:36 CST Subject: three final replies In-Reply-To: ; from "David Pesetsky" at Jan 14, 97 4:52 pm Message-ID: david pesetsky writes: > I have been in countless discussions about whether some phenomenon is > properly attributable to a discourse factor, to a property of > sentence-internal syntax, or some mixture of these. On such matters, > people's opinions should, can, and do change in response to reasoned > discussion and argument. That's what I meant. but, alas, they often don't. 
case in point: in a recent language paper (vol 71:722-42), betty birner and i present evidence for a phenomenon being motivated by discourse rather than by 'sentence-internal syntax'. the phenomenon in question is the so-called definiteness effect in postverbal position in english existential there-sentences. however, one still sees many references to such an effect (without discussion or justification) in the formal syntax literature. now it is of course possible that one could come up with a strictly syntactic account of the (indisputable) occurrence of definite postverbal NPs in this construction (although i doubt it :-) ), but until that time, unqualified references to a syntactically-motivated 'definiteness effect' should simply disappear. but that hasn't happened. in fact, the discourse accounts (and there are several) often aren't even cited (not even in a dismissive "but cf." kind of way). this is the state of affairs that is so frustrating to those of us, inter alia, who believe in the existence of both discourse and syntax, and who try to listen to what practitioners of both have to say. gregory -- Gregory Ward Department of Linguistics Northwestern University 2016 Sheridan Road Evanston IL 60208-4090 e-mail: gw at nwu.edu tel: 847-491-8055 fax: 847-491-3770 www: http://www.ling.nwu.edu/~ward > > ***************** > > ON THE NOTION "INTERESTING": > > I wrote: > > > The problems > > arise *after* we've offered our varying interpretations of the data. > > Do we defend our interpretation with specious propositions like 3-7? > > Or do we try to discover the truth? > > To which John Myhill replied: > > > I agree with you in principle, but unfortunately that is not the tone the > > discussion (such as it is) has taken. To take the most blatant example, > > Chomsky's favorite 'defense' of whatever approach he feels like pursuing at > > the moment has always been that it is 'interesting,' (your specious > > proposition #3), [...] 
> > [Chomsky quotes omitted] > > > > Such examples could be multiplied many times over [...] > > This is particularly significant, and > > worrying, because the great majority of Chomsky's followers appear to be > > similarly basing their choice of approach on what Chomsky finds > > 'interesting' as well, to judge by the general lack of serious effort to > > give more convincing arguments for this approach. I assume that you (David) > > yourself are thinking something similar about functionalists, so this > > appears to be a general property of the field. > > I think it's a general property of *people*. > > If we're given the opportunity, we do what we find most interesting. Then > we act as though "interesting" is an argument for something. > > Sure Chomsky's guilty of this. Who isn't? It's even argued that the false > argument serves the useful purpose of focusing research, though that, of > course, is a two-edged sword (to mix a metaphor). > > The trick is to learn how to see where "interesting" is being used as an > argument, discard that non-argument without rancor, and examine what's left > in a serious fashion. > > On a related issue, do consider the possibility that at least some of the > people you call "Chomsky's followers" look like followers because they > share (some of) his interests -- rather than sharing his interests because > they're followers. > > ***************** > FINALLY: > > Jon Aske wrote (two days ago, sorry for the delay): > > > David, I'm sorry if I put words into your mouth. I was going by my > > interpretation (corroborated by many others) of what people in your > > school, not necessarily you yourself, have been saying for the last few > > decades, at least until the last time I checked. > > Thanks for your remarks. It's an easy but unproductive shortcut to > criticize X for what Y says Z (who went to graduate school with X) thinks. > But it's usually also unfair. 
> > The issue at hand was a certain characterization of work on functional > categories. I'd like to address that further, but I don't think I can do > that here and now. A book currently being written by Guglielmo Cinque may > soon be the best place to look for good work (in my linguistic neck of the > woods) on the topic. But that's not fully written yet. He cites lots of > the typology literature, by the way. > > > Perhaps my > > interpretation was erroneous. If so, I am quite willing to stand > > corrected. I think that that is what this discussion (I don't dare call > > it "dispute") is all about. I feel, and I'm sure many others do too, > > that we need a lot more communication in our field. It may turn out > > that we agree on more things than we ever thought we did. > > I suspect the opposite. I suspect that we *disagree* on more things than > we ever thought we did. But what's wrong with that, so long as discussions > address the real disagreements -- not specious ones rooted in primeval > animosities or based on logic like proposition 7 of my previous message? > (Anyone look it up?) > > > Thanks for the discussion, > David Pesetsky > > > > > > > ************************************************************************* > Prof. David Pesetsky, Dept. of Linguistics and Philosophy > 20D-219 MIT, Cambridge, MA 02139 USA > (617) 253-0957 office (617) 253-5017 fax > http://web.mit.edu/linguistics/www/pesetsky.html > From maj at COCO.IHI.KU.DK Fri Jan 17 15:34:20 1997 From: maj at COCO.IHI.KU.DK (Maj-Britt Mosegaard Hansen) Date: Fri, 17 Jan 1997 16:34:20 +0100 Subject: Book review Message-ID: As review editor for the _Revue romane_, I'm looking for someone who'd be willing and able to write a 1-3 page review *in French* of the following volume: Kronning, Hans. 1996. Modalite, cognition et polysemie : semantique du verbe modal 'devoir'. Uppsala: Acta Universitatis Upsaliensis. 
If anyone out there would like to undertake this task, please reply to maj at coco.ihi.ku.dk Thanx in advance! Maj-Britt Mosegaard Hansen Dept. of Romance Languages U. of Copenhagen From edith at CSD.UWM.EDU Fri Jan 17 23:50:16 1997 From: edith at CSD.UWM.EDU (Edith A Moravcsik) Date: Fri, 17 Jan 1997 17:50:16 -0600 Subject: form without meaning Message-ID: ===> To LIZ BATES: Liz, you suggested (Saturday, January 11) that the claim that syntactic form could be described independently of meaning assumed that syntactic classes were of the classical type, with strict category membership. You further pointed out that syntactic categories were not in fact of this sort and that their fuzzy nature was difficult to explain without reference to their meanings. I agree that, for EXPLAINING the existence of natural syntactic categories, we may have to resort to studying their meanings. This does not mean, however, that natural classes cannot be discovered and described on strictly formal grounds; in this respect, I agree with Dan Everett's (same-day) response to you. I found your later message on the four problems in obtaining grammaticality judgments very interesting and instructive! ===> to JON ASKE: Jon, in your response to my response to your original posting (Sunday, January 12), you wrote this: "I just don't see why we would want to restrict our linguistic analysis to the more formal aspects of constructions and ignore their function/meaning pole, their history, and so on. .... To me, studying the formal aspects of such constructions without looking at what they are made for, how they are made, etc. *as a matter of principle*, just does not make sense. I came to the early realization that these constructions should not be studied as merely formal operations. These constructions exist for a purpose, and their form reflects the function that they arose for in the first place, even if they have picked up additional baggage along the way. 
And to me that is the most interesting part of analysing language/grammar." I agree with almost all of this. In particular, I agree that a/ linguistic analysis should not be restricted to form, with function ignored (in my contribution, I did not mean to suggest the opposite) b/ constructions exist for a purpose and figuring out the extent and the ways they reflect function is the most interesting part of analysing grammar. Where I may not agree is that studying the form of constructions without looking at their function makes no sense. It really depends on what you mean by "studying". If you mean "giving a complete account", then I fully agree: describing the functions of linguistic form and how they correlate with form is part of a complete account. But if by "studying" you mean "restricting momentary attention" (where "momentary" may be taken on a grand scale, possibly extending to the lifetime of a linguist), then I cannot agree. I see the study of linguistic form as a logically necessary step in arriving at a complete account of linguistic constructions since, as Talmy Givo'n pointed out (Saturday, January 11), if a complete account involves specifying a (cor)relation between form and function, this presupposes that we have an independent characterization of both form and function. This point does not have to do with research schedule: I am not proposing that all of form needs to have been discovered before we can begin to look at function. The two lines of research usually go in tandem, I believe. Rather, the point has to do with the logical priority of a description of form and a description of meaning over an account of the relationship between the two. 
At the very real risk of battling a straw man or beating a dead horse (and without attributing this extreme view to Jon Aske), let me note what the idea - taken literally - that the form of functional objects cannot be described unless one knows the associated functions would amount to: the usual descriptive tools we use for characterizing the form of a non-functional object would simply fail us in the case of functional objects; we would have to hold off on their formal description until we found out about their functions. This would mean, for example, the following:
- One could describe the formal structure of a string of beads a child would create with no purpose in mind but not the shape of a rosary - unless one knew that the beads stood for various prayers.
- One could describe the chemical composition of naturally occurring materials but not that of synthetic drugs, unless we knew what each component was supposed to contribute to the intended healing effect.
- One could describe the form of a musical composition containing no designated motifs with explicit meanings but not the form of a Wagner opera - unless one knew what each motif stood for.
- One could describe the random hand-flailings of an infant, but not the hand gestures of body language or sign language, unless one knew the meanings of the gestures.
- One could describe the form of a piece of rock naturally shaped as a hammer but not the shape of a real hammer, unless one recognized it as an instrument for pounding in nails.
- One could describe the form of the Easter Island statues just in case they were meant to be non-functional; if they were meant to be functional, no description would be possible unless one learnt what the functions were. 
- One could describe the form of non-symbolic carvings on a rock surface but not if those carvings happened to be samples of writing; in which case we would have to discover what each symbol stood for before being able to characterize the forms of the symbols. Such descriptive impasses caused by lack of knowledge about the function of the object to be described clearly do not arise. Where knowledge about function comes in for the analysis of form is on the explanatory, rather than descriptive, level: in helping to explain why the form of a functional object is the way it is. Best - Edith ************************************************************************ Edith A. Moravcsik Department of Linguistics University of Wisconsin-Milwaukee Milwaukee, WI 53201-0413 USA E-mail: edith at csd.uwm.edu Telephone: (414) 229-6794 /office/ (414) 332-0141 /home/ Fax: (414) 229-6258 From jaske at ABACUS.BATES.EDU Sat Jan 18 04:00:04 1997 From: jaske at ABACUS.BATES.EDU (Jon Aske) Date: Fri, 17 Jan 1997 23:00:04 -0500 Subject: form without meaning Message-ID: Edith (and all), Thank you for your very interesting and thought-provoking posting. It made me think and here are the results, in condensed form, of my thoughts about all this. A lot of the things I am arguing against do not necessarily follow from what you said, that is, I am not saying that that is your position, though it may be an extreme version of your basic position. I am just trying to understand why anybody would want to exclude function and explanation and non-formal factors from description at all. None of this is meant to be taken personally by anyone. (Paraphrasing what David said the other day, I do not believe that people who disagree with me have "character flaws".) I agree with you that the description of the data is our primary goal. 
Constructions, no matter how well motivated, may have many idiosyncratic characteristics which have to be described and, as long as we don't understand the functions involved, all we can do is describe the facts in as much detail as we can and draw as many generalizations as we can, facts and generalizations about form, *and* about meaning, about usage, about anything and everything which correlates with form and any variations in form. And, in many cases at least, these descriptions are but steps that we follow in order to understand the constructions and what they do and why they are the way they are. That is, while we describe, we can, and should, start making guesses as to what motivates the constructions, diachronically *and* synchronically. In my work I deal with speech act constructions and here I find that discourse pragmatics and information structure are central to understanding these constructions: their form and their uses. Notions such as topic, focus, topicality and "focality", contrast, emphasis, scope, etc., etc. And the motivations I see here are not just diachronically sedimented on the constructions in question, and irrelevant to the synchronic description of the constructions. I believe that the motivations for these constructions are to a large extent synchronically transparent and real for the speakers as well, that is, that they are part of the speakers' representations of those constructions. At least I want to find out to what extent they are synchronically motivated for speakers. I think this is an important and major thing that we should attempt to do. Surely, these categories and principles, which are iconically reflected more or less transparently in different aspects of the constructions, are also mixed with other more or less arbitrary aspects, the result of different constraints and extensions added to these constructions throughout their history.
All that has to be described too, and not swept under the rug or dismissed as uninteresting just because we don't understand it. So, for instance, I do not believe that we can describe (in any meaningful way) the English passive construction, or the dative shift construction, left dislocation, right dislocation, do-support, inversions of different types (from canonical order), question constructions, and a great number of other constructions, all major constructions, all constructions in which discourse-pragmatic properties and roles are involved, without the function (discourse-pragmatics) of these constructions and the elements of these constructions playing a central part in those descriptions. The passive construction, for instance, does not exist somewhere in some real or ideal grammatical realm and then get put to some arbitrary use because it happens to be there, which is how I feel that some linguists approach this construction. The passive construction exists to perform a function, or a set of functions in different contexts, and it has the form it does to a great extent because of the functions that it is designed to express. I believe that that is central to the construction, and not an ancillary issue which can be left for other investigators to worry about. When we describe a construction we have to describe the details of the form *and* the semantic and pragmatic characteristics of the construction, the patterns of use, and so on and so forth, and along with all these the functions of the construction and the possible reasons for its form, whether they are only diachronic or partially or fully synchronic as well. I just don't see how we can possibly ignore all these things while we are describing some aspect of a language. And we had better look at other languages along the way to see how the functions performed by the passive constructions in English are performed in them.
And if they don't have an equivalent construction to the passive construction then we should try to figure out why. And if there is a passive-like construction and it's used differently, we should figure out how they are used differently and attempt to understand why, what parts of those languages' systems pick up the slack, and so on and so forth. Here I feel compelled to bring up an analogy from another science and I hope I won't be unduly chastised for my boldness and my ignorance. I just can't imagine that a biologist, for example, would attempt to describe a particular organ in some organism without at the same time attempting to understand its function (what it's for), how it may have gotten to be the way it is, what it does, how it does it, how it interacts with other organs of the body, and how it compares with the way other organisms perform those functions. Surely a lot of descriptive work will have to be done before the organ is fully understood, or understood as well as it can be understood, but I doubt that functional considerations will be ignored during the descriptive stage, or, even worse, completely dismissed as unworthy of study and uninteresting. Surely we'll come across things such as the appendix which doesn't seem to have a function (though it may have at one time), surely we'll come across things that we don't understand, and things that we will never understand, but how can we dismiss the search for understanding from the start? I think I'll stop here. I do tend to get carried away. I am just trying to understand. If I got it all wrong, or some of it wrong, I want to know. Best, Jon -- Jon Aske jaske at abacus.bates.edu http://www.bates.edu/~jaske/ -- Eguzkia nora, zapiak hara "Where the sun is, that's where you should hang your clothes."
From edith at CSD.UWM.EDU Sat Jan 18 20:22:24 1997 From: edith at CSD.UWM.EDU (Edith A Moravcsik) Date: Sat, 18 Jan 1997 14:22:24 -0600 Subject: LT Message-ID: "LINGUISTIC TYPOLOGY" - a new journal The first issue of _Linguistic Typology_ is about to appear at Mouton de Gruyter. It contains articles by Scott DeLancey, Simon Kirby, Frans Plank & Wolfgang Schellinger as well as reviews by Bernard Comrie, Edith Moravcsik, and Michael Noonan. _LT_ - the publication of the Association for Linguistic Typology (ALT) - will be published in three issues per year with a total of approximately 400 pages. Submissions for subsequent volumes are encouraged. Studies of particular parameters or clusters of parameters of typological variation, papers on the theory and methodology of typology, as well as brief reports on typological implications, language or language family profiles, topical bibliographies, and items on the history of typology are all welcome. For subscription, submission, and further information, please contact the Editor-in-Chief, Frans Plank. (MAIL: Sprachwissenschaft, Universitaet Konstanz, Postfach 5560 D175, D-78434 Konstanz, Germany; FAX: 49-7531-882741; E-MAIL: frans.plank at uni-konstanz.de)

From edith at CSD.UWM.EDU Sat Jan 18 20:09:49 1997 From: edith at CSD.UWM.EDU (Edith A Moravcsik) Date: Sat, 18 Jan 1997 14:09:49 -0600 Subject: call for papers, ALT Message-ID: ALT II - CALL FOR PAPERS Abstracts are being invited for the second meeting of the Association for Linguistic Typology (ALT II), to be held at the University of Oregon, Eugene, from September 11 to September 14, 1997 (Thursday through Sunday). Given that this will be the first meeting of ALT in the US, the membership requirement for presenters has been waived. Please feel free to send in an abstract regardless of whether you are a member of ALT or not. Please direct SIX copies of a one-page abstract to the chair of the program committee, Prof.
Masayoshi Shibatani (address below), to reach him no later than MARCH 1, 1997. A second page (six copies) may be attached to the abstract listing data. E-mail submissions are also accepted. The program committee will, by May 1, 1997, convey its decision to those submitting abstracts. Each abstract should include the author's (or authors') name and mailing address (just one mailing address for multiple authors) including telephone, fax, and e-mail address as available. Each abstract should specify the amount of time requested for the presentation, including discussion, which may be 30, 45, or 60 minutes. You may also submit abstracts for symposia, in which case give the names of participants and the amount of time requested (which may, of course, exceed 60 minutes). Address for mailing abstracts: Masayoshi Shibatani Faculty of Letters, Kobe University 1-1, Rokkodai-cho, Nada-ku Kobe 657, Japan E-mail: matt at icluna.kobe-u.ac.jp From jaske at ABACUS.BATES.EDU Sat Jan 18 20:16:10 1997 From: jaske at ABACUS.BATES.EDU (Jon Aske) Date: Sat, 18 Jan 1997 15:16:10 -0500 Subject: form without meaning Message-ID: I wanted to add something to what I said yesterday. I have received one private response to that posting, in which Edith's rosary analogy was claimed to be instrumental in explaining the need to separate form from function, and I would like to explain why I do not think that that analogy is valid. If someone unfamiliar with rosaries came across one, s/he could describe its form, but would be missing a big part of the picture if the description omitted the function of the rosary. It would be a very limited description, one which we would accept if we had no other choice, but not one that I would be satisfied with. Furthermore, and this is crucial, the rosary, as an artifact, has a formal (physical) existence in the world apart from its function, but I don't think that linguistic constructions do. 
I believe that constructions exist in the *minds* of speakers and that they are learned and represented as forms with functions attached to them. Thus, to the extent that we are trying to describe linguistic 'competence' (in addition to 'performance' and the relation between the two) I think that we have to describe constructions in their totality. I do not believe, for instance, that in Basque, and in other languages in which the order of constituents in asserted clauses depends primarily on pragmatic characteristics (roles and statuses) of the ideas represented by those constituents, one can describe such ordering without resorting to those pragmatic categories and statuses. I believe that this has been recognized even by formalists, who have resorted to (formal?, functional?) categories such as [+FOCUS] or [+TENSE] or [+INFL] to account for the facts (that is what David was talking about the other day). That is a significant step forward, I believe, but the way it is implemented seems to me to be more of an attempt to salvage a faulty model of linguistic units/systems than anything else. One more thing. I believe that all so-called structural units in language are cognitive units. What holds things together in any construction are semantic and pragmatic (informational) 'forces' 'binding' those elements together. What some people call a VP (a syntactic unit), for instance, exists only to the extent that there are semantic and pragmatic reasons for holding some elements of the clause together, while excluding others (the clause itself is a cognitive unit par excellence).
But, unless we view those 'forces' as being functional (semantic and pragmatic) in nature and having a variable nature (non-referential objects, for instance, are bound to verbs more strongly than referential and topical ones), and as being overridden under certain circumstances (some clauses do not have topics), we will end up believing silly things, such as that some languages have VPs while others don't. And it just isn't that simple. Anyway, I'm going back to lurking too, just like everyone else. Unless someone gets me going again, of course. Have a great weekend, Jon -- Jon Aske jaske at abacus.bates.edu http://www.bates.edu/~jaske/ -- Hiru belarritan igaren hitz isila, orotan lasterka dabila "A secret that has been through three ears won't remain a secret much longer."

From TGIVON at OREGON.UOREGON.EDU Sat Jan 18 19:57:46 1997 From: TGIVON at OREGON.UOREGON.EDU (Tom Givon) Date: Sat, 18 Jan 1997 11:57:46 -0800 Subject: etc. Message-ID: 1-17-97 Dear FUNKfriends, Now that the traffic has subsided somewhat, I want to take the opportunity to tell you how valuable I thought the last burst of discussion--thanks to Phil Bralich who, rather unintentionally, I suspect, wound up starting it--really was. I saw it as a beautiful example of communal thinking. I have always thought we started FUNKNET just for that (rather than for book and conference announcements, however useful those may be). So to me this discussion demonstrated that FUNKNET can serve its intended purpose -- even if it does it only once a year. The following comments are thus not intended as grabbing the last word, but rather as part of this progressive refinement of our communal thinking. I thought Matt Dryer and Liz Bates defined the two poles of our discussion most succinctly. What I would like to suggest here is that the two poles of our practice of linguistics -- theory and methodology -- are indeed intimately connected.
Matt suggested two "theses" of our approach to structure:

(a) STRONG: "grammatical structure strongly correlates to semantic and pragmatic functions"

(b) WEAKER: "grammatical structure exists"

It might perhaps be useful to point out that **logically** a belief in (a) entails a belief in (b). That is, if (a) is asserted, (b) must be presupposed. But, in the same breath, (c) must also be presupposed:

(c) "semantic and pragmatic functions exist"

Edith Moravcsik's latest comments indeed pursued this logic: If you believe in (a), then you must define **both** structure and function independently of each other. That is, in my terms -- by different methods. Otherwise, all you are left with is **a tautology**. On the methodological end, I think Liz Bates (and Lise Menn) expressed our need for multiple methodology rather elegantly. But notice that, among other reasons, the logic of (b) and (c) above being presupposed by our strong belief in (a) already points to the need for multiple methodologies. We obviously need methods that probe into structure QUA structure. And the traditional -- Bloomfieldian, Chomskian -- methods of analyzing clause structure and morphology come in handy precisely for this reason. Indeed, I cannot imagine studying and describing the grammar of a new language I work on **without** such methods. Have you tried recently to go **directly** to studying discourse-pragmatic functions? And are your results yielding form-function correlations? For people like Fritz Newmeyer and Dave Pesetsky, whose contribution to our discussion was truly valuable, the terrain might look like this (and do forgive me for the hypothetical nature of (1)-(4) below): (1) We certainly see some correlations of the (a) type; (2) But, they are either sporadic or never 100%.
(3) Therefore, to be really rigorous and not go out on a (frail) limb, we cannot abide by the strong assertion (a); we will therefore confine our investigation of syntax to what is obvious -- obvious from using **only** the traditional clause-level methodology. (4) So, we will only describe structures, and develop an independent theory of syntactic structures. Now, notice that the vast majority of communicative functions do not reveal themselves, in any obvious/intuitive way, if you confine yourself to the traditional methodology, i.e. to the study of isolated clauses outside their discourse context. So much of the doubt expressed by Fritz and David about the partiality and non-systematicity of form-function pairing (i.e. Matt's principle (a)) must indeed be traced to their reluctance to go beyond the traditional clause-level methodology. This is in no way a **logical** necessity, but rather a pragmatic methodological consequence. Just as you cannot get at structure without the appropriate methods, so you cannot get at communicative-pragmatic functions without the appropriate methodology; that is, without studying what grammar does in actual communication. What has always baffled me, I suppose--ever since reading and heartily approving of Chomsky's (and Postal/Katz's and Fillmore's) drift, between 1962 and 1965, to **semantically-relevant** (and thus more abstract) deep "syntactic" structure--is the seeming reluctance of generative linguists to take the rather obvious next plunge. Propositional semantics was licensed by Aspects (1965) as being strongly correlated to syntax, i.e. to "deep structure". So why not take the obvious next plunge and admit that the "stylistic transformations", those Joe Emonds characterized in his dissertation as "root transformations", are just as relevant to syntax (and syntax relevant to them) as the "triggered" transformations (Joe's "structure-preserving" transformations)?
In other words, if you've already opened the doors of syntax to semantics, why don't you open them further to pragmatics? Here, I think, is where, inadvertently, implicitly, methodology rears its sweet head. If you don't practice the methodology of looking for what syntactic structures do in communicative context, then pragmatic function remains rather invisible to you. You sense its existence, but it remains mysterious, unwieldy and highly suspect. You approach it with the same inborn skepticism that Bloomfield and Carnap and the Positivists did, as "stylistic" intuition that cannot be captured **rigorously** by science. On reflection then, what we've got here is a fairly transparent case, leastwise to me, of the methodological tail continuing to wag the theoretical dog. With apologies for the long-windedness, TG

From edith at CSD.UWM.EDU Sat Jan 18 22:24:08 1997 From: edith at CSD.UWM.EDU (Edith A Moravcsik) Date: Sat, 18 Jan 1997 16:24:08 -0600 Subject: call for papers, ALT Message-ID: Forwarded message: ALT II - CALL FOR PAPERS

From M.Durie at LINGUISTICS.UNIMELB.EDU.AU Sun Jan 19 05:33:25 1997 From: M.Durie at LINGUISTICS.UNIMELB.EDU.AU (Mark Durie) Date: Sun, 19 Jan 1997 16:33:25 +1100 Subject: form without meaning In-Reply-To: <32E04AC4.831@abacus.bates.edu> Message-ID: Jon Aske wrote: >Here I feel compelled to bring up an analogy from another science and I >hope I won't be unduly chastised for my boldness and my ignorance. I >just can't imagine that a biologist, for example, would attempt to >describe a particular organ in some organism without at the same time >attempting to understand its function (what it's for), how it may have >gotten to be the way it is, what it does, how it does it, how it >interacts with other organs of the body, and how it compares with the >way other organisms perform those functions.
Yes, they have done this, but still acknowledging the difference between the two kinds of task. Anatomy is the study of structure. Physiology is the study of function. The history of medicine shows that quite different methods and methodological difficulties were involved in exploring the two areas. (The anatomists had the problem of getting enough bodies to dissect, and the physiologists had to get used to the idea of experimentation.) But it also shows that advances in understandings of structure and function influence and help advance each other in very complex ways that are hard to plan for or categorize. Mark Durie ------------------------------------ From: Mark Durie Department of Linguistics and Applied Linguistics University of Melbourne Parkville 3052 Hm (03) 9380-5247 Wk (03) 9344-5191 Fax (03) 9349-4326 M.Durie at linguistics.unimelb.edu.au http://www.arts.unimelb.edu.au/Dept/LALX/staff/durie.html

From chafe at HUMANITAS.UCSB.EDU Mon Jan 20 04:47:23 1997 From: chafe at HUMANITAS.UCSB.EDU (Wallace Chafe) Date: Sun, 19 Jan 1997 20:47:23 -0800 Subject: History Message-ID: Maybe we're about done with this for a while at least, or maybe not, but I can't help thinking that a little historical perspective wouldn't hurt. Just as the synchronic state of a language can't be fully understood without reference to its history, the state of linguistics can profit from a little historical understanding too. Don't worry; I won't go on for very long. By historical accident I happened to be educated in linguistics while it was still dominated almost completely by "post-Bloomfieldians". I was taught some things I've always found useful, but two things I fairly quickly decided were wrong. One was anti-mentalism--the rejection of the mind that came from behaviorism, as colored by logical positivism and an excessively narrow view of what it meant to be "scientific".
The other was the view that, even though everyone might admit that language is somehow related to all of human experience (cognitive, emotional, social, historical), there was an isolable part of it that could be studied all by itself, so linguists could happily be exempted from worrying about all the rest. That view had been heavily promoted by Bloomfield, with bows toward Saussure. Shortly after that there was a change in attitude concerning the mind, but that was about all, and the results were curious. There was no change in the view that some part of language could be isolated for scientific study, apart from all the rest. Linguistics came to be dominated by a search for the nature of that isolable thing within the mind, which had to be innate because its connections with everything else were held to be negligible. One unfortunate result was an all-consuming interest in universals with a corresponding disregard for ways in which languages differ, whereas the Bloomfieldians, much to their credit, had always been interested in those differences. Most linguists nowadays can hardly imagine the sneering that was directed in the early 1960s toward those who practiced "mere description". Much was made of "explanatory adequacy", but instead of working toward an understanding of language as shaped by socio-cognitive and historical forces, explanations took the form of tree diagrams--tree diagrams that mysteriously changed their shape according to rules that also had no basis either in how people talk or in language history. The latest exchange makes me wonder if we haven't come full circle.
Some would apparently now like to believe that it's OK to restrict one's interest to the isolable part, thereby accomplishing one's own kind of descriptive adequacy, while leaving more encompassing understandings to those who might be interested in that sort of thing, and who, quite ironically, might thereby succeed in achieving an explanatory adequacy very different from the kind proposed in the 1960s. But is it even possible to restrict oneself in that way? A lot of the discussion has revolved around that question. My own view is that the stuff of which syntax is made--its elements and the constructions into which they enter--is either functional stuff (and a great deal of it most certainly is), or consists of the fossilized (or partially fossilized) remains of things that were functional at an earlier time. I think that's what Jon Aske has been saying, and certainly his experiences accord with my own. If we are right, then a great deal of effort is being expended on the wrong thing, something not rare in human affairs, but something that's regrettable at this special moment in human history when most of the world's languages are about to disappear. Wally Chafe

From dever at VERB.LINGUIST.PITT.EDU Mon Jan 20 11:39:10 1997 From: dever at VERB.LINGUIST.PITT.EDU (Daniel L. Everett) Date: Mon, 20 Jan 1997 06:39:10 -0500 Subject: etc. In-Reply-To: <01IECVSCJTHU8ZKMY0@OREGON.UOREGON.EDU> Message-ID: Tom, Nice posting. But there are strong empirical and conceptual reasons for separating discourse and sentence-level syntax that have nothing to do with vestigial methodology from the 60s. If people want to pursue this topic, I would be happy to state my reasons. On the other hand, I need to state immediately that no one who ignores discourse and the core role that it plays in language and culture will go very far in understanding Language as a whole.
But (sounding like Fiddler on the Roof dialogue) on the other hand there are many reasons for thinking that one can understand a great deal of grammar without understanding Language. -- Dan ****************************** ****************************** Dan Everett Department of Linguistics University of Pittsburgh 2816 CL Pittsburgh, PA 15260 Phone: 412-624-8101; Fax: 412-624-6130 http://www.linguistics.pitt.edu/~dever From dever at VERB.LINGUIST.PITT.EDU Mon Jan 20 11:50:03 1997 From: dever at VERB.LINGUIST.PITT.EDU (Daniel L. Everett) Date: Mon, 20 Jan 1997 06:50:03 -0500 Subject: History In-Reply-To: Message-ID: Wally, I do not think that we have come full circle. I think we have gone in a straight line. Chomskian research has continued the Bloomfieldian practice (in fact we can drop Bloomfieldian and just say scientific) of isolating certain components of its empirical domain (language) for study, i.e. grammar. All functionalists do this too - nobody studies everything or ever plans to have a theory of everything. As Mark Durie points out, a priori any field runs the risk of committing egregious errors by slicing the pie the wrong way. The idea is that the robustness of the research results has got to be the guiding light, as it were. Those of us who divide the sentence from the discourse as a research strategy very often do this awake and consciously, believing that we know why we are doing it, and not simply because we are unaware of the history or alternative possibilities. The issue here is neither methodological nor historical, but ontological and empirical. The *only* way to really evaluate alternative research programs is in terms of the quality of their empirical production. Arguments must revolve around the empirical, case-by-case. That said, 'quality' is clearly subjective and at some point (which we are a long ways from), it will be like trying to convince each other that blue is prettier than red. 
-- Dan

From vakarel at OREGON.UOREGON.EDU Tue Jan 21 07:58:34 1997 From: vakarel at OREGON.UOREGON.EDU (c. vakareliyska) Date: Mon, 20 Jan 1997 23:58:34 -0800 Subject: Second call for papers: national Slavic linguistics conference Message-ID: FIRST NORTHWEST CONFERENCE ON SLAVIC LINGUISTICS - May 17, 1997 Keynote speaker: Horace G. Lunt, Samuel Hazzard Cross Professor (emer.), Department of Slavic Languages and Literatures, Harvard University The first Northwest Conference on Slavic Linguistics, co-sponsored by the University of Oregon, the University of Washington, and the Oregon Humanities Center, will be held on Saturday, May 17, 1997, at the University of Oregon, in Eugene. The purpose of the conference is to provide a national forum devoted specifically to Slavic linguistics which includes all areas of theoretical linguistics and philology. A one-page paper abstract on any topic in theoretical Slavic linguistics or Slavic philology should be submitted by e-mail by **FEBRUARY 1, 1997** to James Augerot (bigjim at u.washington.edu), Katarzyna Dziwirek (dziwirek at u.washington.edu), or Cynthia Vakareliyska (vakarel at oregon.uoregon.edu). If necessary, abstracts may be faxed or mailed to C. Vakareliyska, Department of Russian, University of Oregon, Eugene, OR 97403 (fax: (541) 346-1327). The Eugene airport serves direct flights from San Francisco, Seattle, Portland, Denver, and Salt Lake City. Hotel accommodations are available within walking distance of the university; information concerning hotel reservations will be posted in February. An optional excursion to Crater Lake is planned for the day following the conference. Conference registration fee: $25. For further information, contact C. Vakareliyska.
From David_Tuggy at SIL.ORG Tue Jan 21 06:59:00 1997 From: David_Tuggy at SIL.ORG (David_Tuggy at SIL.ORG) Date: Tue, 21 Jan 1997 01:59:00 -0500 Subject: syntax/semantics, form/meaning Message-ID: I played hookey for a few days from my post in lurkitude (I liked that word!), and then came back to find that the Funknet had come alive! Seems like a fair bit of heat and maybe even a decent amount of light has been generated. Not sure I've absorbed all the light (or even all the heat) that might do me good, but thought I'd add my two bits' worth anyway. The first part of this is trying to understand what others have said in this discussion: I'd appreciate being straightened out if I've gotten it wrong. Also, I'm sorry it runs on so long, but I think slowly ... Parsers were interesting, but the questions that most seem to have vexed people's souls are two closely related ones (if they're not in some sense the same one): Is syntax autonomous, particularly from semantics? Can/should we study linguistic forms (structures) without reference to their function or meaning? As usual, the answers depend on the definitions you give to the technical terms and the assumptions or presuppositions on which those definitions are based. After some initial sparks, Tom Givón and George Lakoff seem to have settled it between them that neither of them really denies that there is such a thing as syntax, and they are in agreement that it is not autonomous. I don't know anybody else who denies that syntax exists, but Fritz Newmeyer and others are ready to argue that it is autonomous. The form/function thing came up in that context, and just about everybody seems happy to say "Yes, there is form and there is function, and they are tied together, but not so closely as to make them indistinguishable."
And everybody seems to agree that both are worth investigating, but they're not all in agreement on whether there's any point in (or possibility of) investigating form without investigating function at the same time. In other words, are syntax and form separate or autonomous enough that we can profitably treat them on their own terms without bringing in functional or semantic information? The crucial definitions and underlying assumptions include "syntax", "semantics", "form", "function", and "autonomous". We assume we know what each other means, and build arguments, or appeal to analogies with other sciences such as biology, all of which makes sense given our views of these key words, but those analogies and arguments will not be as forceful to someone who means something different by them. A standard position (I think) is that "semantics" is "truth-conditional semantics", and syntax includes word-order or phrase-order information, constituency information, "grammatical class" information, "grammatical relations", gender and agreement phenomena, and such-like. All of these things are considered non-semantic in nature. In doing syntax you pay attention to semantics only enough to let you know if two structures (usually sentences) are synonymous (i.e. have the same truth conditions) or not, but basically you ignore it. As Dick Hudson put it, syntactic generalizations "refer to words, word-classes and syntactic relations, without mentioning meaning (or phonology)." So what does it mean for syntax to be "autonomous" from semantics? For Fritz it seems it is enough to prove that some generalizations about syntactic patterns follow from other syntactic patterns or primitives: "A[utonomous]S[yntax] holds that a central component of language is a *formal system*, that is, a system whose principles refer only to formal elements and that this system is responsible for capturing many (I stress 'many', not 'all') profound generalizations about grammatical patterning."
Similarly for Dick Hudson "The generalisations that distinguish auxiliary and full verbs are `autonymous'[sic], in the sense that they refer to words, word-classes and syntactic relations, without mentioning meaning (or phonology)." For Tom Givón and some others it seems to be enough to prove that some generalizations about syntactic patterns follow from something other than syntactic patterns or primitives to show that syntax is *not* autonomous. Thus Tom says "Grammar is heavily motivated by semantic/communicative functions. But -- ...it never is 100% so. It acquires a CERTAIN DEGREE of its own independent life. This, however, does not mean 100% autonomy." Yet Tom agrees with George that syntax is non-autonomous. Wally Chafe's position is similar, I think (I loved this posting, Wally): "My own view is that the stuff of which syntax is made--its elements and the constructions into which they enter--is either functional stuff (and a great deal of it most certainly is), or consists of the fossilized (or partially fossilized) remains of things that were functional at an earlier time." The positions seem pretty close to me: they are separated by the definition of autonomy. For the AS people anything that's not explainable by semantics/pragmatics/function/non-linguistic cognition/etc. proves autonomy: for the Givón-functionalists anything that is proves non-autonomy. Both agree that some important generalizations are "syntactic", not to be accounted for by semantics/pragmatics/function/etc. Of course they differ as to how much is to be accounted for in which way, and in the importance they assign to what is accounted for in each way, but they seem to be playing basically the same game, or at least on the same field. I guess I'm agreeing with Nick Kibre: "Ultimately, it seems that the autonomous syntax and functionalist/cognitive position are more edges of a continuum than strictly opposing viewpoints.
Nearly everyone agrees that language is shaped both by innate cognitive mechanisms, at least partially specialized for linguistic function, and by the demands of usage; our only point of disagreement is how tightly the former constrains the range of possible systems, and how much regularity is due to the pressures of the latter." And form? I think both these camps basically agree that what is not predictable from semantics/pragmatics/function/cognition/etc. is formal. Whatever is ossified, fossilized, so just because that's the way speakers of this language do it and not because of some semantic/pragmatic/etc. necessity, is formal. And syntax is the major domain of such formality, I think, for both. (The lexicon too, I suppose, though I'm not as sure.) Or am I oversimplifying? Matthew Dryer wrote that "autonomous syntax" means for some (1) something about innateness (which I don't want to talk about), and "(2) one can explain syntactic facts in terms of syntactic notions (3) syntax/grammar exists (although that too can mean different things)" "Arguments for the autonomy of syntax," Dryer continues, "(such as some offered in print by Fritz Newmeyer) often involve no more than arguments for (3). For me (and I assume that this was what both George and Tom meant), rejecting autonomy of syntax involves rejecting (1) and (2)." Matthew, are you saying that no syntactic facts can be explained in terms of syntactic notions, or that not all can, or what? And although I hate to bring any division between them when they've made up so nicely, I'm not sure at all that George and Tom both mean the same thing. It seems to me that the position that George alluded to and somewhat described, the one he and Ron Langacker share (and which it suits me to work from), is different from what's been said so far. Under this view, syntax is non-autonomous in a much more radical sense. But a rethinking, i.e. different definitions, of the basic concepts, is necessary.
Semantics is not limited to what truth-conditions show us, but extends to virtually any kind of cognitive activity: it includes all kinds of "imagery" and "construal" factors, degrees of prominence of parts of a concept vis-a-vis each other or of one concept vis-a-vis another, attitudinal (emotional) information, etc. Significantly, it includes relations of one concept to another (e.g. the relation of an actor to the act he performs can be semantic). Semantics and pragmatics, to the extent that it is useful to distinguish them, differ only in degree, not in kind, and for most purposes what others call "pragmatics" is included as part of semantics. Categories are not expected to be classical, airtight, all-or-nothing compartments, but rather are organized around prototypes, with fuzzy borders and surprising extensions. Any concept of these sorts becomes *linguistic* (i.e. part of an established *language*) by being cognitively "entrenched", i.e. routinized, ossified, fossilized, in some degree, and "conventionalized", i.e. shared and known to be shared by a community of speakers. Although I don't remember George or Ron using the word in this way, "formalization" is what the other folk seem to be calling this process, and I think we can usefully make the connection. "Formalization", then, is a function, and a very important one: we couldn't talk coherently to ourselves or to others without it, and all linguistic structures undergo it to some degree. "Formalization" in this system does not necessarily mean arbitrariness, however. What is entrenched typically is entrenched because it makes sense, it functions well. In general there is claimed to be a gradation between the fully arbitrary and the totally predictable, with most language phenomena in between, in the "motivated" or "reasonable" part of the cline. Some linguistic patterns may in fact hold *only* because "that's the way we do it"; most also have some element of "we do it because ...". 
In either case, they are established, "formal" patterns. As Ron puts it, it is important to distinguish between what kinds of structures there are (only semantic, phonological, and symbolic, says he), and the predictability of their behavior. Few if any are fully predictable apart from "that's just the way we do it"; even if that is the only motivation left, the structures do not cease to be semantic, phonological, or symbolic. Phonology, under this framework, is also a subpart of conventionalized, entrenched cognition, namely the part that deals with the motor and auditory routines that constitute our pronunciations and perceptions of linguistic sounds. Included in it are the timing relations that constitute the order of production of phonological structures. (Note that much non-linguistic cognition can also become entrenched and even conventionalized: e.g. the motor routines of driving a stick-shift car constitute a quite complex and flexible system that is, in this sense, "formal".) Lexical items, reasonably uncontroversially, are claimed to have a semantic structure (i.e. for L&L a non-classical category of entrenched conventionalized cognitive routines) which is (again by an entrenched, conventionalized cognitive routine) linked in a symbolization relationship to a phonological structure. These structures, and their symbolic relation, are "formal" from the start. So proving that something is "formal" doesn't make it non-semantic or non-phonological, requiring us to make a third category of the "syntactic". And in fact, the claim is that syntactic constructions, of whatever degree of complexity or abstraction, are basically of the same ilk as simple lexical pairings. They differ from them only in degree, not in kind; consisting, as lexical items do, in the symbolic pairing of a semantic with a phonological structure.
Syntax thus is non-autonomous not in that some of it can be accounted for by non-formalized stuff: it is non-autonomous in that all of it (so the claim is) can be accounted for by the same sort of formalized cognition that constitutes semantics and phonology. Wally Chafe's words come back to mind: "the stuff of which syntax is made--its elements and the constructions into which they enter--is either functional stuff (and a great deal of it most certainly is), or consists of the fossilized (or partially fossilized) remains of things that were functional at an earlier time." George and Ron would say "it is semantic, phonological or symbolic stuff. In some of it some functional motivation besides pure convention can be seen: some of it may indeed be fossilized to the point where that is about all that is left, but it is still the same kind of thing." The arguments I remember being given for the non-semantic nature of the syntactic "stuff" were based on two ideas: the notion that semantics is only truth-value stuff (and in fact pretty nearly only real-world referential stuff), and the notion of classical, all-or-nothing categories. Thus most of us were told in our first-year grammar or syntax courses, "Yes, you might think there was a semantic basis for, say the 'noun' category, and indeed a word designating a person, place or thing usually is a noun. But some nouns name actions, and so the definitions don't work." Since semantics (i.e. truth-conditional semantics) can't predict 100% of the cases, the category is non-semantic: what else could it be but syntactic? I.e. the only thing that will do to define it is the way it interacts with its linguistic neighbors." But if categories needn't be all-or-nothing, the argument fails. 
For surely, if you allow prototype-based categorization, the category 'noun' prototypically does in fact mean something pretty near to the traditional "person, place or thing", and other cases, even those denoting actions, though correctly seen as non-prototypical, are pretty clearly relatable to this prototype. The same works for 'verb' and other categories (see e.g. Ron's 1987 "Nouns and Verbs" in Lg. 63.53-94; he actually goes beyond this and proposes schematic concepts that do pretty well at covering all the cases, claiming e.g. that we conceptualize an action differently when we refer to it via a noun or a verb). The relations between nouns and verbs fit in beautifully, since the way something interacts with its linguistic neighbors is in fact part of cognition, and to the extent that those interactions are entrenched and conventionalized, they are linguistic, i.e. semantic, phonological, or symbolically linking the two. Yes, a category such as 'subject' doesn't always mean 'agent', but if you include volitional agents as prototypical subjects, and organize the category around them, it works out quite satisfactorily. Other supposedly "syntactic" concepts fit in the framework just as well. The phonological nature of these syntactic structures bears comment. I don't remember ever being given an argument for the non-phonological nature of syntax. And yet, why is word order (or phrase or clause order, or morpheme order) any less a *phonological* fact than phoneme order? Under this perspective it is a phonological fact. And in any particular construction that phonological order relationship symbolizes a semantic relationship (e.g. that of a verb to its subject or object.) Both semantic and phonological structures can be so schematic (underspecified) that they are not useful alone, but are useful for specifying patterns, yet those patterns differ only in degree, not in kind, from fully-specified structures.
Anyway, it seems to me that Langacker and Lakoff's position on this is different from the others I've been reading. It does not deny (in fact it insists) that linguistic structures are formal(ized), but it says that is true of everything, not just syntax. Certainly it allows for many "profound generalizations" to be accounted for by the syntactic relations of one structure to another, but it insists that those syntactic relations themselves are non-autonomous in the sense that they are the same kind of thing as the rest of language, i.e. they consist of conventionalized, entrenched, "formalized" cognitive patterns, either phonological (related to speech sounds) or semantic. Dan Everett says that we should compare models by their "empirical production". "Arguments must revolve around the empirical, case-by-case." I guess he means by their success in dealing with real language data. If he does, I think I mostly agree. I've liked Ron's and George's model because it helps me deal with my real language data where the others I've tried didn't. But it helps me to try to sort out (as I've tried to here), who's claiming what, before I can evaluate what particular data really prove with regard to each model. David Tuggy From edith at CSD.UWM.EDU Wed Jan 22 20:06:50 1997 From: edith at CSD.UWM.EDU (Edith A Moravcsik) Date: Wed, 22 Jan 1997 14:06:50 -0600 Subject: form Message-ID: =====> To JON ASKE: Thanks for your two responses (Friday, January 17 and Saturday, January 18). I think we are in agreement on the basic fact that a full account of grammar includes consideration of both form and function. Where we disagree is that I believe within this total endeavor there is a distinct step devoted to the study of form independent of meaning, while you are questioning this. -- ************************************************************************ Edith A. 
Moravcsik Department of Linguistics University of Wisconsin-Milwaukee Milwaukee, WI 53201-0413 USA E-mail: edith at csd.uwm.edu Telephone: (414) 229-6794 /office/ (414) 332-0141 /home/ Fax: (414) 229-6258 From bates at CRL.UCSD.EDU Thu Jan 23 06:08:16 1997 From: bates at CRL.UCSD.EDU (Elizabeth Bates) Date: Wed, 22 Jan 1997 22:08:16 -0800 Subject: form Message-ID: I just thought I would include, for the edification of all on this list, a quote that I just read from this week's Newsweek Magazine, from our very own Fritz Newmeyer. Anyone care to add this tidbit to the running discussion? -liz bates The new millennium will also bring the discovery of genes for specialized bits of language. Already, researchers have found a genetic mutation that shows up in an inability to put suffixes onto words: people who carry the gene cannot add "-s" or "-er" or "-ed" to words, explains [linguist Fritz] Newmeyer [of the University of Washington]. "In the next century we will locate other aspects of language in the genes," he believes. Could a gene for the subjunctive be far behind? Next time you don't know whether it's "if she was" or "if she were," you'll be able to blame your DNA. from Sharon Begley, "Uncovering secrets, big and small". In Beyond 2000: America in the 21st Century. Newsweek, January 17, 1997, pp 63-64. From fjn at U.WASHINGTON.EDU Thu Jan 23 15:56:54 1997 From: fjn at U.WASHINGTON.EDU (Frederick Newmeyer) Date: Thu, 23 Jan 1997 07:56:54 -0800 Subject: form In-Reply-To: <199701230608.WAA28417@crl.UCSD.EDU> Message-ID: I hope it is clear that the quote stops where the quote stops. Not only did I NOT speculate that there might be a 'gene for the subjunctive', I patiently explained to the reporter that such would be utterly implausible and was careful (I thought) to dissociate any claims about a genetic basis for grammar from discussions of prescriptivism, Ebonics, and whatever else the public might associate with the notion 'grammar'. But, alas,...
--fritz On Wed, 22 Jan 1997, Elizabeth Bates wrote: > I just thought I would include, for the edification of all on this > list, a quote that I just read from this week's Newsweek Magazine, > from our very own Fritz Newmeyer. Anyone care to add this tidbit > to the running discussion? -liz bates > > > The new millennium will also bring the discovery of genes for > specialized bits of language. Already, researchers have found > a genetic mutation that shows up in an inability to put suffixes > onto words: people who carry the gene cannot add "-s" or "-er" > or _ed" to words, explains [linguistic Fritz] Newmeyer [of the > University of Washington]. "In the next century we will locate > other aspects of language in the genes," he believes. Could a gene > for the subjunctive be far behind? Next time you don't know whether > it's "if she was" or "if she were," you'll be able to blame your DNA. > > from Sharon Begley, "Uncovering secrets, big and small". In Beyond 2000: > America in the 21st Century. Newsweek, January 17, 1997, pp 63-64. > From kemmer at RUF.RICE.EDU Thu Jan 23 17:24:45 1997 From: kemmer at RUF.RICE.EDU (Suzanne E Kemmer) Date: Thu, 23 Jan 1997 11:24:45 -0600 Subject: Graduate fellowships at Rice Message-ID: The Linguistics Department at Rice University (home of Funknet!) encourages applications from well-qualified students for admission to our Ph.D. program in Linguistics for 1997-98. The Ph.D. program at Rice emphasizes the study of language use, the relation of language and mind, and functional approaches to linguistic theory and description. Areas of intensive research activity in the department include cognitive/functional linguistics; in-depth study of the languages of North and South America, the Pacific, and Africa; language universals and typology; language change and grammaticalization studies; lexical semantics; corpus linguistics; computational modeling; neurolinguistics; discourse studies; and second language acquisition. 
The department offers support in the form of tuition waivers and fellowships to qualified doctoral students. Both U.S. and international applicants are admitted on the same basis, and financial aid is not restricted to U.S. citizens. Current doctoral candidates include not only U.S. students but also students from Australia, Brazil, China, Germany, and Korea. Prospective students of diverse linguistic backgrounds are encouraged to apply. Admission and fellowships are awarded on a competitive basis. Students enjoy access to departmental computer facilities; the department's and university's excellent Linguistics collections (including a huge library of descriptive grammars); funds for conference travel; and photocopying accounts. Graduate housing next to campus is available; students can also take advantage of the affordable rental market in Houston, the nation's fourth largest city. With its many immigrant communities, the city provides not only wonderful opportunities for fieldwork, but also for (affordably) sampling a vast array of international cuisines. Applications are available from the following addresses:
EMAIL: ukeie at ruf.rice.edu
REGULAR MAIL: Ursula Keierleber, Coordinator, Department of Linguistics, Rice University, 6100 Main St., Houston TX 77005-1892
TELEPHONE: (713) 527-6010
Graduate Record Examination scores must be received by the department as soon as possible. Two letters of recommendation from relevant faculty are also required. Applications should be received by February 1, 1997. For more information about the program, see the department's WEB PAGE at: http://www.ruf.rice.edu/~ling Subpages include: Department--the basic info about the orientation of the department People--Faculty, students, staff, visitors Activities--Research Projects, Funknet, Distinguished Speakers/Colloquium series, Biennial Symposia etc.
Programs--Graduate and Undergraduate Degree Programs Courses--This year's course schedules From bates at CRL.UCSD.EDU Thu Jan 23 18:19:56 1997 From: bates at CRL.UCSD.EDU (Elizabeth Bates) Date: Thu, 23 Jan 1997 10:19:56 -0800 Subject: form Message-ID: But shall we assume (and I promise NOT to say "Dear Fritz...") that you DID endorse the claim that the genetic basis of grammatical morphology has been discovered? That claim has made the rounds for several years, it is all based on a premature report about the K Family in London, and the report is stunningly wrong. Faraneh Vargha-Khadem and her colleagues, who have studied this family for years and were in no way responsible for the original report (a letter to Nature by Myrna Gopnik) published a thorough study of the family in 1995 in the Proceedings of the National Academy of Sciences, showing that there is absolutely no dissociation between regular and irregular morphology, or (for that matter) between grammar and other aspects of language, because the affected members of the family are significantly worse than the unaffected members on a host of different language tests and on a number of non-linguistic measures as well. They also have a serious form of buccal-facial apraxia, i.e. they have a hard time with complex movements of the mouth, so severe that some members of the family supplement their speech at home with a home signing system. A separate paper (not in the Proceedings) shows that they also have a hard time (relative to familial controls) with a finger-tapping task! Imagine the following tabloid headline: "Elton John and Lady Diana spent hot night together in Paris." You buy the paper, open up to page 17, and discover that they spent the night together with 400 other people at a party. Kind of changes the interpretation, no?
In the same vein, the grammatical deficits displayed by the affected members of this family are part of a huge complex of deficits, in no way specific to grammar much less to specific aspects of morphology. But the rumor continues to be passed around....-liz From fjn at U.WASHINGTON.EDU Thu Jan 23 19:53:30 1997 From: fjn at U.WASHINGTON.EDU (Frederick Newmeyer) Date: Thu, 23 Jan 1997 11:53:30 -0800 Subject: form In-Reply-To: <199701231819.KAA03416@crl.UCSD.EDU> Message-ID: The Vargha-Khadem, et al. paper that Liz refers to is nothing less than scandalous. This 3 1/2 page (!) paper, which Liz calls 'thorough', was published in 1995, yet refers to no work by Myrna Gopnik and her group on the K family that was published after 1991. In that period, they carried out dozens of tests on the family that directly contradict the claims of Vargha-Khadem, et al. The slipperiest thing that V-K do is to imply that nongrammatical problems manifested by one (or some) of the affected family members are manifested by *all* of the affected family members, giving the illusion that there is a broad syndrome of problems associated with the inability to handle inflectional morphology. In fact, there is none. The low IQ scores for the affected members reported by V-K not only contradict the scores reported by Gopnik and her associates, but also contradict the scores published by *Varga-Khadem's own research group*. There is no explanation for this discrepancy; in fact, there is no evidence that the affected members of the family have statistically significantly lower IQs than the nonaffected members. The 'intelligibility' problems reported by V-K and repeated by Liz appear to be almost entirely a function of the testing situation. The V-K group brought the (uneducated working-class) family members into a laboratory and pointed bright lights and video cameras at them. 
Relaxed settings (party-like atmosphere in the subjects' homes) revealed vastly improved articulatory abilities and few of the other problems reported by V-K. Bill Labov taught us linguists decades ago about the importance of a nonthreatening environment if one wants to assess natural speech. Few psychologists, it would seem, have learned the lesson. Implications that an auditory processing deficit is responsible for the dysphasia cannot be correct. Affected members of the K family perform excellently on phoneme-recognition tasks, and, moreover, have no difficulty perceiving unstressed word-final segments that mimic the form of inflectional suffixes (e.g. the final alveolar in words like 'wand'). Furthermore, whatever deficit the affected family members might have in articulation, it could hardly explain why they make errors with suppletive past tenses ('was', 'went') and with irregular pasts, regardless of the sound that happens to occur in final position ('took', 'drove', 'got', 'swam', etc.). And, as Goad and Gopnik have pointed out, 'it is very hard to see how articulatory problems could prevent them from making correct grammaticality judgments or ratings which require them to just nod yes or no or to circle a number'. I could write much much more, but instead will refer you to an upcoming special issue of Journal of Neurolinguistics, in which this issue will be discussed in detail. --fritz On Thu, 23 Jan 1997, Elizabeth Bates wrote: > But shall we assume (and I promise NOT to say "Dear Fritz...") that you > DID endorse the claim that the genetic basis of grammatical morphology > has been discovered? That claim has made the rounds for several years, > it is all based on a premature report about the K Family in London, and > the report is stunningly wrong.
Faraneh Vargha-Khadem and her colleagues, > who have studied this family for years and were in no way responsible > for the original report (a letter to Nature by Myrna Gopnik) published > a thorough study of the family in 1995 in the Proceedings of the National > Academy of Sciences, showing that there is absolutely no dissociation > between regular and irregular morphology, or (for that matter) between > grammar and other aspects of language, because the affected members of > the family are significantly worse than the unaffected members on a host > of different languages tests and on a number of non-linguistic measures > as well. They also have a serious form of buccal-facial apraxia, i.e. > they have a hard time with complex movements of the mouth, so severe > that some members of the family supplement their speech at home with > a home signing system. A separate paper (not in the Proceedings) shows > that they also have a hard time (relative to familial controls) with > a finger-tapping task! > > Imagine the following tabloid headline: "Elton John and Lady Diana > spent hot night together in Paris." You buy the paper, open up to > page 17, and discover that they spent the night together with 400 > other people at a party. Kind of changes the interpretation, no? > In the same vein, the grammatical deficits displayed by the affected > members of this family are part of a huge complex of deficits, in no > way specific to grammar much less to specific aspects of morphology. > But the rumor continues to be passed around....-liz > From john at RESEARCH.HAIFA.AC.IL Fri Jan 24 06:40:29 1997 From: john at RESEARCH.HAIFA.AC.IL (John Myhill) Date: Fri, 24 Jan 1997 08:40:29 +0200 Subject: Newmeyer's quote Message-ID: I think it would be a good idea for linguists to just keep our mouths shut if the popular press comes looking for quotes about language and genetics. 
It's obvious, to me at least, that, unless we know for sure in advance that we will have 100% control over exactly what is printed, we are going to come across as having a racist agenda. We know that this isn't true, but after Newmeyer's quote in Newsweek, I think that many non-linguists are going to think either that linguists are racists or that linguistic research shows that black-white differences in speech are genetically-based. It isn't enough to say `Alas.' Let's think first and talk on the record later or not at all. We don't need to be so desperate to see ourselves in the news. John Myhill From ocls at IPA.NET Fri Jan 24 15:28:24 1997 From: ocls at IPA.NET (George Elgin, Suzette Haden Elgin) Date: Fri, 24 Jan 1997 09:28:24 -0600 Subject: newmeyer's quote Message-ID: On January 24th Dr. Myhill wrote: "I think it would be a good idea for linguists to just keep our mouths shut if the popular press comes looking for quotes about language and genetics. It's obvious, to me at least, that, unless we know for sure in advance that we will have 100% control over exactly what is printed, we are going to come across as having a racist agenda. We know that this isn't true, but after Newmeyer's quote in Newsweek, I think that many non-linguists are going to think either that linguists are racists or that linguistic research shows that black-white differences in speech are genetically-based. It isn't enough to say `Alas.' Let's think first and talk on the record later or not at all. We don't need to be so desperate to see ourselves in the news. John Myhill" I agree with most of what Dr. Myhill says here, and understand the parts with which I do not agree. However, I am much afraid that it's just not this simple. True, the media will grab whatever part of an interview seems to have the most "legs" and will use that, no matter how many warnings are given; true, much of the time there's no way to control what is printed.
Even when the reporter has agreed to the interviewee's constraints, the editors/publishers often overrule that agreement and do whatever they think will move copies or raise ratings. And it's not that they're indifferent to the fact that what they're doing is dangerous, it's that they haven't the least *idea* that it is. That's all true. But the charge that linguists have a racist agenda is not the only image problem we have. The average level of accurate information about language and linguistics in the general public is at Flat Earth level, and I am not just talking about "the masses." In the current "ebonics" mess, for example, the ghetto children have plenty of excuses for *their* ignorance; the allegedly educated adults from every walk of life who are pontificating in the public press on the subject have none. I respect each and every linguist's individual right to respond to this problem of public ignorance with "So what? It's not my problem and it doesn't interest me." Or with "If I tried to do something about it, I'd be misquoted -- what's the use?" Or both. But I don't, personally, feel that way about it. That ignorance has serious real-world consequences; we're all paying for those consequences. Language is our science; it seems to me that linguists have some responsibility in this matter. Because I am of the opinion that it *is* my problem, I do many interviews every year. (Much of the time I am misquoted, to some degree; quite right.) At least half the interviews begin with someone (or several someones) saying to me, "I hate interviewing linguists. They're elitists, they look down on everybody who isn't a linguist, they can't even get along with each other, and they can't be bothered to speak English." Followed, often, by the ultimate insult: "You people are worse than *doctors*!" 
I will never forget a conference on bilingual education in the seventies where the then secretary of education -- allegedly an educated adult -- got up for the keynote address and announced that "the reason bilingual education has failed in the United States is because the linguists have refused to help." It is still the case, after all these years, that when I go into a school and people are told that I'm a linguist, they say one of two things: "I'm afraid to talk to you, because all linguists do is watch for people to make mistakes" or "I don't want anything to do with linguists -- they're responsible for the mess we're in." I got an email message last year in which an academic who'd been flamed on Linguist List for asking a question informed me that that was the last time *he* intended to open his mouth in front of "Your Linguistnesses." With all due respect, it seems to me that perhaps being desperate to see ourselves in the news -- after taking time to think carefully, as Dr. Myhill stipulates -- is not necessarily such a bad idea. Suzette Haden Elgin From lgarneau at HOTMAIL.COM Fri Jan 24 17:34:58 1997 From: lgarneau at HOTMAIL.COM (Luc Garneau) Date: Fri, 24 Jan 1997 17:34:58 -0000 Subject: agree with suzette haden elgin Message-ID: While I prefer not to comment to almost ANYONE on the whole ebonics issue (because most non-linguists don't really understand what ebonics entails anyway), I have had similar experiences with people regarding the popular concept of what a "linguist" is. When I mention to people that I have an MA in linguistics and teach English/Linguistics part-time at National-Louis University in Evanston, they assume I am a strict prescriptivist, and are very concerned about talking to me about things. Suzette's "I am afraid to talk to you - linguists just look for mistakes" (pardon my approximate quote) comment is one I have encountered more than once!
When I explain that I am more of a descriptivist, then go on to explain the descriptive/prescriptive distinction, they go to the opposite extreme and assume that I am a proponent of "bad grammar". It's tough sometimes, studying such a constantly changing subject. As to the responsibility of linguists...I will wait to see what others say before commenting! Luc Garneau Adjunct Instructor of English National-Louis University Evanston, Illinois e-mail: lgarneau at hotmail.com From john at RESEARCH.HAIFA.AC.IL Sun Jan 26 08:39:53 1997 From: john at RESEARCH.HAIFA.AC.IL (John Myhill) Date: Sun, 26 Jan 1997 10:39:53 +0200 Subject: predictions Message-ID: Brian MacWhinney wrote: > >It seems to me that it is a free country and anyone can say anything that >they want, as long as it is not libelous. I'm sure that when Fritz was >being interviewed he told the reporter that linguists and psycholinguists >disagreed sharply on the interpretation of the genetic data. And probably >the reporter just decided to ignore his remarks on that issue. And >undoubtedly Fritz, like many of us who have been in a similar position, was >shocked to see how his story was reported. > First of all, I am not questioning FN's legal right to say what he said; I hope that I was not interpreted as suggesting that, and I assume that BW does not believe there are no non-legal bases for criticizing actions. Secondly, if your feeling is that it is so likely that you will be shocked at how your story is reported, maybe you had better not say anything; if your purpose for cooperating with the interview is to give information to the wider public, and there is an expectation that things will get screwed up, why say anything?
Leave this to people who are SERIOUS about interacting with non-linguists and will respond to misquotes NOT by throwing up their hands and saying 'I was misrepresented' but by devoting a major effort to correcting the situation, if necessary sacrificing their research agendas and position within the field for this purpose. John Myhill From TGIVON at OREGON.UOREGON.EDU Sun Jan 26 20:47:53 1997 From: TGIVON at OREGON.UOREGON.EDU (Tom Givon) Date: Sun, 26 Jan 1997 12:47:53 -0800 Subject: Chinese Linguistics Job Message-ID: From: tomlin at OREGON.UOREGON.EDU (Russell S. Tomlin) Date: Sat, 25 Jan 1997 12:52:22 -0800 (PST) Subject: ad copy for AAS Newsletter To: tgivon at OREGON.UOREGON.EDU >Date: Thu, 23 Jan 1997 11:21:36 -0800 (PST) >From: Risa Haberman Tom-- Here, finally, is the announcement for EALL. Can you get it posted on both FunkNet and LINGUIST? Thanks. --Russ ________________________ Position Announcement: University of Oregon, Department of East Asian Languages & Literatures > >The Department of East Asian Languages and Literatures seeks a dynamic >scholar and teacher who is a specialist in the area of Chinese >sociolinguistics or cultural linguistics. We are hoping to build upon our >strengths with a candidate whose research agenda contributes to a program in >literary, theoretical, and cultural studies.
> >Responsibilities: > > The candidate will teach undergraduate and graduate courses in Chinese >linguistics, cultural linguistics, and language pedagogy as well as manage >and direct the Chinese language program. The position also includes >coordinating and directing the fall term orientation for graduate teaching >fellows in Chinese, organizing the summer Chinese language program, and >active research and publication in the field of Chinese sociolinguistics or >cultural linguistics. > >Qualifications: > >Ph.D. or ABD in Chinese linguistics. Native or near-native fluency in >Mandarin Chinese and English, attested ability and recent experience in >directing and participating in a large undergraduate Chinese language >program, the ability to teach undergraduate and graduate courses in Chinese >linguistics, cultural linguistics, and language pedagogy, demonstrated >expertise and graduate work in foreign language education, and formal >graduate training in Chinese cultural studies. We will give priority to >candidates with demonstrated expertise and graduate work in foreign language >education, and formal graduate training in Chinese culture. Candidates with >a well-developed research portfolio and direction are encouraged to apply. > >Applications due by: February 1, 1997 >Send letter of application, vita, and three letters of reference to: Chinese >Search Committee, Dept. of East Asian Languages and Literatures, University of >Oregon, Eugene, OR 97403.
> > From fjn at U.WASHINGTON.EDU Mon Jan 27 17:03:09 1997 From: fjn at U.WASHINGTON.EDU (Frederick Newmeyer) Date: Mon, 27 Jan 1997 09:03:09 -0800 Subject: Message from Myrna Gopnik Message-ID: Dear Funknet subscribers, Myrna Gopnik has read the exchange in Funknet regarding the 'K family' and the general question of genetic dysphasia, and was kind enough to ask me to forward to you the following message: --fritz ---------- Forwarded message ---------- Date: Mon, 27 Jan 1997 12:35:42 EST5EDT From: GOPNIK at LANGS.Lan.McGill.CA To: fjn at u.washington.edu Subject: reply Over the last few years SLI has become a hot topic because it may have the potential to tell us something about the biological basis of language. In a recent exchange Liz Bates has raised some questions about this research. I am sorry to say that comments about this research have often generated more heat than light. If we are really interested in the science of it all then it is important to get the issues out on the table and see which ones we can agree about, which ones are still outstanding and how we could resolve them. So this note is intended not so much to cite data as it is to at least make a stab at clarifying some of the issues. (I will not be able to resist citing a little data and I would be glad to respond to any request for more details.) Our research program over the years has been clear: start with broad ranging, linguistically significant tests; examine the results; construct linguistically sound hypotheses; design new, hard tests, which sometimes require looking at new languages; look at the results; refine the hypotheses and start all over again. And it has worked. Bates appears to fault us for not always using standardized tests, but those tests are useless for addressing new hypotheses. For example, the original data from English, Japanese and Greek told us that language impaired subjects had particular trouble with inflections like tense. 
The linguistic question was whether it was inflectional rules or morphological complexity that was the problem. Would they have as much difficulty in finding the root in a complex word as they clearly had in adding an inflection? There was no way to test this hypothesis in English, but Jenny Dalalakis pointed out that there was a way of testing it in Greek; you have to be able to extract roots out of inflected forms in order to construct new compounds or diminutives. No one had ever looked at this before so she had to construct totally new tests, find out if and when young children could do these tasks, and then try them out on impaired unilingual native speakers of Greek. Jenny Dalalakis's innovative work on compounds and diminutives in Greek has made it clear that the language impaired subjects have just as much difficulty with finding the root of a complex word as they have with adding an inflection. And they have these problems in nouns and adjectives and not just on verbs so the problem cannot be accounted for by agreement or optional infinitives. Had we taken the easy route and just stuck with standardized tests we would not have been able to address these linguistically significant questions. Testing new hypotheses, with new tests on new languages is a risky business, but it is the necessary path if you want to find out new truths and not merely confirm old models. The aim, after all, is not merely to tally up the numbers from tests, but to use these results to construct hypotheses about the internal grammar of the individual that is producing these results. From this point of view the contrast of "deviant" vs. "delay" is not easily interpretable. We know that at an early stage of language development children treat inflected words as if they were unanalyzed chunks and it looks like language impaired individuals do the same. But there are other huge differences in their grammars with respect to the lexicon, syntax and compensatory strategies. 
On what grounds can this one particular similarity lead us to say that their grammar is "delayed" and not "deviant" (especially since we know that this "delay" lasts at least until age 76)? After eight years, hundreds of tests, thousands of data points, and almost a hundred impaired subjects representing four different native languages (English, French, Greek and Japanese), we are convinced that the data converge to tell us that, among other things, the language impaired subjects cannot construct normal representations for grammatically complex words and they therefore cannot use rules which depend on the content of these representations. Let me give you just one example of the kind of cross-linguistic evidence that we have gathered. If a subject produces a form like "walked" that appears to be inflected, we cannot tell whether this form is grammatically complex in their mental lexicon or whether it is merely a single unanalyzed chunk. One of the ways we have studied this is to see if, given a novel word, the subject could produce complex novel forms.

Ability to mark novel words grammatically

In each of these tests the subjects were given a context, usually in pictures, which required that a grammatical rule be applied to a novel word: This pencil is weff. This pencil is even _____.

                               % correct
                          CONTROLS   IMPAIRED
PAST TENSE
  English (in England)      95.4       38.0
  English (in Canada)       93.5       52.3
  Greek                     87.1       20.0
  French                    92.6       33.3
  Japanese                  89.1       37.0
PLURALS
  English (in England)      95.7       57.0
  English (in Canada)       99.2       58.3
  Greek                     79.8       42.1
COMPARATIVES
  English (in England)      74         21
COMPOUNDS
  Japanese                  80.5       20.2
  Greek                     93.6       12.8
DIMINUTIVES
  Greek                     83.9       40.2

These data clearly and convincingly show that the language impaired subjects are significantly worse at producing complex forms from novel words than are unimpaired subjects for every grammatical task and for every language population that we have tested.
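For readers who want the size of the effect quantified, the control-minus-impaired gap can be tabulated directly from the figures above. This is a quick illustrative computation only; the grouping of the scores into a dictionary and all variable names are mine, not Gopnik's.

```python
# Percent-correct scores copied from the table above, keyed as
# (task, language): (controls, impaired). The structure is mine.
scores = {
    ("past tense", "English (England)"): (95.4, 38.0),
    ("past tense", "English (Canada)"):  (93.5, 52.3),
    ("past tense", "Greek"):             (87.1, 20.0),
    ("past tense", "French"):            (92.6, 33.3),
    ("past tense", "Japanese"):          (89.1, 37.0),
    ("plurals", "English (England)"):    (95.7, 57.0),
    ("plurals", "English (Canada)"):     (99.2, 58.3),
    ("plurals", "Greek"):                (79.8, 42.1),
    ("comparatives", "English (England)"): (74.0, 21.0),
    ("compounds", "Japanese"):           (80.5, 20.2),
    ("compounds", "Greek"):              (93.6, 12.8),
    ("diminutives", "Greek"):            (83.9, 40.2),
}

# Control-minus-impaired gap for every task/language cell.
gaps = {key: ctrl - imp for key, (ctrl, imp) in scores.items()}

# Controls outperform the impaired group in every single cell,
# and the average gap is large: about 53 percentage points.
assert all(g > 0 for g in gaps.values())
mean_gap = sum(gaps.values()) / len(gaps)
print(round(mean_gap, 1))
```

Every one of the twelve cells shows a positive gap, which is the convergence across tasks and languages that the message is arguing for.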
(We have closets of data and drawers of papers about a wide range of tests that we would be glad to send to anyone who is interested. N.B. we have already sent out over 200 copies of the McGill working papers that Bates refers to, so it is not exactly hard to get.) Are we finished with our work? Not by a long shot. We have lots more questions still to be answered just about morphology, and though we know that they have problems with prosody and syntax we do not understand the complete picture yet. There are more things in language impairment than are accounted for in our hypotheses. But we think that our program of research has made real progress and that eventually the truth will out. Bates once more raises two specific issues with respect to the K family: oral apraxia and low IQ. I have responded to them often before, but they are harder to put to rest than old Hamlet's ghost. I will try once more. There seem to be three different ways in which these issues have been used: 1. to imply that I have misreported the facts about the K. family and therefore my work should not be trusted; 2. to claim that the apraxia (Fletcher, 1990) or the low IQ (Vargha-Khadem) are the direct cause of the language problems; 3. to argue that this is not primarily a disorder of language because the members of the K family have other problems. The first issue is easy to clarify. I am a linguist, therefore I depended on reports from other experts with respect to apraxia and IQ. A neurologist, Dr. T. Ashizawa, examined the oral movements of several of the language impaired members of the K. family in the standard way and reported that they had normal oral motor function. Vargha-Khadem herself reports that the signs of apraxia show up only when the subjects have to perform several oral activities at the same time. The original IQ tests which I reported were done by the school psychologists who, over several years, found that all of the children were in the normal range.
Vargha-Khadem later retested them with a much wider range of newly normed tests and the numbers changed. Do they have apraxia? Do they have low performance IQ? It depends on what tests you use and what your criteria are. All we can conclude is that if you use different tests at different times with different criteria you can get different results -- there needs no Bates, my friends, come from California to tell us this. But what does it all mean? First apraxia. Two phonologists, G. Piggott and H. Goad, spent many hours with the K family and even more hours listening to and transcribing tapes. They conclude that the errors that they make when they talk cannot be accounted for by oral motor problems (this is all reported on in detail in the McGill Working Papers). In fact, these subjects regularly produce forms that are harder to articulate than the normal form. One example: in producing the plural forms for novel words they regularly do not use voicing assimilation, i.e. they say /wug-s/ not /wug-z/. Fletcher early on suggested that their problems with tense might be accounted for by the articulatory vulnerability of regular tense marking in English. We tested this hypothesis and it is simply false. We have reported on hundreds of pieces of data about their use of tense in a wide range of different tests -- spontaneous speech, reading, writing -- and they show the same pattern of problems with tense across all of these tests. Bates, herself, now seems to grant that the linguistic data cannot be accounted for by any purported oral motor problems. So even if one grants that some members of the K. family may have oral apraxia, it tells us nothing about their language problems. Let's get beyond the K family for a minute. This problem with tense is not unique to them. It shows up everywhere that we have looked:

Ability to produce tense marking (% correct)

           English (Eng.)  English (Can.)  French  Japanese  Greek
impaired        38.3            52.3        46.7     48.1     20.0
controls        91.7            93.5        96.4     97.9     87.1

Subjects were given items like: Everyday I walk to school. Just like everyday, yesterday I ______ . This task requires the subject to recognize that the temporal context specified in the second sentence requires a particular verb form. Language impaired subjects outside of the K family have similar problems with tense no matter how it is encoded in their native language: in a final stressed syllable as in French, or in a three-syllable element like "mashita" in Japanese. The IQ scores tell us nothing about their problems with language either. Even if we credit the new K. numbers over the original numbers, all they show is that the means for the two groups are different. But still a language impaired member of the K. family scores 112, while his unimpaired relative scores only 84. It is a truism in speech pathology that some individuals with language disorder have lowish performance IQs and others have very high scores, as high as 140. (I have even recently been told about a language impaired subject who is a rocket scientist.) And there are lots of cases of individuals with low performance IQ who do not have these kinds of problems with language. Performance IQ scores simply do not correlate with these patterns of language impairment. It is absolutely clear, from our own research as well as from other research, that individuals with language impairment sometimes have other problems ranging from dyslexia to spatial rotation to depression to schizophrenia to apraxias and agnosias, but none of these other specific deficits reliably occurs with the language disorder, and there are many individuals that have one of these other problems without having any language disorder. Language impaired let us grant them then, and now remains that we find out the cause of this effect, or rather say the cause of this defect.
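Gopnik's point that group means can differ while individual scores overlap widely is easy to see with made-up numbers. Only the two real scores, 112 (an impaired K-family member) and 84 (his unimpaired relative), come from the text; every other IQ value below is hypothetical, invented purely for illustration.

```python
# Hypothetical performance-IQ scores; only 112 and 84 are from the
# text, the rest are invented to illustrate overlapping groups.
impaired_iq   = [112, 95, 140, 85, 100]
unimpaired_iq = [84, 110, 98, 120, 105]

# Point-biserial correlation between group membership (1 = impaired)
# and IQ, computed with the standard Pearson formula.
xs = [1] * len(impaired_iq) + [0] * len(unimpaired_iq)
ys = impaired_iq + unimpaired_iq
n = len(xs)
mx, my = sum(xs) / n, sum(ys) / n
cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / (n - 1)
sx = (sum((x - mx) ** 2 for x in xs) / (n - 1)) ** 0.5
sy = (sum((y - my) ** 2 for y in ys) / (n - 1)) ** 0.5
r = cov / (sx * sy)

# An impaired individual can outscore an unimpaired one outright,
# and the group/IQ correlation in this toy sample is near zero.
assert max(impaired_iq) > min(unimpaired_iq)
assert abs(r) < 0.2
```

With overlapping distributions like these, knowing an individual's performance IQ tells you essentially nothing about whether that individual is language impaired, which is the shape of the argument being made.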
The question is whether the language impairment that we see in these individuals comes from a separate and special "language faculty" that is out of order or whether some more general cognitive or perceptual processing system is not functioning and the purported "language" problems are merely a result of a breakdown in a much more general system. Do these other problems simply co-occur with this disorder or are they the proximate cause of the language problems? What mechanisms could make disorders co-occur? One possibility is the contiguity of the genes that code for the co-occurring disorders. For example, the most striking feature of individuals with William's syndrome is that their spatial reasoning is seriously impaired. It is also the case that many of these individuals have heart and joint problems (as do many individuals who do not have spatial problems). But no one seriously supposes that their heart and joint problems cause their spatial problems. What appears to be going on is that the gene that is implicated in William's syndrome is very close to the genes that code for elastin, which builds normal hearts and joints. If the mutation is large it hits several genes at the same time. Though there is now a flood of epidemiological studies and twin studies that all indicate that at least some cases of SLI are connected to some genetic factors, it is still a puzzle what the exact pattern of inheritance is and what genes are involved -- lots of groups are working on these problems, including our team led by Roberta Palmour. There will be a loud hurrah when this is figured out. But since we do not yet know what the genes are, we cannot tell if the co-occurring problems can be accounted for by genetic contiguity. The natural next question is what direct effect do these genes have on the organism? The obvious answer seems to be that something goes wrong in the development of the brain.
Elena Plante and her colleagues have found significant differences in brain structure associated with this disorder. Alan Evans, Terry Peters and Noor Kabani, part of our team at the Montreal Neurological Institute, are studying MRI data from our Canadian subjects and, though the analysis is still ongoing, preliminary data shows that our impaired subjects also have significant neuroanatomical anomalies (in press, Journal of Neurolinguistics). Tanya Gallagher and Ken Watkin, also of McGill, have just completed an ultra-sound study comparing brain development in a fetus with a positive family history of SLI to three fetuses with no family history of language impairment. These studies show that there are significant differences between the two groups in the pattern of brain development in the last trimester of pregnancy and that these differences involve those centers of the brain that have been implicated for language (in press, Journal of Neurolinguistics). So at least for now it looks like there are reasons to believe that at least some cases of developmental language impairment are inherited and involve neurological anomalies. It seems likely that a possible source of the wide variety of other problems that we see in some particular individuals with language impairment is that the neurological damage caused by this mutation may affect different parts of the brain in different individuals. But this is just a hunch. Lots more about the nature of the genetic and neurological patterns that are associated with this language impairment still has to be understood. We are working on it. So are lots of others. The question that concerns me as a linguist is what precisely and exactly is wrong with their language and, more importantly, is something wrong with their ability to acquire language itself or are their language problems just an epiphenomenon of some other, non-linguistic problem. 
The only way to answer this question is to look at what they do and how they do it and why they do it and then come up with an explanatory theory that accounts for these facts. That is what we have been doing and the data converge to tell us that the language faculty is directly affected in these subjects. From bates at CRL.UCSD.EDU Tue Jan 28 02:24:15 1997 From: bates at CRL.UCSD.EDU (Liz Bates) Date: Mon, 27 Jan 1997 18:24:15 -0800 Subject: Message from Myrna Gopnik Message-ID: At some point last week, I composed and tried to send a response to Fritz Newmeyer's critique of Vargha-Khadem, but I never saw it on the net, and several friends have commented that they didn't see it either. On top of that, I have had private inquiries from a number of Funknetters asking if I intended to respond, so I am feeling compelled to give it one more try. Please note that this is a response to Fritz's message. I want to spend a little more time studying the longer account that he forwarded today from Myrna Gopnik before deciding whether I would have anything useful to contribute there, beyond what you will find below. Let me say first that I have been hesitant about this response, because I found Fritz's message to be nothing short of astonishing, a cry of outrage that borders on an ad hominem attack. I don't want to add to the level of vitriol from which we had begun to retreat. Let me see if I can add some substance to this exchange, without insulting anyone. The general thrust of Fritz's objections seems to revolve around the way that Vargha-Khadem et al. (henceforth V-K) conducted their research, which (he believes) led to the difference between their findings and those of Gopnik et al. Among other things, he insists that the differences between the affected and unaffected members of the K family that V-K observed were artifacts of the decision to test these people in a laboratory setting (with "bright lights and cameras").
He is also particularly incensed about the use of IQ tests, implying that the very act of testing destroys the object of inquiry (a kind of linguistic Heisenberg principle). Starting with the IQ issue, I agree that the theoretical importance of IQ testing is a legitimate subject of debate. My personal view is that IQ tests put seventeen disparate skills together in a Waring Blender to yield a single number, so that most of the information that might be relevant is lost. However, because they have been in use for so many decades, IQ tests have become benchmark variables in neuropsychological research, i.e. background information that makes it easier to compare results across different laboratories and different populations. Indeed, it is difficult to get studies of impaired populations published in most of the major journals if you cannot provide this kind of background information. Whether or not we find them useful on scientific grounds, there is nothing particularly frightening about the content of IQ tests for anyone who has gone through a Western school system, or at least, no more frightening than any other situation in which a scientist or clinician is asking questions (which would presumably include any of Gopnik's tests as well). IQ tests were developed originally for use with army recruits, i.e. with working class individuals in a literate society. In this regard, let us remember that the subjects of this debate are members of a working class family in London. They are not members of an illiterate Cargo Cult, never before exposed to paper or pencil (much less lights and cameras). They have all spent years in the public school system, and have all been examined by doctors and nurses in the public health system (the British system still makes this possible, even for working class families....). Vargha-Khadem's laboratories are part of a research center allied with Great Ormond Street Hospital for Children.
Having visited the laboratory myself, I can assure you that it is no more frightening than the pediatrician's offices and public schools that these people have seen for years. Indeed, it is likely that the affected members of the family have, because of their affliction, spent considerably more time seeing doctors and other professionals than their unaffected relatives. If anything, this fact should have reduced the difference between the affected and unaffected family members in the V-K study. And as for the video cameras: I am told that Gopnik first became aware of the K family when she saw them in a short documentary on the BBC while visiting friends in London. Hence we may presume that the K family was accustomed to cameras before Gopnik ever had a chance to test them. Assuming for the moment that the very fact of IQ testing has not served as a poison pill, destroying all the information that accompanies it, these findings were only one part of a much broader battery of tests reported by V-K, including 13 different tests of language. Most of these are indeed standardized instruments. Contrary to rumor, I do *NOT* believe that standardized tests are the only route to truth, but they do provide a broad overview of individual abilities, in a context that permits comparison with a lot of baseline data. They therefore serve as a good beginning for a more detailed inquiry, designed around specific linguistic issues, such as the assessment of the ability to comprehend and produce regular and irregular verbs. In this regard, let us ask again why it is the case that Gopnik (1990) reports a dissociation between regulars and irregulars while V-K et al. find absolutely no evidence for such a dissociation. Instead, the affected members performed reliably worse than the unaffected members on both regulars and irregulars, with no trend, not even a hint of a double dissociation. Why indeed!
As it turns out, Gopnik's original report in Nature was based on a very small number of items. The V-K report in the Proceedings of the National Academy used not a standardized test, but a large and representative battery of items designed specifically by Karolyn Patterson to assess the kind of linguistic and psycholinguistic issues that are central to the controversy about regulars and irregulars. It is likely, I think, that this difference in materials is responsible for the difference between the Gopnik and Vargha-Khadem findings. In fact, Ullman and Gopnik have a report in the Montreal Working Papers with results very similar to V-K's on regulars vs. irregulars when a broader battery of items is used (i.e. the dissociation seems to have gone away....). (Let me note here that Myrna refers in various communications to the Montreal Working Papers as a "publication"; however, it is not peer-reviewed in the usual sense, not available in libraries as an archival work, and wouldn't pass as a publication by the standards that NIH applies to people who want grants from them; too bad, because we have a series called the Center for Research in Language Technical Reports that would greatly enhance our publication list if we were allowed to call it a "publication"!). My point is that the V-K findings are based on many different instruments, tapping many different aspects of speech and language, in addition to the hated IQ tests. Those findings clearly indicate that this is NOT a disorder specific to any single aspect of language, and may not be specific to language at all, although language is certainly involved. In this regard, Fritz also complains about V-K's finding that the afflicted members of the K family have an articulation disorder, suggesting that this too may somehow be an artifact of the testing situation. Here I have to say that I have seen videotapes of the family (recorded in an informal context, by the way), and I have heard their speech with my own ears.
There is absolutely no question that these people have a severe speech production disorder, the kind that you would expect if they were (as "standardized" tests show) suffering from buccal-facial apraxia, i.e. difficulty with complex movements of the tongue and mouth. Several years ago Gopnik gave a presentation at the Wisconsin Child Language Disorders Conference where she played audiotapes of the family (presumably these are audiotapes that she recorded herself, in the context that Fritz recommends for linguistically meaningful research). I was not present, but heard from more than a dozen speech-language pathologists who were present in that audience that the evidence for a serious speech disorder was undeniable. And yet this was not mentioned in the 1990 Nature letter, in the 1991 Cognition paper, or in various summaries of the Gopnik results by Pinker, Gazzaniga and others. No one is claiming here that the articulation disorder *CAUSED* the grammatical deficits observed in the K family. After all, these people do poorly on a host of receptive tests, grammatical and lexical, and on a number of non-verbal tasks as well, so we cannot attribute all their symptoms to this single cause. Nor is anyone claiming that the IQ difference is the *CAUSE* of the grammatical problem. The point is, simply, that this is not a specific disorder. It is not specific to regular morphemes, it is not specific to grammatical morphology in general, it is not specific to grammar, it may not even be specific to language. Fritz complains still further that the V-K paper is very short, only 3.5 pages long. The Proceedings of the National Academy of Sciences, like Science and Nature, requires brief reports, without a great deal of methodological detail (recall that Gopnik's original letter to Nature, which started this controversy, was less than one page long).
But the V-K results are clearly indicated in a summary table, with detailed statistics on each and every language and non-language measure. To be sure, it will be useful to learn more about their findings in subsequent papers (and such papers exist). However, brevity can be a virtue. The original report by Watson and Crick was not much longer than this, following normal practice in the journal Nature. The real questions are: Is it true, and if so, what does it mean? I am persuaded that the findings are true. This is a distinguished and respected research team, at a major research center, representing the fields of neuropsychology, neurology, and developmental psycholinguistics (e.g. Paul Fletcher, an eminent researcher in this field, certainly not naive about language and language development). They have used standardized tests that are recognized in this field, together with a new (non-standardized) measure specifically tailored (by anyone's standards) to reflect the relevant facts of regular vs. irregular morphology in English. What do the findings mean? Well, I agree that many many questions remain to be answered, but at the very least these findings mean that this is not a specific deficit. The search for the grammar gene must continue.... There are other approaches to the same problem, relevant to the issues of genetic substrates and language specificity. A number of different laboratories are investigating a syndrome called Specific Language Impairment, defined to include children who are 1.5 to 2 standard deviations or more below the mean for their age on expressive (and sometimes receptive) language abilities despite non-verbal IQs in the normal range (i.e. no mental retardation), in the absence of frank neurological abnormality (e.g. cerebral palsy, hemiplegia), severe social-emotional disorders (e.g. no autism), uncorrectable visual or auditory deficits (i.e. they are not blind or deaf).
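The exclusionary definition just given can be restated as a simple predicate. This is a sketch only: the 1.5-SD language threshold and the exclusionary conditions follow the text, but the parameter names and the choice of 85 as the cutoff for a "normal range" nonverbal IQ are my assumptions.

```python
def meets_sli_definition(
    expressive_language_z: float,   # z-score relative to age norms
    nonverbal_iq: int,
    frank_neuro_abnormality: bool,  # e.g. cerebral palsy, hemiplegia
    severe_social_emotional_disorder: bool,  # e.g. autism
    uncorrectable_sensory_deficit: bool,     # blindness, deafness
) -> bool:
    """Sketch of the SLI inclusion criteria described in the text.

    Thresholds: expressive language 1.5 SD or more below the age
    mean; nonverbal IQ in the normal range (>= 85 is my assumption);
    none of the exclusionary conditions present.
    """
    return (
        expressive_language_z <= -1.5
        and nonverbal_iq >= 85
        and not frank_neuro_abnormality
        and not severe_social_emotional_disorder
        and not uncorrectable_sensory_deficit
    )

# A child 2 SD below age norms with a normal nonverbal IQ qualifies;
# the same language profile with a low nonverbal IQ does not.
assert meets_sli_definition(-2.0, 100, False, False, False)
assert not meets_sli_definition(-2.0, 70, False, False, False)
```

The point of the definition, as the following paragraphs note, is that even after all these exclusions the children who qualify still show subtle non-linguistic processing deficits.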
It has been known for more than two decades that this disorder "runs in families", and Dorothy Bishop's recent work comparing monozygotic and dizygotic twins with SLI suggests that it has a strong heritable component. Does this syndrome provide evidence for the grammar gene? Despite all these exclusionary criteria, many different laboratories have demonstrated that children with SLI suffer from subtle deficits in processing that are not specific to language (e.g. aspects of attention and perception), although a few laboratories still insist that they have found a relatively pure form of the disorder (e.g. recent claims by Van der Lely). With regard to the "intra-linguistic" nature of SLI, dozens of linguistic and psycholinguistic studies of these children lead to the conclusion that the deficit is best characterized as one of delay (i.e. they look like younger normal children) rather than deviance (i.e. no evidence for qualitatively different error types or sequences of development from those observed in normal children). A large number of studies also show that the deficit affects many different aspects of language. However, it has been known for a long time (since Judith Johnston's work 15 years ago with Schery and Kamhi) that grammatical morphology is the most vulnerable domain in SLI. Does the fact that morphology is MORE delayed than other aspects of language constitute evidence for a specific and genetically based grammatical disorder? Some investigators believe that is the case (e.g. Van der Lely; Gopnik; Rice; Clahsen). Others have argued instead that the grammatical deficits are secondary to the processing problems that these children display (e.g. Leonard; Bishop; Tallal). Data from our research center here in San Diego provide support for the latter view. First, my colleagues here (e.g. 
Wulfeck, Thal, Townsend, Trauner) are among those who have repeatedly found evidence for subtle non-verbal processing deficits and/or neuromotor defects in children who meet the above definition of SLI. Second, studies from my own laboratory have shown that grammatical morphology is the "weak link in the processing chain" even for normal, neurologically intact college students. When these students are tested under adverse processing conditions (e.g. perceptually degraded stimuli, or compressed speech, or language processing with a cognitive overload), grammatical morphemes suffer disproportionately compared to every other aspect of the signal (e.g. content words, word order relationships). Taken together, these lines of evidence provide a reasonable case for the claim that the grammatical morphology deficits in SLI are secondary to (in this case, "caused by") a processing deficit that is not unique to language, although it has serious consequences for the ability to learn and process language on a normal timetable. This would help to explain why grammatical morphology is also an area of selective vulnerability in Down Syndrome, in the oral language of the congenitally deaf, in many different forms of aphasia (not just Broca's aphasia), and in a range of other populations as well. (If anyone is interested, I have a review paper on this topic.) One might suggest that SOME forms of SLI have this non-specific base, but others really are due to an innate, language-specific malfunction -- perhaps the genetic mutation discussed in last week's Newsweek. One cannot rule this out without further evidence. It is interesting to note, however, a paper a few years ago by Tallal, Curtiss and colleagues separating their large sample of children with SLI into those with and without a family history of language disorders. 
There were no differences between the two subgroups in the nature of the language symptoms observed across a host of measures; however, the children with a family history were actually MORE likely (not less likely) to suffer from deficits in non-linguistic processing. Finally, I would refer you to a recent study by Wulfeck, Weissmer and Marchman, in a large SLI sample combining data from San Diego and Wisconsin, assessing the ability to generate regular and irregular past tense morphemes on a large and representative sample of items. Results clearly indicate that children with SLI are delayed in both aspects of grammatical morphology (i.e. there is no dissociation between regulars and irregulars), producing errors that are quite similar to those observed in younger normal children at various points in development (i.e. the usual story in research on SLI). I apologize for the length of this exercise, but I think it is important to get as many of the facts out as possible. Using terms like "scandalous", Newmeyer has implied that there is something deeply wrong, perhaps fraudulent in the Vargha-Khadem et al. results. He has no basis whatsoever for a claim of that kind. Their research project is state-of-the-art in the field of neuropsychology. Newmeyer may want to respond by arguing that all of neuropsychology is irrelevant to this debate, that linguists are the only ones who know how to assess language properly, that standardized tests are useless (as opposed to merely insufficient), and that research conducted in a laboratory setting is invalid precisely because it was conducted in a laboratory setting. If he is right, we are in dire straits, because 98% of our knowledge about language disorders and the brain bases of language in normal and abnormal populations falls into this category. The recent exchange on Funknet leads me to the gloomy conclusion that some of our colleagues in the field of linguistics espouse exactly this belief. 
I would prefer to re-issue the argument that got me into this exchange: we need all the methods, all the constraints, all the information that we can possibly find to build a reasonable theory of the human language faculty. -liz bates From colinh at OWLNET.RICE.EDU Tue Jan 28 21:24:04 1997 From: colinh at OWLNET.RICE.EDU (Colin Harrison) Date: Tue, 28 Jan 1997 14:24:04 -0700 Subject: Brain Imaging & Linguistics Message-ID: I wonder how many of you have read Jaeger et al's piece in Language (72.451-97) about areas of the brain that are activated during past tense formation in English. I understand that it's been quite a hit in some parts of the linguistic world, and it's certainly to be praised for its methodological rigor and the honesty of the authors. I am convinced that this sort of experimentation ought to represent a significant direction for future research (some of which I hope to be doing soon myself). The thing is, the experimenters interpret their results as showing that regular inflections are processed differently than irregular inflections, but I don't see that their theoretical conclusions follow from their data due to at least two major confounds. I wanted to put these ideas out to the rest of you Funknetters and see what y'all think. First up, semantic discrepancies between Jaeger et al.'s word lists represent a significant confound. The two lists of interest are the cue sets from which subjects had to form regular and irregular past tenses (sets 3 and 4). Jaeger et al. note that overall, the irregular past forms require more cortical activation than the regulars, and conclude that this is because they are not associated with an on-line rule system, and hence require more attention and greater resource devotion (p.487). But if you look at the meanings involved, a rather different explanation seems at least plausible. Each list comprises 46 tokens. 
Of these, the regular past list has just 19 (41%) that are unambiguously human physical activities, involving limb movement. The irregular list shows a much higher proportion of human physical activities, 33 of the 46 tokens (73%). This looks like a significant difference to me! Might not the greater cortical activation noted in the irregular condition be a result of more widespread somatic activation as an intrinsic part of the meaning of the verbs, rather than anything to do with their morphosyntactic regularity? There is ample evidence emerging from imaging studies (follow up, for instance, the work of Hanna and Antonio Damasio) that the comprehension of words that are connected with any kind of somatic experience involves activation in some of the same areas as the instantiation of the experience itself. So, the meaning of a verb such as "walk" will involve indirect activation of the somato-sensory circuits necessary to walk, plus all those more peripherally involved in the experience of the activity, etc. Jaeger et al's results look as if they represent disconnected activation patterns, but their results were not so neat and clean at first: they needed to "wash" a fair amount of "random" noise from their charts until they arrived at something resembling the neat, discrete pictures they presented. They are completely open about the normalising procedures they follow, and it's all there in black and white for anyone who wants to examine it more closely than I have. My concern is that it's not unlikely that they could have "washed" out the evidence of similar somatic activation from the regular list, but the somatic activation in the irregular list would have been too large to remove in this way, leaving behind different activation patterns based not on algorithmic versus non-algorithmic processing (Jaeger et al's conclusion), but rather on the semantic category of the verbs in each list. 
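For what it's worth, the eyeballed contrast above (19 of 46 regular vs. 33 of 46 irregular cue verbs denoting human physical activities) can be checked with an ordinary chi-square test of independence. A minimal stdlib-only sketch (the choice of test and the code are my illustration, not anything from the Jaeger et al. paper):

```python
# Pearson chi-square test of independence for a 2x2 table, stdlib only.
# Counts from the post: of 46 cue verbs per list, 19 regular vs. 33
# irregular denote unambiguously human physical activities.

def chi_square_2x2(a, b, c, d):
    """Return the (uncorrected) Pearson chi-square statistic for
    the 2x2 table [[a, b], [c, d]]."""
    n = a + b + c + d
    row1, row2 = a + b, c + d
    col1, col2 = a + c, b + d
    chi2 = 0.0
    for obs, row, col in [(a, row1, col1), (b, row1, col2),
                          (c, row2, col1), (d, row2, col2)]:
        expected = row * col / n
        chi2 += (obs - expected) ** 2 / expected
    return chi2

# Regular list: 19 "physical", 27 other; irregular list: 33 physical, 13 other.
stat = chi_square_2x2(19, 27, 33, 13)
print(round(stat, 2))  # about 8.67, above 6.63, the p = .01 cutoff at df = 1
```

So by this rough check the two word lists really do differ in semantic makeup well beyond chance, which is all the confound argument needs.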
Secondly, even if we dismiss the first objection, the experimental design itself assumes the conclusion. That is, subjects in the test conditions were performing an algorithmic task at the behest of the examiner: "given x (a verb stem form), produce y (the past tense of the same verb)." It is not clear to me that information about brain activation during a predictable (and probably pretty boring) two-minute algorithmic task has any relevance to brain activation during production of similar forms when one is engaged in meaningful speech. In order to equate these two types of processing, you have to begin with the assumption that speakers inflect verbs according to an algorithmic procedure during on-line discourse production - exactly the kind of process whose centrality to natural language production is disputed! What do you think? Colin Harrison Dept. of Linguistics Rice University Houston TX 77030 USA From lmenn at CLIPR.COLORADO.EDU Wed Jan 29 16:09:18 1997 From: lmenn at CLIPR.COLORADO.EDU (Lise Menn, Linguistics, CU Boulder) Date: Wed, 29 Jan 1997 09:09:18 -0700 Subject: Brain Imaging & Linguistics In-Reply-To: <199701282026.OAA05117@owlnet.rice.edu> Message-ID: It's a good point about the semantics. There are also the task variables I noted on the Info-childes network; if you want to see what I said there, let me know. Lise Menn On Tue, 28 Jan 1997, Colin Harrison wrote: > I wonder how many of you have read Jaeger et al's piece in Language > (72.451-97) about areas of the brain that are activated during past tense > formation in English. I understand that it's been quite a hit in some > parts of the linguistic world, and it's certainly to be praised for its > methodological rigor and the honesty of the authors. I am convinced that > this sort of experimentation ought to represent a significant direction for > future research (some of which I hope to be doing soon myself). 
> > What do you think? > > > Colin Harrison > Dept. of Linguistics > Rice University > Houston TX 77030 > USA > From colinh at OWLNET.RICE.EDU Wed Jan 29 18:37:39 1997 From: colinh at OWLNET.RICE.EDU (Colin Harrison) Date: Wed, 29 Jan 1997 11:37:39 -0700 Subject: Brain Imaging & Linguistics - addendum... Message-ID: Hi Funknetters! It's been pointed out to me that my use of the term "somatic/somato-" in my recent posting may be a little odd. For "somat-" read "motor". That should align me more cogently with accepted terminology! Thanks! Colin Harrison Dept. of Linguistics Rice University Houston TX USA From lamb at OWLNET.RICE.EDU Fri Jan 31 21:30:46 1997 From: lamb at OWLNET.RICE.EDU (Sydney M Lamb) Date: Fri, 31 Jan 1997 15:30:46 -0600 Subject: Neurologists and connectionism Message-ID: Tom --- In belated reaction to your entertaining "Get real, George" message of the 6th, I want to touch on an incidental point at least before we leave January forever. You wrote: > > in search of the latest fad. When I ask the real neurologists > I know what they think of connectionism, I get an incomprehension response. > Never heard of it. Con what? > Your statement has two implications: (1) that neurologists haven't heard of connectionism, (2) that neurologists have an expert knowledge of how the brain works that would enable them to pass judgement on the merits of connectionism. From my viewing platform it appears clear that both of these implications are false. 1. At least some neurologists, in fact some very highly regarded ones, have indeed heard of connectionism (1987 Rumelhart and McClelland variety). Two examples: It is described in Kandel, Schwartz, and Jessell, Principles of Neural Science (1991:836-7), which neurologists in medical schools recommend to medical students as "the bible" on neural structures and their operation; also in their "Essentials of Neural Science and Behavior" (1995). Likewise, Dr. 
Harold Goodglass, a very highly regarded neurologist and aphasiologist, mentions this model in his "Understanding Aphasia" (1993:37-8). 2. The above is not surprising. What is surprising is that this model, despite its being egregiously out of accord with known facts of neural structures and their operation, is mentioned favorably, as something to be taken seriously, by the above writers and others (esp. Kandel et al --- Goodglass gives it fainter praise; possibly, one hopes, he is just being diplomatic). Such favorable description by Kandel et al. and others is what demonstrates the falsity of your second implication. If the neurologists can so easily give credence to such an unrealistic model, they are not experts in this area after all. Yes, they are experts in other areas --- they know a lot about anatomy, synapses, neurotransmitters, ALS, how to diagnose neurological problems, etc. etc., but they have demonstrated that they don't have much of a clue about how information is learned, remembered, and used by real neural networks. Warmest regards, Syd From annes at HTDC.ORG Thu Jan 2 21:13:54 1997 From: annes at HTDC.ORG (Anne Sing) Date: Thu, 2 Jan 1997 11:13:54 -1000 Subject: PARSER COMPARISON Message-ID: Before going into a detailed discussion of the points made in Daniel Sleator's last message, I would like to point out that the one point he has not made is that his parser cannot conform to the majority of all three areas of our challenge except through assertion. Taking a look at his web site you will find a parser that will provide a complex tree that can only be read by those who have studied his Link theory of grammar. There is a minor bit of labelling of parts of speech, but there is no indication that he can accurately label subjects, verbs, objects and complements of any sort. 
There is also no indication that he can return multiple strings for ambiguous sentences ("John saw a man in the park" or "the Chinese instructor"). Most importantly, there is absolutely no indication that he can do anything with the trees that he provides; e.g. can he change actives to passives, statements to questions, or answer questions? The answer is probably no, but we simply do not know. A parser that merely produces a set of obscure trees is of value to no one. This is precisely the reason that Derek and I have issued this challenge. If a parser is parsing, it should be able to meet the minimum requirements of our challenge in a way that ANYONE can see, not just those who are the insiders of a particular theory or those with a strong background in syntax. Parsing that merely results in a labelled bracketing or a tree structure is nothing more than an intellectual exercise. DO SOMETHING WITH IT THAT ALL CAN JUDGE OR KEEP IT OUT OF THE RUNNING UNTIL YOU CAN is the motto we bring to this challenge. Particularly, put something on the web that indicates that you can do something of value such as ask and answer questions or correct grammar. These functions absolutely require that you can accurately handle all the items listed in our challenge, or you can't do them at all. Certainly, in the private sector, individuals who have to make decisions concerning parsers are not going to be linguists. Also, you cannot ask them to get a degree in linguistics before they are qualified to make such decisions. If that is a requirement, companies are simply not going to participate. HERE IS A MORE DETAILED RESPONSE TO DANIEL SLEATOR'S MESSAGE At 11:34 AM 1/1/97 -1000, Daniel Sleator wrote: He begins with a review of our challenge. >Philip Bralich suggests that those of us working in the area of parsing >should make our systems available via the web. Davy Temperley and I are >in full agreement with this. 
That's why a demonstration of our link >grammar system has been up on the web for over a year. Go to >"www.cs.cmu.edu/~sleator" and click on "link grammar" to get to the >parser page. > >Philip has also proposed a set of criteria by which parsing systems can >be judged: > >> In addition to using a dictionary that is at least 25,000 words in >> size and working in real time and handling sentences up to 12 or 14 >> words in length (the size required for most commercial applications), >> we suggest that parsers should also meet the following standards >> before engaging this challenge: >> >> At a minimum, from the point of view of the STRUCTURAL ANALYSIS OF >> STRINGS, the parser should: 1) identify parts of speech, 2) identify >> parts of sentence, 3) identify internal clauses, 4) identify sentence >> type (without using punctuation), and 5) identify tense and voice in >> main and internal clauses. >> >> At a minimum, from the point of view of EVALUATION OF STRINGS, the >> parser should: 1) recognize acceptable strings, 2) reject unacceptable >> strings, 3) give the number of correct parses identified, 4) identify >> what sort of items succeeded (e.g. sentences, noun phrases, adjective >> phrases, etc), 5) give the number of unacceptable parses that were >> tried, and 6) give the exact time of the parse in seconds. >> >> At a minimum, from the point of view of MANIPULATION OF STRINGS, the >> parser should: 1) change questions to statements and statements to >> questions, 2) change actives to passives in statements and questions >> and change passives to actives in statements and questions, and 3) >> change tense in statements and questions. > >Whether or not anybody else agrees that these are the right desiderata, >it's useful that he's put them forward. We can use them to evaluate >our own work, and Bralich's work as well. We have done this, and >it seems to us that our system is superior to Bralich's. 
This is only an assertion until the functions are written that show this in a manner an outsider can judge. If you can do it at all, then you know that each function can be written in one day. They should take the part of speech and part of sentence info and all the other info and arrange it in a way that is easy to read for all. For example: The man who Mary likes is reading a book "The" is an article "man" is a noun "The man who Mary likes" is the subject of "is reading" "who" is the direct object of the verb "likes" This is a statement It is simple present active And so on. Expecting the user to learn your theory before he can see these things is just asking too much. >The version of link grammar that we have put up on the web already does >very well in a number of these criteria. But not many. And not in a way that others can see or judge. > Regarding STRUCTURAL ANALYSIS, >the parser outputs a representation of a sentence which contains much of >the information discussed by Bralich. Parts of speech are shown >explicitly; things like constituent structure are virtually explicit >(for example, a subject phrase is anything that is on the left end of an >"S" link). Tense and aspect are not explicit in the output, but they >could quite easily be recovered. Then let's see it, which is really all we are asking in this challenge. >Regarding EVALUATION OF STRINGS, our >system is far superior to the Ergo parser. Our system does an excellent >job of distinguishing acceptable from unacceptable sentences. It is very difficult to see this. Certainly, many of the bad sentences we typed in looked like they parsed. More suspiciously, I typed in a number of ambiguous sentences and it returned only one parse each. >Furthermore, it is often able to obtain useful structural information >from non-grammatical sentences, by making use of "null-links". I have no idea what this means, and I suspect many readers do not either. 
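The plain-English report sketched above is straightforward to generate once a parser exposes its analysis. Here is a toy sketch in Python; the `parse` record, its field names, and the `report` helper are hypothetical illustrations of the idea, not Ergo's (or anyone's) actual output format:

```python
# Toy illustration of the "readable for all" parse report the challenge
# asks for. The input structure is invented; a real parser would supply it.

parse = {
    "sentence": "The man who Mary likes is reading a book",
    "words": [("The", "article"), ("man", "noun"), ("who", "pronoun"),
              ("Mary", "noun"), ("likes", "verb"), ("is", "auxiliary"),
              ("reading", "verb"), ("a", "article"), ("book", "noun")],
    "subject": ("The man who Mary likes", "is reading"),
    "facts": ["'who' is the direct object of the verb 'likes'",
              "This is a statement",
              "It is simple present active"],
}

def report(p):
    """Render a parse record as one plain-English statement per line."""
    lines = [f'"{w}" is an {pos}' if pos[0] in "aeiou"
             else f'"{w}" is a {pos}' for w, pos in p["words"]]
    subj, verb = p["subject"]
    lines.append(f'"{subj}" is the subject of "{verb}"')
    lines.extend(p["facts"])
    return "\n".join(lines)

print(report(parse))
```

The point of the sketch is only that the translation from internal parse data to lay-readable output is a small formatting job, which is exactly the challenge's complaint: if the analysis exists, showing it in this form costs almost nothing.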
>Below we >discuss some basic problems with the Ergo parser regarding its >evaluation and analysis of sentences. We have not implemented a >MANIPULATION OF STRINGS component. We have worked out a sentence >constructing mechanism that we believe would be able to handle this as >well. Of course we'll have to do the work to make this convincing. We >may be inspired to add this feature as a result of these discussions. This again is not at all clear. Make a function where we can see this. Until you show your ability to manipulate strings, this is just an assertion. Also, your ability to recognize a question/statement/command, as well as tense and voice, is not shown. Further, whether a sentence is simple, compound, or complex is not mentioned. >Bralich's aim is to build a parser that will be useful for interactive >games and other applications. It is therefore restricted to short >sentences, and has a fairly small vocabulary. Our vocabulary of over 50,000 words is double his stated vocabulary of 25,000. Further, the smaller the dictionary, the fewer problems there will be with ambiguity. We take the risk with the higher vocabulary and have no serious problems. We also handle large sentences as well as anyone else; however, on sentences with more than 14 or 15 words we cannot yet meet the standards of our challenge, which is much more comprehensive than what is required by Mr. Sleator. We will be able to do this in two or three months; for now we limit ourselves to sentence lengths that are amenable to easier applications. We can now do as much as Mr. Sleator with large sentences, but that does not satisfy the challenge. When we can do all of our challenge, those results will also be on our web site. (This could be as early as two months from now.) As it stands, if the link parser were asked to do interactive games, it is unclear whether it could do much at all, as we do not know that it can make a question from a statement, let alone answer one. 
Further, until he clearly indicates his parser's ability to find subjects and objects and so on, there is no way we can expect his parser to properly analyze questions or return appropriate responses. However, even with these >constraints, there are a number of very basic constructions that his >parser cannot handle. Here are some examples. All of the sentences below >are simply rejected by his parser. > I went out The parser does not allow two-word verbs > He came in like "set up", "go out", "put in", which are > He sent it off extremely common. > I set it up He did find some verbs with some problems, but in general we handle these. > He did it quickly The parser seems to have extremely limited > use of adverbs. (It does accept some > constructions of this type, like "He ran > quickly", so perhaps this is a bug.) We have not yet allowed such adverbs to attach sentence-finally. It is a small task, but we have not yet done it. I will see if the programmer has time for this tomorrow. One extremely important plus for our parser is that it is easy to troubleshoot. All of the problems mentioned here will be fixed before the month is out, many much sooner. > John and Fred are here The parser does not know that conjoined > singular noun phrases take plural verbs. This should also be fixed by tomorrow. I am sorry that some of what is happening here may seem like our beta testing. That is, comments such as these help us find and correct problems. Again, we are not promising that we can do everything, merely that ours is the best and that it can be judged in an open forum. > The dog jumped and the The parser does not seem to > cat ran accept ANY sentences in which clauses > are joined with conjunctions. We only recently began adding coordinate structures. Complex phrases will be done soon, but the coordination of verb phrases (John read a book, wrote a paper, and took a test) is about six weeks away. 
> He said he was coming The parser accepts "He said THAT he was > coming"; but it does not allow deletion of > "THAT", which is extremely common with some > verbs We will add this. I was unaware that our subcategorization frame for "say" did not allow that-deletion. > I made him angry There are a number of kinds of verb > I saw him leave complements which the parser does > I suggested he go not handle: direct object + adjective > ("I made him angry"), direct object + > infinitive ("I saw him leave"), > subjunctive ("I suggested [that] he go"). Again, we will add these, and easily 90% of the errors that we encounter during this public announcement, in a very short time. > His attempt to do it The parser cannot handle nouns that take > was a failure infinitives. > I went to the store The parser cannot handle the extremely > to get some milk common use of infinitive phrases meaning > "in order to". I will add it. > >There are also cases where the parser assigns the wrong interpretation >to sentences. One of the biggest problems here is in the treatment of >verbs. Verbs in English take many different kinds of complements: direct >objects, infinitives, clauses, indirect questions, adjectives, object + >clause, and so on. The Ergo Parser seems to treat all of these >complements as direct objects, and makes no distinctions between which >verbs take which kind. This means, in the first place, that it will >accept all kinds of strange sentences like "I chased that he came", >blithely labeling the embedded clause as an object of "chased". More >seriously, this often causes it to assign the wrong interpretation to >sentences. For example, This is not true. Try it. > I left when he came >The verb "left" can be either transitive or intransitive. Here, it is >clearly being used intransitively, with "when he came" acting as a >subordinate clause. But the Ergo Parser treats "when he came" as a >direct object. Yes, again something we will add. 
As you will note, though we parse complex and compound sentences, we do not label parts of speech in those sentences. We will soon. Of course, you should note that we have no idea whether or not the link parser can label parts of the sentence at all. >The program does not seem to analyze relative clauses at all. In >the sentence > The dog I saw was black How can you say we do not analyze relative clauses when we get them exactly correct? You can type "the dog I saw", "the dog that I saw", or "the dog which I saw" and receive a correct and complete analysis. >the parser states that "I" is the subject of "saw", and that "The dog I >saw" is the subject of "was", which is exactly correct. >but does not state that "dog" is the >object of "saw". The program also accepts We could, but we decided not to because we thought it would be confusing for the user. In a sentence like "the dog that I saw" or "the dog which I saw" we would label "THAT" or "WHICH" as the object of "saw", which is more accurate. We could also label "dog" as the object, which has some sense to it as well, but we decided against it. You might try the more difficult "the man who Mary likes is reading a book" or an example of your own. I know in the past we have had trouble with the subcategorization frame for "saw" and I am not sure if we have fixed it or not. "The dog I died was black" >(analyzing it in the same way), further indicating that it simply has no >understanding of relative clauses. >In the sentence "How big is it", the program analyzes "how big" as the >subject of the sentence. Yes, "how" questions are still on the programmer's desk. They will be there in about a week. >We were able to identify all these problems with the Ergo parser without >knowing anything about how it works -- the formalism used is >proprietary. A plethora of new problems would probably emerge if we >knew how it worked. And all of these problems will probably be >exacerbated with longer sentences. 
On the contrary, because the link parser just gives an obscure tree rather than a user-friendly output, it is impossible to know if his parser does much of anything at all.

>All of these problems with the Ergo Parser - constructions that it does >not accept, and things that it mis-analyzes - are things that our system >handles well. Indeed, the _original_ 1991 version of our parser could >handle all these things. In our version 2.0, released in 1995, we >incorporate many constructions which are less common. We should point >out that even the latest version of our parser is far from perfect. It >finds complete, correct parses for about 80% of Wall Street Journal >sentences.

This is merely assertion. You simply must prepare the functions that will show this to the user and put that on the web. Especially, you need to show that your parser can handle the manipulation of strings rather than just a bracketing. A bracketing is meaningless unless you can use it to do something with the language you are analyzing. No one really cares about statistical analyses of words or sentences; what people need and want are applications that will allow them to use real language in real time with computer applications. There is nothing at your site that indicates your ability to do this.

>The reader can try both systems for himself or herself, and come to >his/her own conclusions. (The Ergo parser is at www.ergo-ling.com, ours >is at www.cs.cmu.edu/~sleator.)

Yes, please.

> Daniel Sleator
> Davy Temperley

In sum, the purpose of our challenge is to allow the academic community and private sector an opportunity to see and judge for themselves what is possible in the area of the analysis of grammar. We proposed a set of minimum standards that are necessary to show that a parser is what we call "commercially viable." Until the link parser demonstrates its ability to meet these challenges in a way that anyone can see, we simply do not know that it is "commercially viable."
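The "manipulation of strings" demanded in this exchange can be sketched in a few lines. Assuming a parser has already labelled the subject, verb, and object of a simple transitive clause, a toy post-processor (all names hypothetical; a real system needs full tense, agreement, and morphology machinery) can emit the passive and the yes/no question:

```python
# Toy string manipulation over a hand-labelled parse of a simple
# transitive clause. Illustrative only; morphology is stubbed in tables.
PARTICIPLE = {"saw": "seen", "chased": "chased"}
BASE = {"saw": "see", "chased": "chase"}

def to_passive(subj, verb, obj):
    """'John saw a man' -> 'a man was seen by John'."""
    return f"{obj} was {PARTICIPLE.get(verb, verb)} by {subj}"

def to_question(subj, verb, obj):
    """'John saw a man' -> 'did John see a man'."""
    return f"did {subj} {BASE.get(verb, verb)} {obj}"

print(to_passive("John", "saw", "a man"))   # a man was seen by John
print(to_question("John", "saw", "a man"))  # did John see a man
```

The point of the sketch is that the transformations themselves are trivial once the roles are labelled; the entire difficulty, and hence the entire test of a parser, lies in getting the labels right.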
Further, we did not claim that our parser was perfect. Just the best. And that we are willing to put it to the test. An important aspect of our parser is that it is easy to troubleshoot. Early next week we will go through all the sentences that were input during this challenge and then address the problems. This will take a few days to a couple of weeks depending on the problem. Please try these parsers and then try them again in a couple of weeks. I am sure you will agree that we have redefined what parsing is and can be. I also suspect that the link parser will not be able to meet the challenge any more in two weeks than it can now; that is, with a user-friendly web site rather than assertion and obfuscation. Phil Bralich Philip A. Bralich, Ph.D. President and CEO Ergo Linguistic Technologies 2800 Woodlawn Drive, Suite 175 Honolulu, HI 96822 Tel: (808) 539-3920 Fax: (808) 539-3924 Sincerely, Anne Sing ERGO LINGUISTIC TECHNOLOGIES Manoa Innovation Center 2800 Woodlawn Drive, Suite 175 Honolulu, Hawaii 96822 TEL: (808) 539-3920 FAX: (808) 539-3924 From annes at HTDC.ORG Thu Jan 2 21:24:03 1997 From: annes at HTDC.ORG (Anne Sing) Date: Thu, 2 Jan 1997 11:24:03 -1000 Subject: Rapid Parser Repairs Message-ID: I didn't realize it but our head programmer was here last night (the holiday) and I fixed all but one or two of the sentences that Mr. Sleator said didn't work. Part of the problem was that our verb section of our dictionary on the web was corrupted. The important point being that it is easy for us to update and repair problems with our parser. Something that most others cannot handle. That is, in most cases, even minor repairs take months. Phil Bralich Philip A. Bralich, Ph.D.
President and CEO ERGO LINGUISTIC TECHNOLOGIES Manoa Innovation Center 2800 Woodlawn Drive, Suite 175 Honolulu, HI 96822 TEL: 808-539-3920 FAX: 808-539-3924 From kuzar at RESEARCH.HAIFA.AC.IL Fri Jan 3 06:38:18 1997 From: kuzar at RESEARCH.HAIFA.AC.IL (Ron Kuzar) Date: Fri, 3 Jan 1997 08:38:18 +0200 Subject: Tone of Discussion Message-ID: Sorry to interfere, but the arrogant tone of discussion as initiated by Anne Sing is very annoying. I, for one, am very interested in trees (and other structural descriptions) and do not care at all if these trees can be afterwards utilized for commercial products. I do think that intellectual challenges may be launched and may benefit the thinking community, so could you please calm down and spare us the show. Roni --------------------------------------------------------------- | Dr. Ron Kuzar | | Office address: Department of English Language and Literature | | Haifa University | | IL-31905, Haifa, Israel | | Office fax: +972-4-824-0128 (attention: Dept. of English) | | Home address: 17/6 Harakefet St. | | IL-96505, Jerusalem, Israel | | Telephone: +972-2-641-4780 (Local 02-641-4780) | | E-mail: kuzar at research.haifa.ac.il | --------------------------------------------------------------- From bralich at HAWAII.EDU Fri Jan 3 18:07:41 1997 From: bralich at HAWAII.EDU (Philip A. Bralich, Ph.D.) Date: Fri, 3 Jan 1997 08:07:41 -1000 Subject: Tone of Discussion Message-ID: At 08:38 PM 1/2/97 -1000, Ron Kuzar wrote: >Sorry to interfere, but the arrogant tone of discussion as initiated by >Anne Sing is very annoying. I, for one, am very interested in trees (and >other structural descriptions) and do not care at all if these trees can >be afterwards utilized for commercial products. I do think that >intellectual challenges may be launched and may benefit the thinking >community, so could you please calm down and spare us the show. These issues are more important than your letter seems to indicate.
Parsing technology is an important area for the future of computational linguistics. A proper understanding of syntax is crucial to this endeavor. However, the creation of trees alone is not enough. The trees are meant to illustrate generalizations about language based on a particular theory of syntax. If the trees cannot be used to manipulate sentences or label parts of the sentence such as subjects and verbs, then the theory is not adequate. The real test is not the creation of trees. The test is whether these trees allow you to analyze and manipulate language in a significant way, thereby demonstrating the efficacy of the theory. As for the tone, I honestly don't see it to be much different from about 90% of what occurs on this or other lists. Certainly the tone of your message seems problematic. It is, in many cases, necessary to make a point in a world that is increasingly dominated by this sort of rhetoric. My participation in it is reluctant. However, if I did not participate, my arguments would be lost. Phil Bralich Philip A. Bralich, Ph.D. President and CEO Ergo Linguistic Technologies 2800 Woodlawn Drive, Suite 175 Honolulu, HI 96822 Tel: (808)539-3920 Fax: (808)539-3924 From PDeane at DATAWARE.COM Fri Jan 3 18:11:00 1997 From: PDeane at DATAWARE.COM (Paul Deane) Date: Fri, 3 Jan 1997 13:11:00 -0500 Subject: FW: Parser Challenge Message-ID: After reading the recent postings on FUNKNET about the parser challenge, I went to the Ergo parser site and tried it out. I was particularly interested since I have worked with the Link Grammar parser extensively, and other parsers, and so I have a pretty good idea what the state of the art looks like. The functionality built into the Ergo interface is very nice: certainly it is an advantage, for the purposes of evaluating parsers, being able to get the grammatical analysis output directly in a simple and easily understood format.
And such functionalities as getting transformational variants of sentences (especially question-answer pairs) are of obvious commercial benefit. (Though there are certainly other sites with such functionality. Usually, though, that is something built for a particular application on top of a parser engine, rather than being built into the parser. It would be nice as a standard parser feature though.) Leaving that aside, I found the performance of the Ergo parser substantially below state of the art in the most important criterion: being able to parse sentences reliably - at least, judging by the web demo (though there are some risks in doing so, of course, since it is always possible that performance problems are the result of incidental bugs rather than the fundamental engine or its associated database). Quite frankly, though, the self-imposed limitation of 12-14 words concerned me right off the bat, since most of the nastiest problems with parsers compound exponentially with sentence length. But I decided to try it out within those limitations. As a practical test, I took one of the emails sent out from Ergo, and tried variants of the sentences in it. By doing this, I avoided the trap of trying simple garden-variety "example sentences" (which just about any parser can handle) in favor of the variety of constructions you can actually get in natural language text. But I reworded it slightly where necessary to eliminate fragments and colloquialisms and to get it into the 12-14 word length limit. That meant in most cases I had to try a couple of variants involving parts of sentences, since most of the sentences in the email were over the 12-14 word limit. Here were the results:

I didn't realize it but our head programmer was here last night.
-- did not parse

I fixed the sentences that Mr. Sleator said didn't work.
-- failed to return a result at all within a reasonable time; I turned it off and tried another sentence after about ten minutes.
Our verb section of our dictionary on the web was corrupted.
-- parsed in a reasonable time

Part of the problem was that our dictionary was corrupted.
-- took 74.7 seconds to parse

It is easy for us to update and repair problems with our parser.
-- again, it failed to return a result in a reasonable time

This is something that most others cannot handle.
-- did not parse

Even minor repairs take months.
-- again, it failed to return a result in a reasonable time

I am not particularly surprised by these results. Actual normal use of language has thousands of particular constructions that have to be explicitly accounted for in the lexicon, so even if the parser engine Ergo uses is fine, the database could easily be missing a lot of the constructions necessary to handle unrestricted input robustly. Even the best parsers I have seen need significant work on minor constructions; but these sentences ought to parse. They are perfectly ordinary English text (and in fact all but one parses in less than a second on the parser I am currently using). No doubt the particular problems causing trouble with these sentences can be fixed quickly (any parser which properly separates parse engine from rule base should be easy to modify quickly) but the percentage of sentences that parsed suggests that there's a fair bit of work left to be done here. From bralich at HAWAII.EDU Fri Jan 3 20:32:09 1997 From: bralich at HAWAII.EDU (Philip A. Bralich, Ph.D.) Date: Fri, 3 Jan 1997 10:32:09 -1000 Subject: FW: Parser Challenge Message-ID: At 08:11 AM 1/3/97 -1000, Paul Deane wrote: >After reading the recent postings on FUNKNET about the parser challenge, >I went to the Ergo parser site and tried it out. I was particularly >interested since I have worked with the Link Grammar parser extensively, >and other parsers, and so I have a pretty good idea what the state of >the art looks like.
> >The functionality built into the Ergo interface is very nice: certainly >it is an advantage, for the purposes of evaluating parsers, being able >to get the grammatical analysis output directly in a simple and >easily understood format. And such functionalities as getting >transformational variants of sentences (especially question-answer >pairs) are of obvious commercial benefit. (Though there are certainly >other sites with such functionality. Usually, though, that is something >built for a particular application on top of a parser engine, rather >than being built into the parser. It would be nice as a standard parser >feature though.)

This is the main point of our challenge. We chose these criteria because they demonstrate to anyone the ability to do the basic tasks that underlie any real-world parsing job: name part of speech, part of sentence, tense, sentence type, internal clauses and so on. Merely claiming these abilities, or making them visible only to those who know the theory, is not enough really.

>Leaving that aside, I found the performance of the Ergo parser ... >As a practical test, I took one of the emails sent out from Ergo, and >tried variants of the sentences in it. By doing this, I avoided the trap >of trying simple garden-variety "example sentences" (which just about >any parser can handle) in favor of the variety of constructions you can >actually get in natural language text. But I reworded it slightly where >necessary to eliminate fragments and colloquialisms and to get it into >the 12-14 word length limit. That meant in most cases I had to try a >couple of variants involving parts of sentences, since most of the >sentences in the email were over the 12-14 word limit.

This is a somewhat odd set of sentences to begin with, though not completely unfair. We are suggesting that the problem in parsing is that most people are not handling anything properly. That is, most cannot handle the analysis of small or medium sentences properly.
So the sentences you put in may be at our current upper length limit (partially because our dictionary is only 60,000 words in size). Still, we have no idea whether any other parser can do a full parse of small and medium sentences. The point of the challenge is to establish very tough criteria and then work with it from smaller to medium to larger sentences. The sentences input in this test will be working in just a few weeks, but no other parser meets our challenge for small or medium size sentences. We need to look at all parsers for all these criteria from small to large. By the way, our current development will allow us to take large steps forward every two months for the next year. After that we should level out. The main points being this:

1. All parsers should be held to the task of labelling parts of speech, parts of the sentence, sentence type, and tense and voice, as well as being able to manipulate strings: change actives to passives and statements to questions and so on. This, after all, is what parsing is. Creating trees is a preliminary step toward formulating these generalizations about the syntax of the language you are analyzing.

2. These criteria should be held for small, medium, and large sentences.

3. As our parser improves we will hold to these criteria for all size sentences.

4. As it is, only our parser can do all this for sentences of ANY size. The claims of other parsers are merely assertions until they provide these functions on a web site that all can see.

Phil Bralich Philip A. Bralich, Ph.D. President and CEO Ergo Linguistic Technologies 2800 Woodlawn Drive, Suite 175 Honolulu, HI 96822 Tel: (808)539-3920 Fax: (808)539-3924 From cumming at HUMANITAS.UCSB.EDU Fri Jan 3 21:13:45 1997 From: cumming at HUMANITAS.UCSB.EDU (Susanna Cumming) Date: Fri, 3 Jan 1997 13:13:45 -0800 Subject: Parsing Message-ID: Folks, I'm somewhat surprised that this parsing discussion is taking place on Funknet.
Analysis of isolated "sentences" into labeled trees without reference to contextual factors is something that human beings never do (in fact we never even encounter such objects), so why should we be interested as functional linguists in whether a computer can do it? Paul Deane's point that the range of constructions encountered in everyday language use is still much wider than has yet been accommodated in any actual grammatical description, on or off-line, is well-taken. Moreover, he takes his examples from written language; far greater yet is the range encountered in everyday informal speech -- the only universal form of language use. This is the real test if we are interested in language processing as scientists rather than as software engineers. While I for one am certainly in favor of computer modelling of natural language production and understanding as a tool for testing our hypotheses about the way contextual factors and linguistic form interact, for me such attempts are only interesting to the extent that they reflect actual human behavior in natural communication situations. This is not of course to denigrate the commercial potential of sentence-level parsing, a matter the marketplace can decide. Susanna Cumming From dryer at ACSU.BUFFALO.EDU Sat Jan 4 09:21:50 1997 From: dryer at ACSU.BUFFALO.EDU (Matthew S Dryer) Date: Sat, 4 Jan 1997 04:21:50 -0500 Subject: parsing Message-ID: I second Susanna Cumming's surprise that this parsing discussion is taking place on Funknet, as well as her other comments, but want to add a few additional comments. Quite apart from the major issue of context, there are a number of other ways in which the discussion seems a number of steps removed from what people do when they parse sentences. The notion that the output of a parse is a syntactic tree is odd. The "real" output is some sort of meaning.
It may very well be that parsing the sentence syntactically plays a major role in allowing people to determine the meaning, but that doesn't mean that an entire tree is produced in the process. Many parsers for computer programming languages parse computer programs syntactically, in that they identify the syntactic structure, but this is only because of the extent to which certain aspects of meaning are associated with syntactic structure, and identifying these aspects of syntactic structure is crucial in determining the intended meaning. But even when they do, they do not construct syntactic trees for the program; they construct a representation of the meaning. Parsing is only of interest if the output is some sort of meaning. I thus take issue with the claim

>>All parsers should be held to the task of labelling parts of >>speech, parts of the sentence, sentence type, and tense and >>voice as well as being able to manipulate strings: change >>actives to passives and statements to questions and so on. >>This after all is what parsing is.

This has little to do with what parsing is, if by parsing we are referring to something that people do. Speakers of any language can parse sentences in their language without being able to label parts of speech or other grammatical features. Nor do they need to be able to change actives into passives. What they need to be able to do is parse active and passive sentences and come up with the same denotative meaning. Being able to change actives to passives is not necessary for this. Nor is this just a terminological issue about what "parsing" is. If part of the test of a syntactic theory is its ability to parse sentences, then part of the test of a syntactic theory is how successful it is at assigning meanings in context, either as part of the theory itself, or in terms of its interaction with other systems.
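Dryer's point about programming-language parsers can be made concrete. A recursive-descent parser for arithmetic never needs to build or return a tree: its grammar rules go straight from the string to the "meaning", here the computed value. (A standard textbook technique, sketched in Python; not tied to any parser discussed in this thread.)

```python
import re

def evaluate(src):
    """Parse and evaluate '+'/'*' arithmetic without building a tree:
    each grammar rule returns a value, not a node."""
    tokens = re.findall(r"\d+|[+*()]", src)
    pos = 0

    def peek():
        return tokens[pos] if pos < len(tokens) else None

    def factor():                 # factor -> NUMBER | '(' expr ')'
        nonlocal pos
        tok = tokens[pos]; pos += 1
        if tok == "(":
            val = expr()
            pos += 1              # consume ")"
            return val
        return int(tok)

    def term():                   # term -> factor ('*' factor)*
        nonlocal pos
        val = factor()
        while peek() == "*":
            pos += 1
            val *= factor()
        return val

    def expr():                   # expr -> term ('+' term)*
        nonlocal pos
        val = term()
        while peek() == "+":
            pos += 1
            val += term()
        return val

    return expr()

print(evaluate("2+3*(4+1)"))  # 17
```

The syntactic structure is fully identified during the parse (precedence, grouping), yet no tree ever exists; the only output is the interpretation, which is exactly the distinction Dryer is drawing.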
Thus the idea that parsing simply involves producing trees is reminiscent of the sort of modular view of syntax that most functionalists reject. When someone tells me about a web site at which one can have conversations with a computer program attached to a knowledge database, even if the area of knowledge is very limited, and the vocabulary very limited, and only simple syntactic structures permitted, then I'll be interested. Matthew Dryer From ellen at CENTRAL.CIS.UPENN.EDU Mon Jan 6 02:38:42 1997 From: ellen at CENTRAL.CIS.UPENN.EDU (Ellen F. Prince) Date: Sun, 5 Jan 1997 21:38:42 EST Subject: parsing In-Reply-To: Your message of "Sat, 04 Jan 1997 04:21:50 EST." Message-ID: i'm quite perplexed by matthew dryer's claim that people don't parse they just interpret... how do you envision going from the phonological or graphological (?) string to an interpretation without parsing? what about ambiguous sentences like: they can fish they saw the man with the telescope or the garden path types like: the horse raced past the barn fell or in fact any other sentence you can think of... obviously, no one assumes that people need to be able to consciously label items with parts of speech etc -- but clearly they know what's what -- and they would have to come out with SOMETHING that would be formally equivalent to a syntactic parse, it would seem. no??? btw, i'm in no way offering this as support for the parser hype we've just seen! From cleirig at SPEECH.SU.OZ.AU Mon Jan 6 03:19:26 1997 From: cleirig at SPEECH.SU.OZ.AU (Chris Cleirigh) Date: Mon, 6 Jan 1997 14:19:26 +1100 Subject: parsing Message-ID: The recent discussion on parsing suggests the possibility that "new functional is but old formal writ large" in America. 
chris From dryer at ACSU.BUFFALO.EDU Mon Jan 6 04:57:57 1997 From: dryer at ACSU.BUFFALO.EDU (Matthew S Dryer) Date: Sun, 5 Jan 1997 23:57:57 -0500 Subject: Response to Ellen Prince Message-ID: Ellen Prince says >>i'm quite perplexed by matthew dryer's claim that people don't parse >>they just interpret I did not intend to imply this. I do believe that people parse, and that part of the process involves identifying syntactic structure. My point is that the process of parsing involves the identification of syntactic structure only as a means for determining the meaning, that the identification of syntactic structure per se is of no value as an end in itself, and that the real test of a parser is its ability to identify meaning in context, something that can only be tested by incorporating it in a system that can engage in conversation. In fact, not only do I believe that part of the process of parsing involves the identification of syntactic structure, but I believe that that is why languages have syntax. Some functionalists question the reality of syntax, largely, I suspect, because they don't see what function it would serve. Under my view, syntax makes the process of interpreting sentences easier than it otherwise would be, because the identification of syntactic structure assists in identifying the meaning. While I believe that pragmatic inference also plays a very significant role in interpretation, syntax allows some of the task of interpretation to be done in a simpler, more automated fashion, reducing what might otherwise be an overwhelming demand on pragmatic inference. Matthew Dryer From lakoff at COGSCI.BERKELEY.EDU Mon Jan 6 06:07:25 1997 From: lakoff at COGSCI.BERKELEY.EDU (George Lakoff) Date: Sun, 5 Jan 1997 22:07:25 -0800 Subject: Response to Ellen Prince Message-ID: If this were taking place on cogling, the discussion would be rather different.
Given advances in cognitive semantics, a cognitive grammar (according to Langacker, myself, and others, though not all cognitive linguists) uses no formal syntax at all. Rather it contains constructions that are direct pairings between aspects of cognitive semantics and their phonological expression in a given language. The old syntactic categories are semantic categories (in some cases radial categories). The old fashioned syntactic hierarchical structure is semantic hierarchical structure, and so on, as discussed in the various cognitive grammar literature. Since not everything has been analyzed in these terms yet, one cannot claim that we know this can be done, and some folks are more optimistic than others. But the question would be how to handle recalcitrant cases. Most people would agree that a "parse" yields an embodied cognitive semantic characterization of meaning as an output. In the evolving field of neural cognitive semantics (coming out of ICSI), we would have more stringent criteria for a "parser" -- that it be neurally realistic, done in structured connectionism, obey the hundred step rule, be performable in real time, be learnable, have an embodied semantics, be able to deal with blends (a la Fauconnier and Turner) and with metaphor, use plausible neural binding techniques, be able to deal with garden path sentences, be able to derive correct contextually appropriate inferences, and on and on. The general point, of course, is that what a "parse" is depends on the field you're in and on what assumptions you're making about what linguistics is. For this reason, a parser challenge doesn't make much sense unless everybody agrees upon their theoretical assumptions, which doesn't seem to be the case in the funknet group. This is not to denigrate anybody's "parsing" efforts given whatever assumptions about linguistics they happen to like.
There are few enough linguists and few enough people working on serious parsing efforts from whatever theoretical perspective that we ought to welcome all efforts, however diverse in theoretical perspective they seem to be. However, since this is not happening on cogling, I'll yield my two cents and leave the discussion to core funknetters. Happy New Year To All! George From Carl.Mills at UC.EDU Mon Jan 6 13:50:30 1997 From: Carl.Mills at UC.EDU (Carl.Mills at UC.EDU) Date: Mon, 6 Jan 1997 08:50:30 -0500 Subject: Reply to Matthew Dryer's reply to Ellen Prince Message-ID: In his reply to Ellen Prince Matthew Dryer says "My point is that the process of parsing involves the identification of syntactic structure only as a means for determining the meaning, that the identification of syntactic structure per se is of no value as an end in itself, and that the real test of a parser is its ability to identify meaning in context, something that can only be tested by incorporating it in a system that can engage in conversation." Whenever anyone uses the word *meaning* in a serious linguistic discussion, I put my hand on my wallet. Back in the 1930s, Ogden and Richards wrote an entire book called *The Meaning of Meaning* in which they concluded, if I remember correctly, that they thought maybe they didn't know. In other words, the word *meaning* has a meaning that is both so broad and so vague as to render *meaning* well nigh empty of empirical content. If we are going to use meaning to decide issues relating to syntax, including whether syntax exists, we need to agree on what we mean by *meaning*. 
Carl Mills From nuyts at UIA.UA.AC.BE Mon Jan 6 14:33:29 1997 From: nuyts at UIA.UA.AC.BE (Jan.Nuyts) Date: Mon, 6 Jan 1997 15:33:29 +0100 Subject: updated announcement bookseries Message-ID: *** Call for unpublished manuscripts *** - monographs or collected volumes - *** for a new book series *** HUMAN COGNITIVE PROCESSING An interdisciplinary series on language and other mental faculties Editors: Marcelo Dascal (Tel Aviv University) Raymond Gibbs (University of California at Santa Cruz) Jan Nuyts (University of Antwerp) Editorial address: Jan Nuyts University of Antwerp, Linguistics (GER) Universiteitsplein 1 B-2610 Wilrijk, Belgium e-mail: nuyts at uia.ua.ac.be Editorial Advisory Board: Melissa Bowerman (Psychology, MPI f. Psycholinguistics); Wallace Chafe (Linguistics, Univ. of California at Santa Barbara); Philip R. Cohen (AI, Oregon Grad. Inst. of Science & Techn.); Antonio Damasio (Neuroscience, Univ. of Iowa); Morton Ann Gernsbacher (Psychology, Univ. of Wisconsin); David McNeill (Psychology, Univ. of Chicago); Eric Pederson (Cogn. Anthropology, MPI f. Psycholinguistics); François Recanati (Philosophy, CREA); Sally Rice (Linguistics, Univ. of Alberta); Benny Shanon (Psychology, Hebrew Univ. of Jerusalem); Lokendra Shastri (AI, Univ. of California at Berkeley); Dan Slobin (Psychology, Univ. of California at Berkeley); Paul Thagard (Philosophy, Univ. of Waterloo). Publisher: John Benjamins Publishing Company, Amsterdam/Philadelphia Aim & Scope: HUMAN COGNITIVE PROCESSING aims to be a forum for interdisciplinary research on the cognitive structure and processing of language and its anchoring in the human cognitive or mental systems in general.
It aims to publish high quality manuscripts which address problems related to the nature and organization of the cognitive or mental systems and processes involved in speaking and understanding natural language (including sign language), and the relationship of these systems and processes to other domains of human cognition, including general conceptual or knowledge systems and processes (the language and thought issue), and other perceptual or behavioral systems such as vision and non-verbal behavior (e.g. gesture). `Cognition' and `Mind' should be taken in their broadest sense, not only including the domain of rationality, but also dimensions such as emotion and the unconscious. The series is not bound to any theoretical paradigm or discipline: it is open to any type of approach to the above questions (methodologically and theoretically) and to research from any discipline concerned with them, including (but not restricted to) different branches of psychology, artificial intelligence and computer science, cognitive anthropology, linguistics, philosophy and neuroscience. HUMAN COGNITIVE PROCESSING especially welcomes research which makes an explicit attempt to cross the boundaries of these disciplines. PLEASE SEND IN A RESUME BEFORE SUBMITTING THE FULL MANUSCRIPT ***** Jan Nuyts phone: 32/3/820.27.73 University of Antwerp fax: 32/3/820.27.62 Linguistics email: nuyts at uia.ua.ac.be Universiteitsplein 1 B-2610 Wilrijk - Belgium From nick at STL.RESEARCH.PANASONIC.COM Mon Jan 6 19:20:27 1997 From: nick at STL.RESEARCH.PANASONIC.COM (Nicholas Kibre) Date: Mon, 6 Jan 1997 11:20:27 -0800 Subject: collecting email Message-ID: Hi all; I'm working on a project which requires me to analyze text patterns in email, and am trying to put together a corpus.
If anyone out there would be interested in contributing some old messages sitting in their mailbox, please forward them to: stltalk at stl.research.panasonic.com Somewhere a few months down the road, I should be able to report the results of this project. Anyway, any contributions would be much appreciated! Happy new year, Nick Kibre UC Santa Barbara Linguistics and Panasonic Speech Tech Lab nick at stl.research.panasonic.com From TGIVON at OREGON.UOREGON.EDU Tue Jan 7 03:31:02 1997 From: TGIVON at OREGON.UOREGON.EDU (Tom Givon) Date: Mon, 6 Jan 1997 19:31:02 -0800 Subject: Get real, George Message-ID: 1-7-97 Dear FUNK people, I was going to hold my peace on the parser issue, which I suspect has provided all of you with much merriment, having noted that Matt Dryer basically got it right: Neither the extremist position of the Syntax-for-the-love-of-syntax Chomskyites, nor the extremist position of syntax-doesn't-exist it's-all-lexicon-and-discourse Functionalists is really consonant with the facts of language as we all know them. In other words: Yes, Virginia, there is syntax. But no, Virginia, it is not there as an autonomous, non-adaptive flower of a genetic megamutation. Honest, I was going to bite my tongue and leave be this time -- till I read George Lakoff's condescending, gratuitous intervention. Even then, I would have still preferred that someone else do the honors. But then it dawned on me that perhaps I am in a better position than most to call George's bluff. You see, I've been watching George in action for thirty years now (La Jolla, Spring of 1967, "Is deep structure necessary?" boy the years sure fly). And over the years I have seen George get away with similar feats by remarkably similar means. That is, by insinuating that somehow the rest of y'all ignorant slobs better join his elect group of with-it cognoscenti or else you'll miss the (latest) boat.
So let us see what it would mean to the many of us who have been looking at grammar/syntax from a variety of perspectives if it now turned out that syntax does not really exist. And that at any rate, it plays no functional role in mediating between the cognitive-lexical-communicative levels and the phonetic output. To wit: 1. GRAMMATICALIZATION AND SYNTACTIC CHANGE: We have been describing how pre-grammar or non-grammar (lexicon cum parataxis) changes into grammar (syntax, morphology), how lexical items grammaticalized into morphology, how clauses that used to come under separate intonation contours somehow condense themselves under a single contour. We thought we were studying something real, poring over successions of older texts in arcane languages, wrestling with internal reconstruction in unwritten languages. But we've been deluding ourselves all along -- says George. Forget it, good non-with-it FUNK folks. It's not really there. And if it is, no matter, it DOES NOTHING. 2. ACQUISITION: We have been describing how children move gradually from pre-syntactic (pre-grammatical) communication to grammaticalized communication, adding a morpheme here, an embedded construction there; stabilizing rigid grammatical(ized) word-order where previously only semantically-based (AGT oriented) or pragmatically-based (TOPIC oriented) word-order could be found; gaining embedded constructions, gaining detransitive clauses, gaining fancy subject-inversion with auxiliaries; etc. etc. Forget it, folks, -- says George. It's been all for naught, what you've been studying is plainly a mirage. And just in case it did exist, it doesn't matter. Because -- it turns out -- it serves no function whatever. "We", the cognoscenti, have already "demonstrated" that we "can do it all" without grammar. Nice try, George. But how come the kids are still insisting on doing what they're doing? Are they, like us, deluded too? 3.
PIDGINS AND CREOLES: We have been observing for a long time the peculiar consequences of having no morpho-syntax; that is, of pre-grammatical (pidgin) communication. The halting, repetitious, error-prone, frustrating communicative mode of pidgin is familiar to many of us from early childhood studies. Doesn't Sue Ervin-Tripp work at UC Berkeley? Doesn't Dan Slobin? Isn't Ron Scollon's dissertation still available? Or Liz Bates? Or Eli Ochs' early works? Or Developmental Pragmatics (1979)? How about the extensive literature on Broca's Aphasia communication? Lise Menn's/Loraine Obler's magnificent 3-volume collection? And isn't the data of second language pidgin available? What is it that makes grammaticalized language -- such as the Creoles of children of Pidgin-speaking parents -- so much more fluent, fast-flowing, streamlined? What has been subtracted between the Creole and the Pidgin? According to George, nothing. But just in case it was something, forget it too. It serves no purpose. "We" can do without it. Well, as a person who has gone through the agony of moving from pidgin to grammaticalized communication five distinct times (and hated it with a passion every time...), I'd like to know why nobody had ever told me that I was wasting my precious time? That I was much better off mapping directly from cognitive to phonetic structure? Might have saved me years of toil and agony. 4. CROSS-LANGUAGE TYPOLOGICAL VARIATION: We have been observing how the very same cognitive-semantic-communicative function can be executed in different languages by a (relatively small, mind you) number of syntactically distinct constructions. We've also noted that those constructions represent distinct diachronic pathways of grammaticalization. We've seen this with complementation, with relative clauses, with passives, inverses, anti-passives, clause-chaining types, tense-aspect-modal systems, negation -- you bloody name it, we've been observing it. 
But, sorry, you simpleton FUNK folks. George says that -- it turns out -- what y'all been documenting so laboriously is -- right, folks -- nothing. And if indeed it is still something, never mind; because you see, it is there for no purpose whatever. "We" can do without it. 5. DISCOURSE: Here's the real bad news, folks. You've been studying for 30 years now how syntactic/grammatical constructions have specific communicative functions paired with them, systematically, intimately. What a colossal waste of energy. You see -- it turns out, George says -- that morpho-syntax doesn't really exist. What you should have been really studying all along, it turns out (you hear this -- Wally? Liz? Sandy? Russ? Jack? Barbara? Matt? John? Bob? Anna?) is how communicative function maps directly onto phonetic structure. Directly folks, directly. 6. COGNITIVE PSYCHOLINGUISTICS: Forget it, Russ Tomlin; sorry, Morti Gernsbacher; your loss, Brian MacWhinney; butt out, Walter Kitsch; you're past, Tony Sanford; get lost, Liz Bates. All your labor has been for naught. George has decreed your experiments null and void; whatever it is you're studying just doesn't exist. Or worse, you simpleton FUNK-folks, it is there for no purpose. 6. NEUROLOGY: We all know localization is complicated, that grammar in adults is distributed across many "modules". Sure, the modules bear little resemblance to their Jerry Fodor name-sakes. They interact, they talk to each other, they are NOT encapsulated, they collaborate with "cognitive" modules (attention, activation, memory, intention, pragmatic zooms, etc.). But however widely distributed, portions of this complex mechanism can be knocked out selectively by lesions. What is it, George, that aphasics have lost, exactly? You study their transcribed discourse (Menn and Obler, eds 1990, e.g.), and you notice that (i) the lexicon is there: nouns, verbs, adjectives. 
(ii) the coherence of discourse is still there (referential coherence, temporal coherence, all the measurables). So what is it that is NOT there? We used to think it was morphology and syntactic constructions. But it turns out, George now says, no go. Whatever we thought it was is not really there, never really was to begin with. And if for some reason it turns out it was, still no matter; it performed no function. So, one wonders, how come the poor slobs in the wards are having such a hard time stringing lexical items together into clauses and clauses into discourse? The most gratuitous insult, I must confess, is George's reference to "neural networks" and connectionism. This is a rather poor substitute for real neurology, which is vast, complex, frustrating and cannot be practiced by ersatz experts in search of the latest fad. When I ask the real neurologists I know what they think of connectionism, I get an incomprehension response. Never heard of it. Con what? So the exhortation to join the bandwagon before it leaves the station and we're stranded for good rings rather hollow. Especially this undignified business of "in real time". So, if nobody else has yet, I guess I must tell George that the "thing" that makes it possible for humans to process language at the rate of, roughly, 250 msecs per word and 1-3 seconds per idea (clause, proposition, event/state frame, intonation unit...) is called grammar/syntax. 7. EVOLUTION: I have saved this one for last, since in some funny way it remains the crux of the matter. Here's the real puzzle: Why should this manifestly existing, acquired, diachronically-changing, cognitively manipulable, neurologically-based entity called "grammar/syntax", with its complex, imperfect but nonetheless clearly manifest ICONICITY to semantic and pragmatic function(s) -- why should it ever evolve? With its marvelous hierarchic design, parts fitting into larger parts; why should this extravaganza ever evolve in the first place? 
According to George, the extravagance doesn't really exist. Presumably then, there's no difference between lexicon and morphology (forget what you've been observing, you grammaticalization hounds); no difference between parataxis and syntax (again, forget your puny facts, you poor un-enlightened souls, you un-cogged simpletons). But, just in case it did exist -- never mind. It's there for no reason, it just happened to evolve, somehow, for the love of God or Descartes. For those of us with ears that are yet undulled by the clamor to be with-it, the strange ghost of deja-vu is now creeping in: Hey, but this is what Chomsky has been saying all along about language evolution -- that it is a mysterious saltation, unguided by adaptive (communicative, cognitive) behaviour. I have probably said too much already. I always live to regret getting involved in these silly affairs. But I think if there is one lesson to be learned from this, it is perhaps that nothing comes cheap in real science. You can't do complex biologically-based science on the fly. If one intends to talk only to God or oneself or the Elect, that's certainly one's privilege. But if you want to be taken seriously, well, get serious first. Like, get real. Happy New Year y'all, TG From bresnan at CSLI.STANFORD.EDU Tue Jan 7 04:16:04 1997 From: bresnan at CSLI.STANFORD.EDU (Joan Bresnan) Date: Mon, 6 Jan 1997 20:16:04 -0800 Subject: Get real, George In-Reply-To: Your message of Mon, 06 Jan 1997 19:31:02 PST. <01IDWK5CL2KI935AF8@OREGON.UOREGON.EDU> Message-ID: TG: Bravo! Normally I just lurk here, but I can't help observing how much of what TG says in defense of grammar/syntax and its rich functionality I agree with. --Joan From dever at VERB.LINGUIST.PITT.EDU Tue Jan 7 13:33:29 1997 From: dever at VERB.LINGUIST.PITT.EDU (Daniel L. 
Everett) Date: Tue, 7 Jan 1997 08:33:29 -0500 Subject: Get real, George In-Reply-To: <199701070416.UAA21681@Turing.Stanford.EDU> Message-ID: I must agree completely with Joan's evaluation of TG's posting. Absolutely a great posting. I find Tom's position one of the most attractive in linguistics - one that Pike and Hockett and others argued for convincingly as well. Language is a form-function composite (to use a phrase of Pike's). Both the form and function parts are rich. DLE ****************************** ****************************** Dan Everett Department of Linguistics University of Pittsburgh 2816 CL Pittsburgh, PA 15260 Phone: 412-624-8101; Fax: 412-624-6130 http://www.linguistics.pitt.edu/~dever From PDeane at DATAWARE.COM Tue Jan 7 14:31:00 1997 From: PDeane at DATAWARE.COM (Paul Deane) Date: Tue, 7 Jan 1997 09:31:00 -0500 Subject: Get real, George Message-ID: Whoa, folks! Why do we have to get polarized here? Call me naive if you like, but I didn't see George's post as denigrating functional work. And I certainly don't see putting words into his mouth as fair. He specifically said what things he would like to see. Granted, it's not what a lot of people on FUNKNET want to focus on, but does that turn it automatically into an attack? Do we have to base our discussions on personalities? More to the point, if people feel they have to take such a post as an attack, it would help if the discussion were moved to a level that would generate light and not heat. I have a certain stake in the matter since I've written a book couched in a "cognitive" framework (Grammar in Mind and Brain: Explorations in Cognitive Syntax, Mouton de Gruyter 1993) in which I refer to and attempt to incorporate practically everything on Tom Givon's list ... aphasia ... experimental psycholinguistics .... cross-linguistic typological hierarchies ... lots of things. I don't see a contradiction. And I don't see the point of making this into a zero sum game. 
I've seen enough flame discussions elsewhere. Please, let's not have one here. From TWRIGHT at ACCDVM.ACCD.EDU Tue Jan 7 15:39:20 1997 From: TWRIGHT at ACCDVM.ACCD.EDU (Tony A. Wright) Date: Tue, 7 Jan 1997 09:39:20 CST Subject: Get real, George Message-ID: Paul Deane wrote: > I've seen enough flame discussions elsewhere. Please, let's not have one > here. I agree that flames per se are not helpful, but I was quite thrilled, to be honest, with the actual discussion of a linguistic issue on FUNKNET after months of nothing but conference announcements. The only thing that kept me from unsubscribing was that the low volume of mail made my subscription to FUNKNET easy to forget. I have been trying to get a similar sort of discussion (but more civil) going on the GB2MP list for Chomskyan syntax, which I moderate. It has been an uphill battle, but we have managed to have some very interesting and substantive back-and-forth about linguistic issues (as opposed to calls for papers and conference announcements). Our list traffic remains fairly light, however. I was thrilled to see the merits of parsers (and more thrilled still, the autonomy of syntax) discussed in this forum. I'd like to do anything I can to encourage an on-line scholarly dialogue between linguists (within norms of civility). Tony Wright Moderator, GB2MP, a list for issues in Chomskyan syntax To subscribe, send the message: subscribe gb2mp to the address: majordomo at colmex.mx From jrubba at HARP.AIX.CALPOLY.EDU Tue Jan 7 18:43:32 1997 From: jrubba at HARP.AIX.CALPOLY.EDU (Johanna Rubba) Date: Tue, 7 Jan 1997 10:43:32 -0800 Subject: Get real, George In-Reply-To: Message-ID: I've been enjoying the back-and-forth of this discussion, but could someone please enlighten me as to why I am getting all these messages twice?? As all you busy people know, every second counts -- including the 2-3 secs. 
it takes to delete that extraneous repeat message ;-) Happy New Year all; this list has awakened with a bang! ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Johanna Rubba Assistant Professor, Linguistics ~ English Department, California Polytechnic State University ~ San Luis Obispo, CA 93407 ~ Tel. (805)-756-2184 E-mail: jrubba at oboe.aix.calpoly.edu ~ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ From lakoff at COGSCI.BERKELEY.EDU Tue Jan 7 20:00:40 1997 From: lakoff at COGSCI.BERKELEY.EDU (George Lakoff) Date: Tue, 7 Jan 1997 12:00:40 -0800 Subject: Biological and cognitive realism Message-ID: Dear Tom, I'll start with something I forgot to mention. Dan Jurafsky's parser (see his Berkeley thesis) is set up to accord with the psycholinguistic data on language processing. It uses constructions that include semantic and pragmatic information. Those not familiar with Dan's work should be. In addition, there's going to be a conference at ICSI at Berkeley after Cogsci next August on psycholinguistically-based language processing. Dan Jurafsky and Terry Regier are running it. The point is that there is a community of computational linguists who have higher standards than just getting tree parses. Now back to your posting. Tom, I said grammar does exist. As has Langacker. What we say is that it exists as constructional pairings of cognitive semantics and phonological representations (word order included). The exact nature of grammar is an empirical question. We argue our position empirically. See, for instance, my CLS 86 paper on the frame semantic control of the coordinate structure constraint, where I argue that there is no autonomous syntactic coordinate structure constraint. If you want some details, take a look at Ron's Foundations of Cognitive Grammar (about 1000 pages of details), as well as the rest of the literature in the field. If you want a real short intro, start with Ron's Concept, Image and Symbol. 
Also, I can't imagine how you could possibly have read the last 120 pages of Women, Fire and Dangerous Things -- the most extensive study of there-constructions ever done by far -- and still think I don't believe in grammatical constructions. Try reading it for evidence for constructions and against autonomous syntax. What I said was that AUTONOMOUS syntax didn't exist. That's a very, very different claim from the claim that grammar does not exist, which is ridiculous. It seems to be obviously true that syntax is not purely autonomous. If syntax were autonomous, it could take into account no input from anything else, such as meaning, general cognition, perception, processing considerations, etc. In short, it would be as Chomsky has always represented it, a grammar box with no input, only output. This is necessary if Chomsky's basic metaphor is to be accepted, namely, that a sentence is a sequence of formal symbols, a language is a set of such sequences, and grammars are devices for generating such sets. The theory of formal grammars requires that rules be stated only in terms of symbols in the grammar. In short, no external input that is not in the formal language can be looked at by the rules of a formal grammar. Such a view, if it is to be instantiated in the brain, would require a brain module (or some complex widely distributed neural subnetwork) WITH NO INPUT! But there is nothing in the brain with no input. Such a view is biologically impossible. What we have proposed instead is that grammar is based on other nonlinguistic cognitive abilities and that neural connections and bindings bring about grammar. We try to show exactly how. I suggest you read the discussion by Gerald Edelman in Bright Air, Brilliant Fire called "Language: Why The Formal Approach Fails", starting on p. 241 for a neuroscientist's view of the issue. Edelman is director of the Neuroscience Institute at Scripps in La Jolla. Next, I was not attacking functionalists at all. 
I happen to like and teach much functionalist research. I think functionalism has contributed a great deal. Indeed, I was not attacking anybody. As I said, I think there are few enough linguists doing serious research in any school, and despite theoretical differences, lots of folks have lots of things to teach us. > >1. GRAMMATICALIZATION AND SYNTACTIC CHANGE: Cognitive linguists have been working on grammaticalization and syntactic change for quite a while. But it requires a nonautonomous view of grammar. Traugott has written for decades on the need for pragmatics and metonymy in accounting for grammaticalization. Kemmer has argued overwhelmingly that semantics (of a cognitive rather than formal nature) is required for grammaticalization. Heine has been arguing that conceptual metaphor is necessary to understand grammaticalization. That has been confirmed in great detail by Sarah Taub, in one of the most extensive studies of grammaticalization to date -- on Uighur. Any serious look at the grammaticalization literature in recent years will see the role of cognitive semantics -- especially metonymy and metaphor -- in grammaticalization. >2. ACQUISITION: As Dan Slobin has shown, the semantics precedes the >acquisition of much of grammar. As for the acquisition of semantics, there I suggest you read Terry Regier's THE HUMAN SEMANTIC POTENTIAL, MIT Press, 1996. Regier's acquisition model shows that the acquisition of spatial relations concepts and terms (not counting 3D and force-dynamics) in a significant range of the world's languages can be accomplished on the basis of structured neural models -- with the structure given by models of neural structures known to exist in the brain: topographic maps of the visual field, orientation-sensitive cells, center-surround receptive fields, and so on. The point is that NONLINGUISTIC aspects of brain structure can be used to learn conceptual and linguistic elements. Regier also shows how this can be done with NO NEGATIVE INPUT. 
Regier did his work in our (Feldman's and my) group several years ago. For recent work on related topics from the ICSI Lzero group, check out the Lzero website at icsi.berkeley.edu/lzero. Other topics have included the learning of verbs of hand motion, aspect, and metaphor. The point is to do this in a way that is biologically and cognitively realistic. > >3. PIDGINS AND CREOLES: We have been observing for a long time the >peculiar consequences of having no morpho-syntax Ron and I of course recognize morphosyntax. We just give a nonautonomous theory of it. I happen to like most of the work you cited, much of which is not set in terms of autonomous syntax. >4. CROSS-LANGUAGE TYPOLOGICAL VARIATION: As Bill Croft, Suzanne Kemmer, Ron Langacker and others have been writing for years, a linguistics based in cognitive semantics does a better job of getting at cross-linguistic typological variation. Functionalist studies of classifier languages (Colette Craig) and speech act indicators (Nichols and Chafe) show that such typological work cannot be done with an autonomous syntax that does not admit semantic factors. > >5. DISCOURSE: Again, Ron and I support a nonautonomous theory of grammar and of constructions. Our work fits very well with the work you cite by Wally, Sandy, Jack, and others of our friends and colleagues. I can't imagine any of them supporting an autonomous syntax. Indeed, the whole idea of the CSDL (Conceptual Structure, Discourse and Language) conferences was to bring together the cognitive and functionalist approaches into a unified group. The first two conferences were enormously successful. I know you didn't attend either, Tom, but why not come to this year's conference at Boulder? It should be as good as the others. For those who haven't bought it yet, I recommend the proceedings of the first conference, CONCEPTUAL STRUCTURE, DISCOURSE, AND LANGUAGE (edited by Adele Goldberg and published by CSLI publications, distributed by Cambridge University Press). > >6. 
COGNITIVE PSYCHOLINGUISTICS: Same point again. The Nijmegen group has >been studying universals of spatial relations, which requires a form of >cognitive semantics (not formal semantics or formal syntax). Brian and Liz >have been arguing for years against autonomous syntax and the language >box. For an extensive study from cognitive psycholinguistics supporting the results of cognitive semantics, read Ray Gibbs' THE POETICS OF MIND, published by Cambridge University Press. If those experiments don't convince you, I don't know what will. >6. NEUROLOGY: We all know localization is complicated, that grammar >in adults is distributed across many "modules". Sure, the modules bear >little resemblance to their Jerry Fodor name-sakes. They interact, they >talk to each other, they are NOT encapsulated, they collaborate with >"cognitive" modules (attention, activation, memory, intention, pragmatic >zooms, etc.). Exactly the point. No syntax module without input from the rest of the brain, hence no autonomous syntax. But however widely distributed, portions of this complex >mechanism can be knocked out selectively by lesions. What is it, George, >that aphasics have lost, exactly? The Damasios have answered that: Connections. Not localized modules. You study their transcribed discourse >(Menn and Obler, eds 1990, e.g.), and you notice that > (i) the lexicon is there: nouns, verbs, adjectives. > (ii) the coherence of discourse is still there (referential coherence, > temporal coherence, all the measurables). >So what is it that is NOT there? Again, connections. Drop a note to Liz Bates at UCSD for all her many surveys of arguments against the syntax module. Agrammatic patients have been shown to be able to make grammaticality judgments (Linebarger, et al). Liz cites her own work with an agrammatic patient in Italy (a well-educated architect) who could not repeat a grammatical sentence, but could only say one word: the Greek grammatical term for the syntactic phenomenon! 
The point: It is now well established that agrammatism is not the wiping out of a supposed "syntax module". About connectionism, there is a big, big difference between PDP connectionism and structured connectionism. I was explicitly talking about the latter. The former cannot account for most linguistic phenomena. > >7. EVOLUTION: Your remarks about iconicity attest to the inadequacy of autonomous syntax. Iconic constructions require the pairing of form and meaning. That cannot be done in autonomous syntax. For a discussion in my work, see Chapter 20 of Metaphors We Live By. A magnificent study of the role of cognition, especially cognitive semantics, in iconicity is now in progress -- Sarah Taub's dissertation on ASL. If you have never heard Sarah talk on the subject, you should. Invite her up to Oregon as soon as possible! In the course of evolution, layers have been added to the brain. Higher cognition is done in the neocortex, which is furthest from direct bodily input and which takes input from layers closer to bodily input. The study of conceptual metaphor shows that that huge system is grounded in the perceptual and motor system and that abstract concepts tend to be conceptualized in bodily terms. This is just what one would predict given the evolutionary structure of the brain. Tom, your posting was useful because all the topics you mentioned are important, and indeed support the cognitive position (and other work on nonautonomous grammar, like most functionalist work). I'm sorry you didn't understand what I was saying. I've written a lot on the subject, so I thought it would be clear, but maybe it wasn't for those not into the cognitive literature. For that, I apologize. I hope this clarifies the position. I'm also glad that Joan and Dan are really into your work, Tom. I hope that they've been reading other functionalists, and maybe they'll get to the cognitive literature too that way. I agree with Tony Wright. Serious discussion is needed. 
It should be based on serious reading, of course. Gotta get back to book writing. Take care, Tom, and Happy New Year. You're invited for a beer next time you're in Berkeley or I'm in Eugene or if you decide to go to Boulder for CSDL. Let's try to talk things out calmly and in detail. Best wishes, George From lakoff at COGSCI.BERKELEY.EDU Tue Jan 7 20:16:55 1997 From: lakoff at COGSCI.BERKELEY.EDU (George Lakoff) Date: Tue, 7 Jan 1997 12:16:55 -0800 Subject: REMINDER: CSDL 97 Abstract Deadline Message-ID: >Date: Mon, 6 Jan 1997 17:33:33 -0700 (MST) >From: "Laura A. Michaelis" >To: cogling at ucsd.edu >Subject: REMINDER: CSDL 97 Abstract Deadline >Mime-Version: 1.0 >Status: > > >REMINDER REMINDER REMINDER REMINDER REMINDER REMINDER REMINDER REMINDER >*CSDL97*CSDL97*CSDL97*CSDL97*CSDL97*CSDL97*CSDL97*CSDL97*CSDL97*CSDL97* > >The abstract submission deadline for CSDL '97, to take place at the >University of Colorado, Boulder May 24-26, 1997, is: > >JANUARY 17, 1997 > >SEND ABSTRACTS TO: > >CSDL Abstracts >Department of Linguistics CB 295 >University of Colorado >Boulder, CO 80309 > >Email submission is strongly encouraged. SEND *EMAIL* ABSTRACTS TO: > >csdl at babel.colorado.edu > >For further information on the conference and on abstract submission, see >our website: > >http://stripe.colorado.edu/~linguist/CSDL.html > > *** >Laura Michaelis >Dan Jurafsky >Barbara Fox > >CSDL 97 Program Committee From TGIVON at OREGON.UOREGON.EDU Tue Jan 7 20:49:26 1997 From: TGIVON at OREGON.UOREGON.EDU (Tom Givon) Date: Tue, 7 Jan 1997 12:49:26 -0800 Subject: etc. Message-ID: Dear George, Thanks for your gracious response to my (rather) temperamental outburst. It seems like we have zero disagreement. You don't believe that syntax does not exist. I don't believe (never have, as you know) that syntax is autonomous. That is, after all, a criterial feature for functionalists. So if it was all a misunderstanding, let us kiss and make up. I do tend to let my temper fly on occasion. 
I think you're absolutely right, the most important thing is that people keep communicating and exchanging ideas. As you know, I've benefited enormously from your work, and I hope to continue to. I think part of the problem is the creation of "labeled" sub-cultures that then tend to talk primarily to themselves. To me the label Cognitive Linguistics/Grammar somehow connotes that the rest of us are not cognitively oriented. This is what adjectives do, they tend to imply restriction. As you may have noticed, I have always refrained from putting a label on my work, or incorporating an official group. I continue to believe that what I (and we...) do is just plain linguistics, the unmarked case. I know this is not a politically popular attitude, but I find the proliferation of "labeled" groups (the alphabet soup...) rather undignified. Thanks again for your graciousness, George. And Happy New Year. TG From dquesada at CHASS.UTORONTO.CA Wed Jan 8 01:44:25 1997 From: dquesada at CHASS.UTORONTO.CA (Diego Quesada) Date: Tue, 7 Jan 1997 20:44:25 -0500 Subject: A non-issue of an important issue. In-Reply-To: <4D452461AF6@UCENGLISH.MCM.UC.EDU> Message-ID: On Mon, 6 Jan 1997 Carl.Mills at UC.EDU wrote: > Whenever anyone uses the word *meaning* in a serious linguistic > discussion, I put my hand on my wallet. That is nothing but (formalist?) prejudice. > Back in the 1930s, Ogden and Richards wrote an entire book called *The > Meaning of Meaning* in which they concluded, if I remember correctly, > that they thought maybe they didn't know. In other words, the word > *meaning* has a meaning that is both so broad and so vague as to render > *meaning* well nigh empty of empirical content. You are right, back in the 1930s. But this is the 1990's!!!, almost the 21st. C. A.D. Surely some progress must have been made during these 60-70 years, don't you think? That *meaning* is not reducible to something like: M = x + y/-r, etc. 
does not mean that our intuition (as linguists and speakers) and common sense cannot guide us when making analyses and claims about language. > If we are going to use meaning to decide issues relating to syntax, > including whether syntax exists, we need to agree on what we mean by > *meaning*. Ubi supra. The fact that we talk about the meaning of a lexeme that grammaticalizes or the meaning of a certain syntactic structure, etc. etc. is enough proof that we know what we mean by meaning. I cannot understand what the reason for complicating matters superfluously is. And though nobody seemed interested in responding to this, in my view, non-issue (maybe thereby showing that it is indeed a non-issue) I felt that just for the record a reply was in order. J. Diego Quesada University of Toronto From nuyts at UIA.UA.AC.BE Wed Jan 8 15:07:19 1997 From: nuyts at UIA.UA.AC.BE (Jan.Nuyts) Date: Wed, 8 Jan 1997 16:07:19 +0100 Subject: updated announcement bookseries Message-ID: This is an updated announcement of a new bookseries, first mailed on this list a few months ago. Please Post. *** Call for unpublished manuscripts *** - monographs or collected volumes - *** for a new book series *** HUMAN COGNITIVE PROCESSING An interdisciplinary series on language and other mental faculties Editors: Marcelo Dascal (Tel Aviv University) Raymond Gibbs (University of California at Santa Cruz) Jan Nuyts (University of Antwerp) Editorial address: Jan Nuyts University of Antwerp, Linguistics (GER) Universiteitsplein 1 B-2610 Wilrijk, Belgium e-mail: nuyts at uia.ua.ac.be Editorial Advisory Board: Melissa Bowerman (Psychology, MPI f. Psycholinguistics); Wallace Chafe (Linguistics, Univ. of California at Santa Barbara); Philip R. Cohen (AI, Oregon Grad. Inst. of Science & Techn.); Antonio Damasio (Neuroscience, Univ. of Iowa); Morton Ann Gernsbacher (Psychology, Univ. of Wisconsin); David McNeill (Psychology, Univ. of Chicago); Eric Pederson (Cogn. Anthropology, MPI f. 
Psycholinguistics); François Recanati (Philosophy, CREA); Sally Rice (Linguistics, Univ. of Alberta); Benny Shanon (Psychology, Hebrew Univ. of Jerusalem); Lokendra Shastri (AI, Univ. of California at Berkeley); Dan Slobin (Psychology, Univ. of California at Berkeley); Paul Thagard (Philosophy, Univ. of Waterloo). Publisher: John Benjamins Publishing Company, Amsterdam/Philadelphia Aim & Scope: HUMAN COGNITIVE PROCESSING aims to be a forum for interdisciplinary research on the cognitive structure and processing of language and its anchoring in the human cognitive or mental systems in general. It aims to publish high quality manuscripts which address problems related to the nature and organization of the cognitive or mental systems and processes involved in speaking and understanding natural language (including sign language), and the relationship of these systems and processes to other domains of human cognition, including general conceptual or knowledge systems and processes (the language and thought issue), and other perceptual or behavioral systems such as vision and non-verbal behavior (e.g. gesture). `Cognition' and `Mind' should be taken in their broadest sense, not only including the domain of rationality, but also dimensions such as emotion and the unconscious. The series is not bound to any theoretical paradigm or discipline: it is open to any type of approach to the above questions (methodologically and theoretically) and to research from any discipline concerned with them, including (but not restricted to) different branches of psychology, artificial intelligence and computer science, cognitive anthropology, linguistics, philosophy and neuroscience. HUMAN COGNITIVE PROCESSING especially welcomes research which makes an explicit attempt to cross the boundaries of these disciplines. 
PLEASE SEND IN A RESUME BEFORE SUBMITTING THE FULL MANUSCRIPT ***** Jan Nuyts phone: 32/3/820.27.73 University of Antwerp fax: 32/3/820.27.62 Linguistics email: nuyts at uia.ua.ac.be Universiteitsplein 1 B-2610 Wilrijk - Belgium From Carl.Mills at UC.EDU Wed Jan 8 15:29:05 1997 From: Carl.Mills at UC.EDU (Carl.Mills at UC.EDU) Date: Wed, 8 Jan 1997 10:29:05 -0500 Subject: A non-issue of an important issue Message-ID: J. Diego Quesada of the University of Toronto writes, in part, " You are right, back in the 1930s. But this is the 1990's!!!, almost the 21st. C. A.D. Surely some progress must have been made during these 60-70 years, don't you think? That *meaning* is not reducible to something like: M = x + y/-r, etc. does not mean that our intuition (as linguists and speakers) and common sense cannot guide us when making analyses and claims about language." and a bit later: "The fact that we talk about the meaning of a lexeme that grammaticalizes or the meaning of a certain syntactic structure, etc. etc. is enough proof that we know what we mean by meaning. I cannot understand what the reason for complicating matters superfluously is." I don't want to waste a lot of bandwidth on what is clearly a side issue, but it is statements like these that make communication difficult between functionalists and those of us who are not functionalists. The first passage quoted above seems to equate the passing of time with progress. And not very much progress has been made since the 1930s. As for "common sense," well, for a long time common sense had a lot of people convinced that the world was flat. The second passage contains an "argument" that is so vulnerable to a reductio ad absurdum that I hate to get into it. But late-19th-century physicists talked about the luminiferous ether without knowing what they meant by it--without knowing, in fact, that it didn't exist. We could add phlogiston, the philosopher's stone, and Bergson's elan vital. 
Carl From dick at LINGUISTICS.UCL.AC.UK Thu Jan 9 12:16:12 1997 From: dick at LINGUISTICS.UCL.AC.UK (Dick Hudson) Date: Thu, 9 Jan 1997 12:16:12 +0000 Subject: autonomous syntax Message-ID: Like Joan Bresnan I normally just `lurk' on this list, but it's been so interesting of late that I can't resist coming out of the shadows. I don't think I understand what George means by autonomous syntax (which he rejects), nor whether he is thereby rejecting syntax in general (as opposed to grammar, which he certainly does accept). Here's his first statement: >Tom, I said grammar does exist. As has Langacker. What we say is that it >exists as constructional pairings of cognitive semantics and phonological >representations (word order included). Taken at face value, this seems to say that there are just two linguistic levels, semantics and phonology (just like Chomsky's two interfaces, in fact!). Nothing between meanings and syllables, not even words. Take English verbs, for example. How do we say that `future meaning' maps onto /wil/ (or some such), whereas past maps onto /d/ (or some such), and that these bits of phonology are on opposite sides of the bits that express the lexical meaning? Or that "will" may be separated from the lexical bit by the subject etc etc etc? Notice that meaning may not be relevant; e.g. possessive HAVE has the same meaning whether it's used as an auxiliary verb or as a full verb (1 vs 2). (1) Have you a car? (2) Do you have a car? The generalisations that distinguish auxiliary and full verbs are `autonomous', in the sense that they refer to words, word-classes and syntactic relations, without mentioning meaning (or phonology). But I certainly believe that the function of syntax is to help hearers and speakers handle meaning (functionalism), and that the way in which we organise the information in our brains is in terms of prototype-like structures (cognitivism). Do I accept or reject autonomous syntax? 
Am I a formalist, a functionalist, a cognitivist, or just confused? Richard (=Dick) Hudson Department of Phonetics and Linguistics, University College London, Gower Street, London WC1E 6BT work phone: +171 419 3152; work fax: +171 383 4108 email: dick at ling.ucl.ac.uk web-sites: home page = http://www.phon.ucl.ac.uk/home/dick/home.htm unpublished papers available by ftp = ....uk/home/dick/papers.htm From lgarneau at HOTMAIL.COM Thu Jan 9 14:33:13 1997 From: lgarneau at HOTMAIL.COM (Luc Garneau) Date: Thu, 9 Jan 1997 14:33:13 -0000 Subject: deixis and demonstrative pronouns Message-ID: Hello All - I hate to interrupt the discussion of syntax, but I have been thinking and writing about deixis a lot, in particular regarding the different functions of the demonstrative pronouns "this" and "that". While I have had relatively little difficulty finding work on this concept in general (Halliday & Hasan - Cohesion in English had a nice section), I have had trouble finding much further work dealing specifically with these words...can anyone recommend anything? Thanks very much for any help! Luc Garneau --------------------------------------------------------- Get Your *Web-Based* Free Email at http://www.hotmail.com --------------------------------------------------------- From harder at COCO.IHI.KU.DK Thu Jan 9 15:32:42 1997 From: harder at COCO.IHI.KU.DK (harder at COCO.IHI.KU.DK) Date: Thu, 9 Jan 1997 16:32:42 +0100 Subject: syntax and form-meaning pairs (=signs) Message-ID: This is not the first time that a discussion about syntax in relation to cognitive linguistics creates problems of understanding, in spite of the fact that neither functionalists nor cognitive linguists believe in autonomous syntax, and both believe in the existence of grammar. 
In order to make it clear that one can want to talk about syntax and still be dealing with semantic phenomena I have suggested the term 'content syntax' for those combinatorial relations that create larger meanings out of component meanings (e.g. the head-modifier relation, etc). When one is talking about content syntax, the issue is neither individual form-meaning pairs nor the chimera of autonomous syntax. This is important in relation to a point made in George Lakoff's second message. As I understand it, it seems to imply that if one agrees that syntax can be described in terms of form-meaning pairs, there is no need to talk about the special properties of syntax. But the ability to combine meaning fragments into larger wholes does, from an evolutionary as well as a neurological point of view, seem to be rather a special skill. Saying that it is distributed over the brain and that it depends on connections rather than solely on a specific brain area does not appear to capture its special nature very precisely. The best way to show the superiority of a non-autonomous approach to syntax must be to show how the special nature of the ability to create complex expressions can be captured in a framework where the combination of meanings is the essential part. But this requires recognizing that the combinatory skill has properties that are different from the ability to associate form and meaning in a holophrastic sign. Combination does not disappear as a special problem in its own right, even if it is non-autonomous and involves form-meaning pairing. Peter Harder, U of Copenhagen From fjn at U.WASHINGTON.EDU Thu Jan 9 21:50:57 1997 From: fjn at U.WASHINGTON.EDU (Frederick Newmeyer) Date: Thu, 9 Jan 1997 13:50:57 -0800 Subject: autonomous syntax In-Reply-To: <9701091059.AB23955@crow.phon.ucl.ac.uk> Message-ID: Along with some of the other contributors to the discussion of syntax, grammar, and autonomy, I've decided to stop lurking in the woodwork. 
I think that I can speak for a majority of 'orthodox' generative grammarians when I assert that the question of the autonomy of syntax (AS) has nothing whatever to do with the 'fit' between (surface) form and meaning. Thus George's and Talmy's critiques of autonomy are not to the point. The AS hypothesis is not one about the relationship between form and meaning, but rather one about the relationship between 'form and form'. AS holds that a central component of language is a *formal system*, that is, a system whose principles refer only to formal elements and that this system is responsible for capturing many (I stress 'many', not 'all') profound generalizations about grammatical patterning. Let's take the most extreme 'cognitive linguistics' position, held, I think, by Anna Wierzbicka. According to this position, any particular observable formal aspect of language (e.g. categories, constructions, morphemes, etc.) can be characterized by necessary and sufficient semantic conditions. Is this position compatible with AS? Certainly it is. One need only go on to show that grammatical patterning is also to a large degree governed by more abstract relationships among formal elements that are not replaceable by statements whose primitive terms are semantic. A brilliant demonstration to this effect (and hence support for AS) has been provided by Matthew Dryer in an article in LANGUAGE. Dryer shows that the underlying generalization governing the Greenbergian word order correlations is not a semantic one (e.g. head-dependent relations or whatever), but rather the *principal branching direction* of phrase structure in the language. The two often are in accord, of course; where they conflict it is the abstract structural relationships provided by formal grammar that win out. In an in-preparation work, I argue that this is the norm for language. Yes, the fit between surface form and meaning is quite close. 
But yes, also, formal patterning has a 'life of its own', as is asserted by AS. Since we have also heard it claimed that the apparent nonlocalizability of syntax in the brain refutes AS, I'd like to address that question too. What precisely is implied by the claim that we are endowed with an innate UG module? Among other things, presumably that there are innate neural structures dedicated to this cognitive faculty. However, nothing whatever is entailed about the *location* of these neural structures or their degree of 'encapsulation' with respect to other neural structures. Perhaps they are all localized in one contiguous area of the brain. On the other hand, they might be distributed throughout the brain. It simply does not matter. Yet any number of critiques of AS have attempted to refute the idea of an innate UG when, in fact, they have done no more than refute a localist basis for it. One might object that Chomsky has invoked the image of the 'language organ' on a number of occasions, which has the effect of implying that the neural seat of UG must be localizable in some part of the brain. But it seems clear that his use of that expression is based on his hypothesis that it is determined by a genetic blueprint, not on its physical isolability. For example, he asserts that 'language is to be thought of on the *analogy* of a physical organ' (REFLECTIONS ON LANG, p. 59) and that 'we may *usefully* think of the [language faculty as] *analogous* to the heart or the visual system or the system of motor coordination and planning' (RULES & REPS, p. 39). So Chomsky clearly is thinking of language as something like an organ in a physiological, but not narrowly anatomical, sense. Steve Pinker has provided an interesting argument why the language faculty is *not* confined to one area of the brain. He notes that hips and hearts as 'organs that move stuff around in the physical world' have to have cohesive shapes. 
But the brain, as 'an organ of computation', needs only connectivity of neural microcircuitry to perform its specific tasks -- there is no reason that evolution would have favored each task being confined to one specific center. So, to conclude, AS is an empirical hypothesis and one which, I am sure, can be productively debated on Funknet. But it is important to focus on those questions that bear on its adequacy and to put aside irrelevant or tangential issues. --Fritz Newmeyer From jaske at ABACUS.BATES.EDU Thu Jan 9 22:30:03 1997 From: jaske at ABACUS.BATES.EDU (Jon Aske) Date: Thu, 9 Jan 1997 17:30:03 -0500 Subject: autonomous syntax Message-ID: I guess I should let Matthew Dryer answer this, but my impulsive nature prevents me from doing that. Frederick Newmeyer wrote: > > A brilliant demonstration to this effect (and hence support for AS) has > been provided by Matthew Dryer in an article in LANGUAGE. Dryer shows that > the underlying generalization governing the Greenbergian word order > correlations is not a semantic one (e.g. head-dependent relations or > whatever), but rather the *principal branching direction* of phrase > structure in the language. The two often are in accord, of course; where > they conflict it is the abstract structural relationships provided by > formal grammar that win out. In an in-preparation work, I argue that this > is the norm for language. Yes, the fit between surface form and meaning is > quite close. But yes, also, formal patterning has a 'life of its own', as > is asserted by AS. I really fail to see how Dryer's generalization in any way shows anything about the autonomy of grammar. 
The relative degree to which languages display a consistent branching direction in a number of centripetal constructions can indeed be accounted for by at least the following two facts: (1) diachronically: from the fact that certain types of complements, such as genitives, are often the source of other types of complements, such as relative clauses and adpositional constructions. (2) A 'relator-in-the-middle' iconic principle: there may be a preference for "relators", i.e. function morphemes expressing the relation between heads and their complements, such as adpositions, to be placed between the two elements related. Hence, postpositional phrases tend to precede the noun they complement and prepositional ones follow it. The former are left branching and the latter right branching. This *may* influence certain choices that speakers make diachronically, leading to certain structural preferences in the system of constructions of a language. If there were anything like autonomy in grammar and Dryer's principle were a formal, even innate principle (subject to parametric variation), I would expect languages to be much more consistent than they are. The fact is that the constructions of a language are not all cut from the same pattern at a synchronic level but, rather, are all more or less independent of each other (although they do seem to form a *system* of sorts). Many of these constructions are not even centripetal, and thus, the notion of branching is irrelevant in them. But the real question when it comes to autonomy vs. non-autonomy, is whether the constructions of a language can, or should, be described independently of the semantic and pragmatic meanings which they are used to express, and independently, for instance, of the iconic and universal principles, such as topic-comment or comment-topic, on which they are sometimes based. I don't think they can and I don't think they should. 
The simple reason for this is that I do not think that that is how humans learn or store the constructions of a language. Form is always stored and intimately connected to function, and that is how it should be described and analyzed. I hope I didn't forget anything. Jon -- Jon Aske jaske at abacus.bates.edu http://www.bates.edu/~jaske/ -- Zeinek bera nolako, besteak uste halako "Everyone believes that everyone else is like them." From nick at STL.RESEARCH.PANASONIC.COM Fri Jan 10 00:01:48 1997 From: nick at STL.RESEARCH.PANASONIC.COM (Nicholas Kibre) Date: Thu, 9 Jan 1997 16:01:48 -0800 Subject: autonomy, etc. Message-ID: Ultimately, it seems that the autonomous syntax and functionalist/cognitive positions are more the two ends of a continuum than strictly opposing viewpoints. Nearly everyone agrees that language is shaped both by innate cognitive mechanisms, at least partially specialized for linguistic function, and by the demands of usage; our only point of disagreement is how tightly the former constrains the range of possible systems, and how much regularity is due to the pressures of the latter. What is striking is that, although almost everyone in this field seems to have strong feelings about what precise point along this spectrum is optimal, fairly little research really seems to address this issue empirically. Neurology may eventually be able to answer this question, but it may be a long wait! Ultimately, unless the issue is addressed directly, no amount of discussion between different camps is likely to convince anyone to change their mind. Currently, different linguists use different types of explanations, I think largely out of preference. I think if we want to resolve things more concretely, we need to think more about what the implications of claiming that a certain regularity is innate or functionally motivated would be. This is not meant to claim that I know what these implications would be. Any thoughts? 
Nick Kibre Btw: Thanks to all who contributed to my email corpus! ---------o--- Nicholas Kibre /'nihkahlahs 'kayber/ / Research Linguist/Speech Programmer / Panasonic Speech Technology Laboratory __=========__ 805 687 0110 xt 230 | |_|_|_|_| | nick at stl.research.panasonic.com |_|_______|_| --o=o---o=o-- http://humanitas.ucsb.edu/depts/linguistics/grads.html#kibre From ellen at CENTRAL.CIS.UPENN.EDU Fri Jan 10 03:46:58 1997 From: ellen at CENTRAL.CIS.UPENN.EDU (Ellen F. Prince) Date: Thu, 9 Jan 1997 22:46:58 EST Subject: autonomous syntax In-Reply-To: Your message of "Thu, 09 Jan 1997 17:30:03 EST." <32D5716B.5322@abacus.bates.edu> Message-ID: jon aske wrote: >But the real question when it comes to autonomy vs. non-autonomy, is >whether the constructions of a language can, or should, be described >independently of the semantic and pragmatic meanings which they are used >to express, and independently, for instance, of the iconic and universal >principles, such as topic-comment or comment-topic, on which they are >sometimes based. > >I don't think they can and I don't think they should. The simple reason >for this is that I do not think that that is how humans learn or store >the constructions of a language. Form is always stored and intimately >connected to function and that is how it should be described and >analyzed. well, i guess it all depends on where you're looking. for quite a few years now i've been looking at cases of language contact where the discourse functions associated with a syntactic form in one language come to be associated with an 'analogous' syntactic form in a contact language (and where the analogy is statable in purely syntactic terms) and where the two forms in question may have originally had totally unrelated discourse functions. 
in fact, it is precisely by studying such cases that i have come to believe in autonomous syntax, since, if the form-function connection were permanent or driven by iconicity, i could simply not begin to explain the data. From bralich at HAWAII.EDU Fri Jan 10 06:34:15 1997 From: bralich at HAWAII.EDU (Philip A. Bralich, Ph.D.) Date: Thu, 9 Jan 1997 20:34:15 -1000 Subject: autonomous syntax Message-ID: At 05:46 PM 1/9/97 -1000, Ellen F. Prince wrote: >well, i guess it all depends on where you're looking. for quite a few >years now i've been looking at cases of language contact where the >discourse functions associated with a syntactic form in one language >come to be associated with an 'analogous' syntactic form in a contact >language (and where the analogy is statable in purely syntactic terms) >and where the two forms in question may have originally had totally >unrelated discourse functions. in fact, it is precisely by studying >such cases that i have come to believe in autonomous syntax, since, if >the form-function connection were permanent or driven by iconicity, i >could simply not begin to explain the data. This discussion of autonomous syntax has me somewhat baffled. For my thinking the discussion of whether or not syntax is autonomous is a little like asking if a skeleton is autonomous from the body it supports. Certainly, in some sense it is. But of course the body cannot survive without a skeleton and the skeleton cannot survive without the rest of the body. Thus, it is not at all autonomous. Given this rather ordinary observation, it strikes me as rather odd that there should be any discussion at all of the autonomy of syntax. It is just as autonomous to language as a skeleton is to the body. Taking either side of this issue is just missing the point and missing the reality of what language is. Phil Bralich Philip A. Bralich, Ph.D. 
President and CEO Ergo Linguistic Technologies 2800 Woodlawn Drive, Suite 175 Honolulu, HI 96822 Tel: (808)539-3920 Fax: (808)539-3924 From cumming at HUMANITAS.UCSB.EDU Fri Jan 10 07:20:01 1997 From: cumming at HUMANITAS.UCSB.EDU (Susanna Cumming) Date: Thu, 9 Jan 1997 23:20:01 -0800 Subject: No subject Message-ID: I'd like to reiterate here my original point, which seems to have gotten lost in the shuffle: is there any point in attempting sentence-level parsing? The issue is not so much the separation of syntactic analysis from semantic analysis (though I am certainly not in favor of that either, and I fully agree with Matthew's points on that topic), but the separation of linguistic analysis at any level from goal-driven, multi-functional, socially-enacted communicative context. In my view this is what crucially separates functionalists on the one hand from cognitivists on the other, or if you prefer discourse functionalists from cognitive functionalists: discourse folks believe that language removed from its communicative setting is sufficiently different from "real" communicative language that there's not much point in studying it, because you don't know what you've learned about real language when you're finished. If you take this point seriously there isn't much difference between the cognitivists and the formalists, since they are both (with some noble exceptions) content to base their analyses on "unnatural" data. In other words it's not "autonomy" that's the main problem, it's the "competence-performance" dichotomy. Susanna From dryer at ACSU.BUFFALO.EDU Fri Jan 10 08:17:41 1997 From: dryer at ACSU.BUFFALO.EDU (Matthew S Dryer) Date: Fri, 10 Jan 1997 03:17:41 -0500 Subject: Reply to Fritz Newmeyer Message-ID: Different people use the expression "autonomous syntax" in different ways, and these differences are often a source of confusion, as I think they have been in the current discussion. 
This medium is not very suitable for straightening these things out; but, very briefly, the expression is used for at least the following views: (1) people are born with innate syntactic knowledge that drives acquisition and explains universals (2) one can explain syntactic facts in terms of syntactic notions (3) syntax/grammar exists (although that too can mean different things) Arguments for the autonomy of syntax (such as some offered in print by Fritz Newmeyer) often involve no more than arguments for (3). For me (and I assume that this was what both George and Tom meant), rejecting autonomy of syntax involves rejecting (1) and (2). As for Fritz' claim that my evidence that the word order correlations involve branching direction rather than head position provides an argument for the autonomy of syntax, I would argue the opposite. Those who assume that the correlations reflect head position generally treat consistent head position as an explanation in itself. For example, a common position among formal linguists most closely aligned with Chomsky is that there is some sort of head-position parameter that is part of innate knowledge. Conversely, I have suggested that the tendency towards consistent branching direction reflects (in addition to grammaticization factors) parsing problems associated with mixed branching, i.e. "performance" problems extracting the intended meaning. If this view is correct, then the explanation lies in the nature of human working memory, and thus is inconsistent with notions (1) and (2) of autonomous syntax. Matthew Dryer From ellen at CENTRAL.CIS.UPENN.EDU Fri Jan 10 15:04:22 1997 From: ellen at CENTRAL.CIS.UPENN.EDU (Ellen F. Prince) Date: Fri, 10 Jan 1997 10:04:22 EST Subject: autonomous syntax In-Reply-To: Your message of "Thu, 09 Jan 1997 20:34:15 -1000." <2.2.16.19970108203621.4b674888@pop-server.hawaii.edu> Message-ID: "Philip A. Bralich, Ph.D." writes: >This discussion of autonomous syntax has me somewhat baffled. 
For my >thinking the discussion of whether or not syntax is autonomous is a little >like asking if >a skeleton is autonomous from the body it supports. Certainly, in some sense >it is. But of course the body cannot survive without a skeleton and the >skeleton cannot survive without the rest of the body. Thus, it is not at >all autonomous. Given this rather ordinary observation, it strikes me as >rather odd that there should be any discussion at all of the autonomy of >syntax. It is just as autonomous to language as a skeleton is to the body. ^^^^^^^^ >Taking either side of this issue is just missing the point and missing the >reality of what language is. gee, well now THAT really clears things up, doesn't it? ;) uh, to my knowledge, no one has ever claimed that syntax is autonomous from *language*. shame, because it would be a claim that everyone from san diego to cambridge could agree to reject... :) From ellen at CENTRAL.CIS.UPENN.EDU Fri Jan 10 15:17:09 1997 From: ellen at CENTRAL.CIS.UPENN.EDU (Ellen F. Prince) Date: Fri, 10 Jan 1997 10:17:09 EST Subject: No subject In-Reply-To: Your message of "Thu, 09 Jan 1997 23:20:01 PST." Message-ID: Susanna Cumming writes: >In my view this is what crucially separates functionalists on the one hand >from cognitivists on the other, or if you prefer discourse functionalists >from cognitive functionalists: discourse folks believe that language >removed from its communicative setting is sufficiently different from >"real" communicative language that there's not much point in studying it, >because you don't know what you've learned about real language when you're >finished. If you take this point seriously there isn't much difference >between the cognitivists and the formalists, since they are both (with >some noble exceptions) content to base their analyses on "unnatural" data. > >In other words it's not "autonomy" that's the main problem, it's the >"competence-performance" dichotomy. 
i think you're confusing what one takes to be the data and what one's ultimate theory looks like. there are those (incl me) that base their analyses on naturally-occurring data but that may wind up concluding that syntax is autonomous from meaning. in fact, i'd add that, if the choice of type of data locks one in to a particular conclusion, the actual research would seem pretty pointless... From lmenn at CLIPR.COLORADO.EDU Fri Jan 10 15:22:37 1997 From: lmenn at CLIPR.COLORADO.EDU (Lise Menn, Linguistics, CU Boulder) Date: Fri, 10 Jan 1997 08:22:37 -0700 Subject: your mail In-Reply-To: Message-ID: I'd like to take issue with the idea that syntax or lexicon can be studied only in natural contexts as much as with the idea that it can be properly studied without looking at those contexts. No biologist would argue either that test-tube studies are useless or that field studies are useless; the problems to be solved are so difficult that on the one hand, artificially-controlled situations are needed to get a handle on what the basic processes might be, but on the other, field studies are needed to decide which of the processes that take place in the lab are actually operating in the real world. And in between the test-tube and the field are all sorts of intermediate experimental levels. In linguistics we similarly need a full spectrum of approaches; for the last several years, i've been working with colleagues on one type of experimental functional linguistics, using descriptions of minimal-pair sets of pictures to look at effects of empathy and inferrability of information (posters at LSA 1995 and 1997, paper in press, Brain and Language); Russ Tomlin has a more-controlled experimental approach with his fish videos. In the other direction, less-controlled but by the same token more natural, is the major 'Frog-story' work on narratives (Berman & Slobin), using a pictured story, and of course Chafe's Pear Stories. 
Lise Menn From jaske at ABACUS.BATES.EDU Fri Jan 10 16:06:59 1997 From: jaske at ABACUS.BATES.EDU (Jon Aske) Date: Fri, 10 Jan 1997 11:06:59 -0500 Subject: autonomous syntax Message-ID: Ellen F. Prince wrote: > > well, i guess it all depends on where you're looking. for quite a few > years now i've been looking at cases of language contact where the > discourse functions associated with a syntactic form in one language > come to be associated with an 'analogous' syntactic form in a contact > language (and where the analogy is statable in purely syntactic terms) > and where the two forms in question may have originally had totally > unrelated discourse functions. in fact, it is precisely by studying > such cases that i have come to believe in autonomous syntax, since, if > the form-function connection were permanent or driven by iconicity, i > could simply not begin to explain the data. That is very interesting and I would like to know more about your specific cases, but my experience with language contact and grammatical change, though it sounds similar to yours, has led me to very different conclusions. I have found that Basque seems to be increasing the number of clauses with postverbal elements, and the way it seems to be happening is that minor, marked (and 'optional') constructions which, for a variety of reasons, place the verb in rheme-initial position, and thus superficially look like the unmarked constructions of Romance languages, are being used more and more by those speakers which are most "under the influence" of a Romance language. The 'overuse' of these constructions results in a change in the contexts in which these constructions are used, i.e. in the pragmatics of those constructions. In other words, the constructions are becoming relatively less marked than we would expect them to be. This, of course is the well known phenomenon of convergence, a type of transfer that is rather common in language contact situations. 
Thus syntactic change is really pragmatic change in the constructions. As I see it, these pragmatics are not grafted onto otherwise formal constructions as an afterthought, but are an intrinsic part of them. The constructions in question do not make sense without reference to functional categories and the ordering relations are extremely iconic. Anyway, this is probably more than what anybody wanted to know, but since I don't have anyone to talk to about my work in my exile, I thought I'd share it with you. (My LSA 97 paper on this very topic can be seen at http://www.bates.edu/~jaske/askeling.html). Best wishes, Jon -- Jon Aske jaske at abacus.bates.edu http://www.bates.edu/~jaske/ -- Balantza duen aldera erortzen da arbola "The tree falls towards the side it's leaning." From dever at VERB.LINGUIST.PITT.EDU Fri Jan 10 16:26:42 1997 From: dever at VERB.LINGUIST.PITT.EDU (Daniel L. Everett) Date: Fri, 10 Jan 1997 11:26:42 -0500 Subject: Autonomous Syntax & Research In-Reply-To: <01IDWK5CL2KI935AF8@OREGON.UOREGON.EDU> Message-ID: Folks, I agree with Fritz on the idea of autonomous syntax, which won't surprise you, but of course it is a hypothesis, subject to empirical testing like any other. It has stood the test of time pretty well, though. But I am writing on something else, namely, what autonomous syntax has to do with research methodology, which is really how I interpret a lot of the comments on looking at texts. In my opinion the view one holds on the autonomous syntax thesis ought, in most cases at least, to have very little impact on methodology. Data should come from natural text *and* isolated sentences. In my fieldwork, I rely crucially on both. Usually, I take sentences from natural texts and study them in isolation (i.e. creating paradigms based on them and checking them with a variety of native speakers) after I have analyzed their role in the text from which they are extracted. But occasionally I need to look for aspectual (etc.) 
combinations that are rare or nonexistent (and whether or not they are nonexistent is often what I am trying to figure out). In these cases I put on what might look like little stage plays, working with various informants (I usually only do fieldwork in monolingual situations; having a bilingual informant is a luxury I have rarely had) to get at the examples (or not) that I am looking for. It is true that a lot of work in formal linguistics has been methodologically inferior to work done in functional linguistics, so if I were a functionalist, I might count that against formal approaches. But bad practice does not make bad theoretical assumptions, just, perhaps, bad theoretical results. There is no justification for this whatsoever, and as a formal linguist, I am sorry that we have lagged behind (although notable exceptions come to mind, such as Ken Hale). -- DLE P.S. Carson Schutze has recently published a book on methodology (basically for formal linguists), which hopefully will contribute to change in formal theoretic work. ****************************** ****************************** Dan Everett Department of Linguistics University of Pittsburgh 2816 CL Pittsburgh, PA 15260 Phone: 412-624-8101; Fax: 412-624-6130 http://www.linguistics.pitt.edu/~dever From ellen at CENTRAL.CIS.UPENN.EDU Fri Jan 10 16:45:37 1997 From: ellen at CENTRAL.CIS.UPENN.EDU (Ellen F. Prince) Date: Fri, 10 Jan 1997 11:45:37 EST Subject: autonomous syntax In-Reply-To: Your message of "Fri, 10 Jan 1997 11:06:59 EST." <32D66923.ED1@abacus.bates.edu> Message-ID: Jon Aske wrote: >That is very interesting and I would like to know more about your >specific cases, but my experience with language contact and grammatical >change, though it sounds similar to yours, has led me to very different >conclusions. > >I have found that Basque seems to be increasing the number of clauses >with postverbal elements, and the way it seems to be happening is that >minor, marked (and 'optional') 
constructions which, for a variety of >reasons, place the verb in rheme-initial position, and thus >superficially look like the unmarked constructions of Romance languages, >are being used more and more by those speakers which are most "under the >influence" of a Romance language. > >The "overuse" of these constructions results in a change in the contexts >in which these constructions are used, i.e. in the pragmatics of those >constructions. In other words, the constructions are becoming >relatively less marked than we would expect them to be. This, of course >is the well known phenomenon of convergence, a type of transfer that is >rather common in language contact situations. Thus syntactic change is >really pragmatic change in the constructions. As I see it, these >pragmatics are not drafted onto otherwise formal constructions as an >afterthought, but are an intrinsic part of them. The constructions in >question do not make sense without reference to functional categories >and the ordering relations are extremely iconic. wow, THAT is very interesting to me! i will d/l your paper and read it with interest -- but, from my understanding of what you say here, it sounds like what i've been finding: a particular form in one language (here the basque form with the verb in 'rheme-initial' position -- which i'm assuming is describable syntactically, without reference to 'rheme'?) is 'matched up' with an analogous form in another language (here canonical order in french), with the discourse function of the 'matchee' coming to be associated with the original form (here the LACK of discourse function of french canonical order bumping out the df of the basque form). i would venture that it is this lack of df that accounts for the increased frequency, not vice versa -- after all, a substantive discourse function constrains what contexts a form may felicitously occur in; an 'unmarked' form can occur in any context and should thus have a higher frequency.
(this then would provide a different database to the children acquiring the language in terms of frequency, which may then result in their hypothesizing a somewhat different grammar.) if that is a reasonable description of what you've found, then i'd think you'd agree that form and function are NOT inextricably combined... otherwise how could a form ever change its function -- and how could analogous forms with different functions ever be seen as analogous by speakers? thanks for the post and i look forward to reading your paper! From fjn at U.WASHINGTON.EDU Fri Jan 10 17:04:27 1997 From: fjn at U.WASHINGTON.EDU (Frederick Newmeyer) Date: Fri, 10 Jan 1997 09:04:27 -0800 Subject: Reply to Fritz Newmeyer In-Reply-To: Message-ID: I find myself in the awkward and undesirable position of telling Matthew Dryer that he doesn't appreciate the implications of his own ground-breaking work. The sensitivity of speakers to the abstract (formal, structural) notion 'Phrase structure branching-direction', a notion that doesn't (as Dryer shows) correlate perfectly with semantic notions such as 'head-dependent' supports the idea that speakers mentally represent abstract phrase-structure relations INDEPENDENTLY of their semantic and functional 'implementations'. That is, it supports the autonomy of syntax. Matthew writes: > As for Fritz' claim that my evidence that the word order > correlations involve branching direction rather than head position > provides an argument for the autonomy of syntax, I would argue the > opposite. Those who assume that the correlations reflect head > position generally treat consistent head position as an explanation > in itself. For example, a common position among formal linguists > most closely aligned with Chomsky is that there is some sort of > head-position parameter that is part of innate knowledge. Yeah, but those people are wrong. As Dryer has shown, it's branching direction, not head position, that is relevant. 
My view is that branching direction is a MORE hard core 'formal' notion than head position, the determination of which always seemed to involve semantic considerations. And note that I never said anything about 'innateness' in this context. The question of the autonomy of syntax and that of innate syntactic principles have to be kept separate. The 'autonomy of syntax' is a fact about mature grammars. What the language learner draws upon to construct that grammar is a separate issue. So while innate syntactic principles entail (I think) autonomous syntax, autonomous syntax does not entail innate syntactic principles. Matthew goes on to write: > Conversely, I have suggested that the tendency towards consistent > branching direction reflects (in addition to grammaticization > factors) parsing problems associated with mixed branching, i.e. > "performance" problems extracting the intended meaning. If this > view is correct, then the explanation lies in the nature of human > working memory, and thus is inconsistent with notions (1) and (2) of > autonomous syntax. I agree that there is a functional explanation (parsing-based) for why speakers prefer consistent branching direction. That doesn't challenge the autonomy of syntax, since AS is a claim about what speakers mentally represent, not what they 'prefer'. In other words, functional explanation is perfectly compatible with AS. Let me give a chess analogy. Nobody could deny that the rules of chess (pieces, possible moves, etc.) form an autonomous system. But functional factors could have (indeed, surely did) enter into the design of the system. A ruling from the International Chess Authority could change the rules (resulting in a different, but still autonomous, system). Furthermore, when playing a game we have a choice as to which pieces to play, which moves to make. Syntax, then, is autonomous in very much the same way that chess is autonomous. We mentally represent an autonomous system.
Why that system has the properties that it has is another question. One answer is, as Matthew points out, pressure from the parser. Another is probably pressure for iconic representations (see my paper in LANGUAGE of a few years ago.) Another may be innate syntactic principles. By the way, the most compelling, in my view, parsing explanation for Dryer's generalization is Jack Hawkins' principle of 'Early Immediate Constituents' (see his book A PERFORMANCE THEORY OF ORDER AND CONSTITUENCY). Hawkins is absolutely explicit that parsing explanations are compatible with autonomy; indeed, he sees the former as a partial explanation for the latter. Hawkins goes on to write: "More generally, the very autonomy of syntax may owe its existence to the need to make semantics processable for both the speaker and the hearer, and it remains to be seen whether any precision can be given to the formula: semantics + processing = syntax." (Hawkins 1994: 439) While his formula (as he would I am sure agree) is too simple, I basically agree with him. --fritz newmeyer From dquesada at CHASS.UTORONTO.CA Fri Jan 10 18:02:23 1997 From: dquesada at CHASS.UTORONTO.CA (Diego Quesada) Date: Fri, 10 Jan 1997 13:02:23 -0500 Subject: Chess and Syntax In-Reply-To: Message-ID: On Fri, 10 Jan 1997, Frederick Newmeyer wrote: > Nobody could > deny that the rules of chess (pieces, possible moves, etc.) form an > autonomous system. But functional factors could have (indeed, surely did) > enter into the design of the system. A ruling from the International Chess > Authority could change the rules (resulting in a different, but still > autonomous, system). Furthermore, when playing a game we have a choice as > to which pieces to play, which moves to make. > > Syntax, then, is autonomous in very much the same way that chess is > autonomous. We mentally represent an autonomous system. The analogy is not that felicitous.
Fritz overlooks a crucial aspect of these two putatively autonomous games [it's revealing that an analogy was drawn from a game; indeed formal linguistics sometimes seems no more than an intellectual exercise for the sake of entertainment, but that's another disk], namely that in order for every piece to move, and how to move it, one needs to know what it is; in linguistic terms this means that we need to know what the MEANING of the combining element is; otherwise one could have the horse move diagonally, the tower jump in all directions and so on, just as constituents could be shifted around irrespective of what they mean. As G. Lakoff (if I remember correctly) put it and as -I assume- all Funknetters think, as long as any combination is determined by semantic content there can be no autonomy. As for innateness (of either the so-called UG, or autonomous syntax), I don't think we are explaining much by clinging to that, which is, at its best, a truism: it all boils down to saying that language is exclusive of humans. That might have been perceived as a revolutionary time bomb back in 1957 in Skinner-influenced linguistics. Nowadays it is as trivial a fact as saying that all languages have vowels. It happens that many of the so-called "hypotheses" in generative grammar turn out, on a chronological account, to have been "patches" to objections; for instance, the so-called mentalism and innateness. Givon (1984: 7) has already pointed this out when he says that "Whether the particular mix and its coherence or lack thereof were the product of design or accident is still a matter of debate". I guess he was trying to be diplomatic... J. Diego Quesada University of Toronto From chafe at HUMANITAS.UCSB.EDU Fri Jan 10 18:38:04 1997 From: chafe at HUMANITAS.UCSB.EDU (Wallace Chafe) Date: Fri, 10 Jan 1997 10:38:04 -0800 Subject: Another view Message-ID: Try looking at it this way.
A language is fundamentally a way of associating meanings with sounds (and/or some other symbolic medium). Meanings (let's not get hung up on the term) are mixtures of cognitive, emotive, interactive, and intratextual information, covering all facets of human experience. A language imposes on experience a huge, complicated set of meaning elements and ways of combining them, just as it imposes sound elements and ways of combining them. Much of this is language-particular, but some is universal for various reasons, only one of which may be innateness. One might be able to imagine a language in which meanings were associated with sounds in a direct, unmediated way, but no real language is like that, and the reason is that languages change. In all languages grammatic(al)ization, lexicalization, and the analogic extension of patterns have produced situations in which functionally active meanings are often symbolized in the first instance by partially or wholly fossilized formerly functional elements and combinations, whose own associations with sounds may nevertheless remain intact. (I say partially fossilized because sometimes there is leakage back into at least semiactive consciousness, as with awareness of the literal meanings of idioms and metaphors.) One linguist may look at this situation and say, "Aha, there's a lot here that is arbitrary and nonfunctional. Hurrah for autonomous syntax!" Another linguist may look at the same situation and say, "There's a lot here that is motivated, and when it seems not to be, and when we're lucky enough to know something about how it got to be this way, we can see that there once was a motivation that has now been obscured by grammaticalization etc."
My own opinion is that we ought to be looking for functional motivations wherever we can find them, and that an autonomous syntax based on elements that have never had anything but a formal, otherwise unmotivated status provides nothing more than a way of feeling happy about a failure to probe toward a deeper understanding, including in many cases a historical one. There seem to be three major ways in which spinners of theories connect with reality. One is through observing how people actually talk, one is through doing experiments, and one is through inventing isolated sentences and judging their grammaticality. Each has its advantages and disadvantages, and improved understanding ought to come from a judicious mixture of the three (as just emphasized by Lise Menn), though unfortunately we are all biased by training and experience to do mainly one, and sometimes even sneer at the others. It may have relevance to this debate that there has indeed been a correlation between the autonomous syntax approach and the use of grammaticality judgments. It looks, too, as if those who observe how people actually talk tend on the whole to be the least enchanted by the autonomous approach. Dan Everett's remarks on this score are particularly welcome, however. As for parsing, where this discussion began, it's useful in illuminating some of the patterns that exist in the intermediate area between meanings and sounds. But whether those patterns form an intact skeleton that can be studied apart from the meat attached to it has always seemed to me, at least, quite dubious. In any case, if a machine were ever truly to understand something that was said to it, its understanding would have to be in cognitive, affective, and social terms--in terms of all facets of human experience--which lie quite beyond anything presently available in the computer world. 
Wally Chafe From jaske at ABACUS.BATES.EDU Fri Jan 10 18:57:53 1997 From: jaske at ABACUS.BATES.EDU (Jon Aske) Date: Fri, 10 Jan 1997 13:57:53 -0500 Subject: Reply to Fritz Newmeyer Message-ID: First of all, I agree with Matthew's interpretation of his facts, and disagree with Fritz's reinterpretation. But I don't think either one will convince the other. About parsing and branching preferences, I think Jack Hawkins' study is somewhat artificial and doesn't reflect realistically what actual language in use is like, and that it ignores many other aspects of language, such as the use of intonation to disambiguate structures and the average length of clauses, which is actually quite short in 99.99% of clauses in actual speech. If branching was such an important motivation in determining the form of constructions, we would expect a majority of languages, if not all, to be right branching for example. I think a major problem here involves what we take syntax/grammar to be, not just whether it is autonomous from other things. In my book, grammar is a set of relatively independent, and relatively interdependent constructions (ie a "leaky system" of constructions). Semantics, pragmatics, and processing and other cognitive constraints, have a lot to do with the form of those constructions, particularly how they come about diachronically. I am not sure, however, that all of these diachronic motivations are equally relevant synchronically, that is in the interpretation that speakers make of those constructions. 
A major problem with autonomous approaches, as I see it, is that (1) they attempt to explain synchronically (i.e. as part of the internalized grammar) formal correlations which are not synchronically real (such as branching direction, active and passive constructions, order of affixes, etc.), but which stem from diachronic sources, thus "recapitulating diachrony", and (2) that they do not attempt to explain actual iconic correlations between form and function as found in many constructions, particularly speech act constructions. The pragmatic motivations which are often grammaticalized into constructions do not simply vanish once they have left their imprint on those constructions, as Fritz has argued elsewhere, but I think they are a very important part of how speakers interpret, store, and use those constructions. Anyway, that's more than enough for today. I'll be delighted to hear any responses anyone may have to these thoughts. Jon Frederick Newmeyer wrote: > > and it remains to be seen whether any precision can be given to the > formula: semantics + processing = syntax." (Hawkins 1994: 439) > > While his formula (as he would I am sure agree) is too simple, I basically > agree with him. -- Jon Aske jaske at abacus.bates.edu http://www.bates.edu/~jaske/ -- Balantza duen aldera erortzen da arbola "The tree falls towards the side it's leaning." From bralich at HAWAII.EDU Fri Jan 10 20:08:47 1997 From: bralich at HAWAII.EDU (Philip A. Bralich, Ph.D.) Date: Fri, 10 Jan 1997 10:08:47 -1000 Subject: Another view Message-ID: At 08:38 AM 1/10/97 -1000, Wallace Chafe wrote: > >As for parsing, where this discussion began, it's useful in illuminating >some of the patterns that exist in the intermediate area between >meanings and sounds. But whether those patterns form an intact >skeleton that can be studied apart from the meat attached to it has >always seemed to me, at least, quite dubious.
I am sure this is true for medicine as well, studying a skeleton removed from a body tells you very little about its interaction with the body, but we are missing a pretty significant area of study if we do not have orthopedics. The usefulness and meaningfulness of a skeleton that has been removed from its body is similar to the usefulness and meaningfulness of a syntax that has been removed from its larger functional setting. We can no more get rid of autonomous syntax or functional syntax than we can wholistic medicine or orthopedics. There is simply no debate here. We as linguists cannot ignore either one of these realities whether or not we choose to specialize in syntax (orthopedics) or wholistic medicine (functional grammar). >In any case, if a machine >were ever truly to understand something that was said to it, its >understanding would have to be in cognitive, affective, and social >terms--in terms of all facets of human experience--which lie quite >beyond anything presently available in the computer world. But from the point of view of machines understanding language, we are 50 to 100 years away from that. Let's not insist on jet engines when we still haven't worked the bugs out of hot air balloons. For now, we can get computers to respond to significantly more language by coupling a completely worked-out syntactic analysis with the orthography. This will take us a step toward machines' understanding. But first, let's do a full and proper analysis of small and medium sentences before we proceed to other areas. Everyone looking at parsing seems to want to begin with machines that are as fully capable of language as are humans. This is a mistake. Let's take the state of the art as it is and begin there. Let's not insist that medicine cure AIDS before we work on cuts and scratches and let's not insist on full understanding before we work on parsers. Let's also not be fooled by the state of the art.
Insist on clear demonstrations of what is and is not possible with a parser, and then let's see where we can go from there. Phil Bralich Philip A. Bralich, Ph.D. President and CEO Ergo Linguistic Technologies 2800 Woodlawn Drive, Suite 175 Honolulu, HI 96822 Tel: (808)539-3920 Fax: (808)539-3924 From TGIVON at OREGON.UOREGON.EDU Fri Jan 10 20:24:02 1997 From: TGIVON at OREGON.UOREGON.EDU (Tom Givon) Date: Fri, 10 Jan 1997 12:24:02 -0800 Subject: back and forth Message-ID: Seems to me I hear another argument between the partly deaf. Everybody concedes that a correlation exists between grammatical structures and semantic and/or pragmatic functions. But two extremist groups seem to draw rather stark conclusions from the fact that the correlation is less-than-fully-perfect. The "autonomy" people seem to reason: "If less than 100% perfect correlation, therefore no correlation (i.e. 'liberated' structure)" The grammar-denial folks, all of them card-carrying functionalists, seem to reason: "If less than 100% generative and autonomous, therefore 100% functionally motivated" Now notice that both use a very similar Platonic reasoning: STRUCTURALISTS: "Something is functionally motivated only if it is 100% so" ANTI-GRAMMFUNK: "Something cannot have independently-manifested existence unless it is 100% so" This is of course a silly argument, and that's why people keep repeating the same reductionist extreme assertions again and again and again. The facts suggest that neither extreme position could be right. Grammar is heavily motivated by semantic/communicative functions. But -- because of grammaticalization and what Haiman calls 'ritualization' -- it never is 100% so. It acquires a CERTAIN DEGREE of its own independent life. This, however, does not mean 100% autonomy. And by the way, along the diachronic cycle of grammaticalization, you can see a construction changing its degree of iconicity.
So that overall in the aggregate of a synchronic grammar at any diachronic point, you can show constructions that are much better correlated to their functions and some that are much less so. To those of you who know something about biology (and sorry Fritz and Ellen, I don't count you among those...), this story of course looks rather familiar. It also looks familiar in respect to another topic Ellen raised (without calling it by its rightful name) -- that of cross-language typological variation. Yes, we do see the same communicative function coded in different languages by different grammatical constructions. That is because there is not only one way to perform a function, there are alternative choices. Think about the function of AMBULATION. There are four major ways bio-organisms seem to perform it: Walking, slithering, flying and swimming. And among each of those there are minor sub-types. And each one of those is associated with its own -- highly adapted, rather specific -- correlated structures. Now, does that mean that structure is independent from function? Get real. Bloomfield and his cohorts thought that cross-linguistic typological variability suggested structures were 100% unconstrained by meaning. But most serious typologists are keenly aware of how CONSTRAINED is the range of syntactic types (major ones) that can perform the same communicative functions. In REL-clauses, for example, I have not been able to find more than 5-6 major types. In passivization (impersonal agent) maybe 4-5, etc. etc. This extreme paucity should be appreciated against the vast number of mathematically-possible types. The idea that 'universality' means 100% universality is another version of reductionist Platonic thinking. Species, biological populations, are defined by evolutionary biologists (see Futuyma's text, e.g.) as A CURVE OF DISTRIBUTION OF VARIANTS. These guys will tell you that VARIATION GUARANTEES EVOLUTION.
So the pernicious idea that somehow universality demands 100% uniformity is really a bit primitive. As is the idea that because there is more than one way of skinning a cat, ways of skinning are not closely dependent upon the task at hand -- skinning a cat. But of course, Aristotle in his brilliant rejection of structuralism in Biology already said all that. One sometimes wonders why after 2300 years it seems nobody is listening. Y'all be good y'hear. TG From fjn at U.WASHINGTON.EDU Fri Jan 10 21:47:24 1997 From: fjn at U.WASHINGTON.EDU (F. Newmeyer) Date: Fri, 10 Jan 1997 13:47:24 -0800 Subject: Chess and Syntax In-Reply-To: Your message of Fri, 10 Jan 1997 13:02:23 -0500 (EST) Message-ID: Diego Quesada writes: > The analogy is not that felicitous. Fritz overlooks a > crucial aspect of these two putatively autonomous games > [it's revealing that an analogy was drawn from a game; > indeed formal linguistics sometimes seems no more than > an intelectual excercise for the sake of entertainment, > but that's another disk], namely that in order for every > piece to move, and how to move it, one needs to know > what is it, in linguistic terms this means that we need > to know what the MEANING of the combining element is; > otherwise one could have the horse move diagonaly, the > tower jump in all directions and so on, just as > constitutents could be shifted around irrespective of > what they mean. As G. Lakoff (if I remember correctly) > put it and as -I assume- all Funknetters think, as long > as any combination is determined by semantic content > there can be no autonomy. Saying that the 'meaning' of a rook is the ability to move in a straight line harkens back to the crudest use / instrumentalist theories of meaning that were rejected by virtually all linguists and philosophers of language decades ago and are *surely* rejected by 'cognitive linguists'. 
It reminds me of things that people used to say long ago like 'the meaning of stops in German is to devoice finally' or 'the meaning of the English auxiliary is to front in questions'. So, as far as I can see, my chess analogy still holds (at the level of discussion). --fritz newmeyer From lmenn at CLIPR.COLORADO.EDU Fri Jan 10 21:32:57 1997 From: lmenn at CLIPR.COLORADO.EDU (Lise Menn, Linguistics, CU Boulder) Date: Fri, 10 Jan 1997 14:32:57 -0700 Subject: autonomous syntax In-Reply-To: <32D66923.ED1@abacus.bates.edu> Message-ID: Ann Peters gave me a very nice analogy for the diachronic relationship between form and function: it's like two legs loosely hobbled together - they have a certain amount of independence, so can go off in separate directions, but not very far before one pulls on the other. Lise Menn From jaske at ABACUS.BATES.EDU Fri Jan 10 23:12:31 1997 From: jaske at ABACUS.BATES.EDU (Jon Aske) Date: Fri, 10 Jan 1997 18:12:31 -0500 Subject: back and forth Message-ID: I think Tom is right on target. All this reminds me of an interesting quote in a recent interview with Ray Jackendoff, which suggests that the pursuit of autonomy was not really inevitable. It resulted really from not being able to come up with a way of making the form-function connection work right. He said: "if you look back at Syntactic Structures, Chomsky said, "Semantics is semi-systematically connected with syntax - systematically enough that we want to account for it, but not systematically enough that we can use it as a key to determine how the syntax works. We have to do the syntax autonomously." His program was to show we have to (and can) do the syntax autonomously. And he never really worried about a systematic connection to semantics. I think it was really Katz and Postal who forced him to it"" (Ray Jackendoff, in conversation with John Goldsmith; in Huck and Goldsmith 1995:98-99). 
-- Jon Aske jaske at abacus.bates.edu http://www.bates.edu/~jaske/ -- Balantza duen aldera erortzen da arbola "The tree falls towards the side it's leaning." From dquesada at CHASS.UTORONTO.CA Sat Jan 11 01:08:54 1997 From: dquesada at CHASS.UTORONTO.CA (Diego Quesada) Date: Fri, 10 Jan 1997 20:08:54 -0500 Subject: Chess and Syntax In-Reply-To: Message-ID: On Fri, 10 Jan 1997, F. Newmeyer wrote: > Saying that the 'meaning' of a rook is the ability to move in a straight line > harkens back to the crudest use / instrumentalist theories of meaning that were > rejected by virtually all linguists and philosophers of language decades ago and > are *surely* rejected by 'cognitive linguists'. It reminds me of things that > people used to say long ago like 'the meaning of stops in German is to devoice > finally' or 'the meaning of the English auxiliary is to front in questions'. Indeed, but, who is saying that? I reiterate that the meaning of a constituent DETERMINES (in the strong version) or CONDITIONS (in the weak version) both its combinatorial properties and the functions it can perform in a language L. We thus come back to the present where, after decades of mechanicism, the essentials of language are given its deserved place. > So, as far as I can see, my chess analogy still holds (at the level of > discussion). Ubi supra. Diego From pesetsk at MIT.EDU Sat Jan 11 03:11:41 1997 From: pesetsk at MIT.EDU (David Pesetsky) Date: Fri, 10 Jan 1997 22:11:41 -0500 Subject: back and forth Message-ID: At 12:24 PM -0800 1/10/97, Tom Givon wrote: > Seems to me I hear another argument between the partly deaf. Everybody > concede that a correlation exists between grammatical structures and > semantic and/or pragmatic functions. But two extremist groups seem to > draw rather stark conclusions from the fact that the correlation is > less-than-fully-perfect. The "autonomy" people seem to reason: > "If less than 100% perfect correlation, > therefore no correlation (i.e. 
'liberated' structure)" Do "autonomy people" really reason like this? I don't think so. In fact, I think it's just the opposite. Isn't most of the research by "autonomy people" actually devoted to the hunch that there is a nearly *perfect* 100% correlation between grammatical structure and semantic/pragmatic function -- and that "less than 100%" correlations are actually 100% correlations obscured by other factors? - What, after all, is the functional category boom about, if not a (possibly overenthusiastic) attempt to investigate a 100% correlation hypothesis for properties like tense, agreement, topic, focus, and so on? - What was the motivation for the hypothesis of "covert movement" (LF movement), if not the hunch that the correlation between grammatical and semantic/pragmatic structure is tighter than it appears? - Why all the effort expended on the unaccusative hypothesis, the Universal Alignment Hypothesis, and Baker's UTAH, if not in service of the hypothesis that non-correlations between semantic function and grammatical form are only superficial? I think one might make the case that formalist "autonomy people" are among the most faithful functionalists. What divides linguists in this debate is not, I suspect, their faith in robust form-function correlations, but rather their hunches about the repertoire of factors that *obscure* these correlations. That's where many of us really do disagree with each other. -David Pesetsky ************************************************************************* Prof. David Pesetsky, Dept. 
of Linguistics and Philosophy 20D-219 MIT, Cambridge, MA 02139 USA (617) 253-0957 office (617) 253-5017 fax http://web.mit.edu/linguistics/www/pesetsky.html From M.Durie at LINGUISTICS.UNIMELB.EDU.AU Sat Jan 11 07:02:57 1997 From: M.Durie at LINGUISTICS.UNIMELB.EDU.AU (Mark Durie) Date: Sat, 11 Jan 1997 18:02:57 +1100 Subject: back and forth In-Reply-To: <9701110311.AA13894@MIT.MIT.EDU> Message-ID: Like others, I cannot resist throwing in my two bits worth. 1. The 'form-meaning' terminology has become extremely confusing, because many 'formal' approaches treat (either explicitly or implicitly) at least some kinds of meaning as a kind of form (e.g. Jackendoff's conceptual structure, HPSG's treatment of meaning, the examples Pesetsky refers to in his recent posting, etc. etc.) in which semantic elements become yet another part of the system of grammar for which the linguist is seeking to explicate principles of 'well-formed-ness'. 2. In a related vein, it is also confusing to treat form-meaning and form-function as somehow equivalent wordings. The discussions have swung backwards and forwards between talking about 'function' and 'meaning'. Surely some kinds of meaning are pretty good candidates for being inside the structural system of language (i.e. having the character of 'form'), and others are obviously not. My first point reflects this. 3. Isn't Fritz's chess analogy precisely Saussure's? Saussure used the chess analogy to illustrate what he saw to be a clear-cut difference between what is 'in' langue (='grammar' in generativist terminology, I suppose) and what is out of it, and the separate nature of the internal system as such. Even Sapir described language as an 'arbitrary system'. Of course Saussure included (a certain kind of) meaning in his 'system' but then so do many formal approaches today, as I noted above. So the autonomy hypothesis in Fritz's sense (divorced from any consideration of innateness) is structuralism. Or have I completely misunderstood?
Mark. ------------------------------------ From: Mark Durie Department of Linguistics and Applied Linguistics University of Melbourne Parkville 3052 Hm (03) 9380-5247 Wk (03) 9344-5191 Fax (03) 9349-4326 M.Durie at linguistics.unimelb.edu.au http://www.arts.unimelb.edu.au/Dept/LALX/staff/durie.html From s_mjhall at EDUSERV.ITS.UNIMELB.EDU.AU Sat Jan 11 10:59:45 1997 From: s_mjhall at EDUSERV.ITS.UNIMELB.EDU.AU (michael hall) Date: Sat, 11 Jan 1997 21:59:45 +1100 Subject: Arabic Message-ID: I'd like to contact anyone with an interest in Arabic linguistics. My own interests lie in systemic functional linguistics and discourse analysis, but let's face it, Arabic linguists can afford to be fussy! Whatever your angle, drop me a line. Michael Hall. From dryer at ACSU.BUFFALO.EDU Sat Jan 11 14:06:42 1997 From: dryer at ACSU.BUFFALO.EDU (Matthew S Dryer) Date: Sat, 11 Jan 1997 09:06:42 -0500 Subject: Response to Newmeyer's response Message-ID: While I think that some of the recent discussion is terminological (different people are using the term "autonomy" in different ways), Fritz' response to my comments reflects a substantive difference that is fundamental to differences between functionalist and "formalist" approaches. Fritz says >>The sensitivity of speakers to the abstract (formal, structural) >>notion 'Phrase structure branching-direction', a notion that >>doesn't (as Dryer shows) correlate perfectly with semantic >>notions such as 'head-dependent' supports the idea that speakers >>mentally represent abstract phrase-structure relations >>INDEPENDENTLY of their semantic and functional >>'implementations'. If by this Fritz means that speakers represent the fact that different structures in their language involve the same branching direction, then this doesn't follow at all. 
My hypothesis is that languages with both left and right branching result in structures with mixed branching that in language USE are slightly more difficult to process, and that over the millennia, this has influenced language change so that one finds crosslinguistic patterns reflecting a tendency toward more consistent direction of branching. But this does not entail that the fact that different structures in the language employ the same direction of branching is itself represented by speakers. Rather, it simply means that speaking a language in which structures branch in the same direction will result in slightly fewer instances of individuals' failing to extract the intended meaning of an utterance. A common assumption of much formal work is that the explanations are built into speakers' representations of their knowledge of their language. But since functionalist explanations obtain at the level of language USE, they are not in general part of the representations themselves. Consider the following analogy from biology. Selection of combinations of features that are advantageous to survival leads to individuals with those features surviving more often. But the fact that that combination of features is advantageous to survival is not itself represented in the genetic code or in the structures that result from their being advantageous. Matthew From dever at VERB.LINGUIST.PITT.EDU Sat Jan 11 15:20:45 1997 From: dever at VERB.LINGUIST.PITT.EDU (Daniel L. Everett) Date: Sat, 11 Jan 1997 10:20:45 -0500 Subject: Intoning biology In-Reply-To: Message-ID: Folks, I would like to find a biological basis for human language/grammar as much as the next person, but we are going to get no closer to this goal if we talk aprioristically, nor if we make pronouncements on who does and who does not have a license to practice biological reasoning (as Tom Givon is wont to do). Matthew Dryer asks us to: > > Consider the following analogy from biology. 
Selection of > combinations of features that are advantageous to survival leads to > individuals with those features surviving more often. But the fact > that that combination of features is advantageous to survival is not > itself represented in the genetic code or in the structures that > result from their being advantageous. > > Matthew Are you sure that the genetic code has no syntax? But we are getting ahead of ourselves here. It is true that our research on human language must be constrained by well-established parameters of biologically relevant research, if that is what we want to relate to after all is said and done (works like the new _Evolution of Communication_ by Marc Hauser are thus a service to us all). But the starting point within these parameters must be to first offer an account of the phenomena that one thinks to be in need of explanation in the specific domain, e.g. grammar. To the degree that such an account predicts other phenomena and leads to a rich network of explananda and explanantia that light our path in new exploration (melodramatic metaphor, but it is Saturday morning after all), we have something worth considering. Only after we have reached this point can we begin to discuss meaningfully the biological significance or implementation of our account. None of us, not even Tom G, knows what a biological account of the facts will look like until we agree on the facts to be explained, construct accounts for them, and evaluate the relative worth of the competing accounts. The point of my previous posting on methodology was to focus on how we do research and evaluate it within our domains of interest. If we cannot agree on the empirical success of competing accounts (or even what are competing accounts) within grammar, we certainly cannot argue about which one is biologically more plausible. Autonomy of syntax has to be evaluated wrt grammatical explanations. 
-- DLE From edith at CSD.UWM.EDU Sat Jan 11 20:28:59 1997 From: edith at CSD.UWM.EDU (Edith A Moravcsik) Date: Sat, 11 Jan 1997 14:28:59 -0600 Subject: form versus meaning Message-ID: Here is another two-bit-worth of contribution. On Thursday, January 9, Jon Aske wrote the following: "But the real question when it comes to autonomy vs. non-autonomy, is whether the constructions of a language can, or should, be described independently of the semantic and pragmatic meanings which they are used to express, and independently, for instance, of the iconic and universal principles, such as topic-comment or comment-topic, on which they are sometimes based." "I don't think they can and I don't think they should. The simple reason for this is that I do not think that that is how humans learn or store the constructions of a language. Form is always stored and intimately connected to function and that is how it should be described and analyzed." I detect a slight non-sequitur here unless a hidden assumption is made explicit. If the goal is to describe how people learn and store constructions - and provided that it is indeed true that this happens always in terms of form and meaning being inseparable - then, indeed, the forms of constructions cannot and should not be described without their meanings. If, however, the goal is not to describe HOW people learn and store constructions but, rather, WHAT it is that people learn and store, then there is nothing wrong with trying to describe form in separation from meaning. It seems, in fact, that the description of syntactic form without regard to meaning is both possible and necessary. This is shown as follows. 
a/ THE POSSIBILITY OF DESCRIBING FORM AND MEANING SEPARATELY That describing syntactic form only is possible is shown by the fact that, given a set of sentences from a language with word boundaries indicated but no glosses provided, one can write a syntax by specifying the distribution (i.e., cooccurrence and order patterns) of the words. In order to make the description general, one will want to lump words into classes. These classes will, by definition, be syntactic categories (rather than semantic ones) in the sense that they were arrived at on the basis of purely syntactic (=distributional) information. Once the meanings of the sentences are also considered, some of the syntactic categories utilized in the description may turn out to be congruent with semantic classes while others are likely not to. I think this is what the autonomous versus non-autonomous syntax debate is all about: whether there are any syntactic classes that are not also semantic ones (which is what autonomous syntax claims) or whether all syntactically-arrived-at categories coincide with classes of meaning elements. - However, the existence of "purely syntactic categories" in the above sense (i.e., in the sense that they are discoverable solely on the basis of syntactic evidence) is independent of whether they also happen to be meaningful or not. The whole thing is analogous to a proverbial Martian coming to Earth and undertaking to describe traffic signs. He will be able to give an account of the signs without knowing what meanings they stand for, by simply delimiting the basic graphic symbols and stating rules of their cooccurrence and arrangement on the sign boards. Once he learns what each sign means, he will discover that some of his classes arrived at on the basis of form patterns are meaningful while others (such as a line forming a frame around the signs) are not. 
b/ THE NECESSITY OF DESCRIBING FORM AND MEANING SEPARATELY Apart from the fact that one syntactic form can go with more than one alternative meaning and the same meaning can be expressed by alternative syntactic forms (and apart from other strictly structural evidence regarding mismatches between meaning structures and syntactic form), it seems that a description of syntactic constructions with form and meaning separately represented is necessary also as a supplement to performance-oriented descriptions of the sort Jon Aske referred to (where what is shown is that people learn and store forms along with meanings). This is because in order for a fact to become an explanandum, we must be able to see alternatives to it - ways in which things COULD be but are not. Thus, in order for us to be able to ask "WHY do people learn and store forms with meanings?", we have to realize that form and meaning are separate things and, in principle, they could be learned and stored separately. This logically possible but empirically non-occurrent option is what the description of syntactic constructions (as opposed to the description of how such constructions are processed by people) supplies when it shows meaning and syntactic form as separate entities. ************************************************************************ Edith A. Moravcsik Department of Linguistics University of Wisconsin-Milwaukee Milwaukee, WI 53201-0413 USA E-mail: edith at csd.uwm.edu Telephone: (414) 229-6794 /office/ (414) 332-0141 /home/ Fax: (414) 229-6258 From bates at CRL.UCSD.EDU Sat Jan 11 21:10:21 1997 From: bates at CRL.UCSD.EDU (Elizabeth Bates) Date: Sat, 11 Jan 1997 13:10:21 -0800 Subject: form versus meaning Message-ID: In response to Edith Moravcsik's message, I think there is one more hidden assumption: that items either are, or are not, members of a syntactic class, and if they are members, they are members to the same degree. 
This is a classic approach to category membership, but its psychological validity is highly questionable, after more than two decades of research on prototypicality effects, fuzzy boundaries, ad hoc categories, context-dependent categorizations, and so on. As it turns out (vis-a-vis our Martian visitor), native speakers give highly variable judgments of syntactic well-formedness depending on the relative degree of "verbiness" of a verb, "nouniness" of a noun, "really-good-subjects" vs. "not-so-great subjects", and so on. It has proven very difficult to explain these variations without invoking something about the semantic content of the item in question, and/or its pragmatic history, frequency of use, etc. Now, one can always make the classic move (since 1957) of ascribing all those variations to "performance," salvaging one's faith in the purity of the underlying competence. But any research program that sets out to describe competence "first" and deal with performance "later" is going to run into trouble, because pure data that give us direct insights into competence are simply not available. That is precisely why we are all still having this argument. And while we are at it, I am puzzled by the suggestion that we should describe language "first" before any investigation of its biology can be carried out. Should physics be complete before we attempt chemistry? Must biological research stop until we have discovered all the relevant facts from chemistry? Why does linguistic description (field linguistics or self-induced grammaticality judgments) have priority over any other approach to the study of language? Is psycholinguistics a secondary science? Should research on aphasia come to a halt until we know exactly what it is that the aphasic patient has lost? It seems to me that we need all the constraints that we can get, and that all levels of inquiry into the nature of language are valid and mutually informative. 
The key is to be sure we know which level we are working on. For example, I believe that claims about innateness are biological claims, that require biological evidence. Proof that a given structure is (or is not) universal may be quite interesting and useful to someone who is investigating the biological underpinnings of human language abilities (genetic, or neural), but proofs of universality do not constitute ipso facto evidence for the innateness of some domain-specific linguistic structure, because that structure may be the inevitable but indirect by-product of some constraint (e.g. from information-processing) that is not, in itself, linguistic (e.g. as Matt Dryer points out, some kind of memory constraint). To untangle such problems, many different kinds of evidence will be required, and none of them should be granted priority over the others. -liz bates From kilroe at CSD.UWM.EDU Sat Jan 11 21:55:54 1997 From: kilroe at CSD.UWM.EDU (patricia kilroe) Date: Sat, 11 Jan 1997 15:55:54 -0600 Subject: meaning without form Message-ID: I am more interested in meaning than in form. Granted that form can be studied in disassociation from meaning. How, though, to describe formless meaning, or to make reference to meaning without getting snared by notions that presuppose form (prototypes, categories, attributes)? P. Kilroe From cleirig at SPEECH.SU.OZ.AU Sat Jan 11 22:56:59 1997 From: cleirig at SPEECH.SU.OZ.AU (Chris Cleirigh) Date: Sun, 12 Jan 1997 09:56:59 +1100 Subject: amendment to liz bates' comment Message-ID: liz bates wrote: >And while we are at it, I am puzzled by the suggestion that we should >describe language "first" before any investigation of its biology can >be carried out. Should physics be complete before we attempt chemistry? A more congruent question would be: Should chemistry be complete before we attempt physics? On a hierarchy of emergent complexity, chemistry sits above physics just as linguistics sits above biology. 
A lot of "chemistry" was described before the discipline established its relations with physics. I believe it was called "alchemy". chris From dever at VERB.LINGUIST.PITT.EDU Sat Jan 11 23:02:59 1997 From: dever at VERB.LINGUIST.PITT.EDU (Daniel L. Everett) Date: Sat, 11 Jan 1997 18:02:59 -0500 Subject: form versus meaning In-Reply-To: <199701112110.NAA11937@crl.UCSD.EDU> Message-ID: On Sat, 11 Jan 1997, Elizabeth Bates wrote: > In response to Edith Moravcsik's message, I think there is one more > hidden assumption: that items either are, or are not, members of a > syntactic class, and if they are members, they are members to the > same degree. This is a classic approach to category membership, but > its psychological validity is highly questionable, after more than > two decades of research on prototypicality effects, fuzzy boundaries, > ad hoc categories, context-dependent categorizations, and so on. The concern here is misplaced. The fact that some words, pointed out long ago by Ross, can behave as nouns or verbs, or even as more or less 'nouny' or 'verby', is neither something that formal linguists are unaware of nor something that they account for via performance. The question is whether their behavior in specific constructions can be accounted for by discrete, explicit constraints. The answer is, yes. No appeal to performance is needed. > > And while we are at it, I am puzzled by the suggestion that we should > describe language "first" before any investigation of its biology can > be carried out. Would you want to theorize on the evolution of the hand before you understood how hands work? This is a bizarre statement, Liz. Should physics be complete before we attempt chemistry? No, and why do you ask? Oh, I know, you believe that real science must reduce: chemistry to physics, biology to chemistry to physics, etc. That is an empirical hypothesis, not a self-evident fact as seems to be so commonly believed in San Diego. 
>Why does linguistic description (field linguistics > or self-induced grammaticality judgments) have priority over any other > approach to the study of language? Because core linguistics is after understanding of the subject matter - other approaches assume it (and if they neither understand it nor assume it, they are pointless). > Is psycholinguistics a secondary science? It is certainly derivative and not a primary field of inquiry. This is why it has linguistics built into it - it can only work to the degree that it understands language, at least in most cases I have looked at (processing, acquisition, reading theory, and discourse). > Should research on aphasia come to a halt until we know > exactly what it is that the aphasic patient has lost? The issue is not knowing what has been lost so much as being able to tell eventually what has been lost. Without a clear understanding of morphosyntax and phonology, yes, it would be premature to say much about aphasia. But we do know enough about language/grammar for aphasia research to take place and for mutual growth in both fields to take place. > It seems to > me that we need all the constraints that we can get, and that all > levels of inquiry into the nature of language are valid and > mutually informative. The key is to be sure we know which level > we are working on. You cannot know a level by fiat in science, only by having its constraints, structures, and units worked out. Without that, you will not know which level you are on. That said, I agree with you. > For example, I believe that claims about innateness > are biological claims, that require biological evidence. 
Proof that > a given structure is (or is not) universal may be quite interesting and > useful to someone who is investigating the biological underpinnings > of human language abilities (genetic, or neural), but proofs of > universality do not constitute ipso facto evidence for the innateness > of some domain-specific linguistic structure, because that > structure may be the inevitable but indirect by-product of > some constraint (e.g. from information-processing) that is not, > in itself, linguistic (e.g. as Matt Dryer points out, some kind > of memory constraint). To untangle such problems, many different > kinds of evidence will be required, and none of them should be > granted priority over the others. -liz bates But you are already at the biological level. At that level, yes, all we know about biology and language should be used. But prior to that level (conceptually, not chronologically) we need to understand language/grammar first (remember, Chomskian linguistics does not study language, it studies grammar). -- DLE From bralich at HAWAII.EDU Sat Jan 11 23:10:26 1997 From: bralich at HAWAII.EDU (Philip A. Bralich, Ph.D.) Date: Sat, 11 Jan 1997 13:10:26 -1000 Subject: form versus meaning Message-ID: At 11:10 AM 1/11/97 -1000, Elizabeth Bates wrote: > ... any research program that sets out >to describe competence "first" and deal with performance "later" is >going to run into trouble, because pure data that give us direct >insights into competence are simply not available. That is precisely >why we are all still having this argument. There will never be any pure data that give direct insights into competence any more than there will ever be a tool by which we can measure nounness or verbness. The nature of linguistics (except for phonology), like the nature of psychology (when it refers to the study of psyche--mental phenomena), is indirectly observable only. This, however, does not mean we cannot do research on linguistics or make statements about competence. 
I realize I am begging the rather obvious comment that empirical science must deal with directly observable phenomena only, and that therefore the study of competence, and most of syntax for that matter, is not science. But this is only if we follow the belief that the only relevant objects of scientific study are those which are directly observable, and that those of the indirectly observable variety, i.e. syntax and nounness, verbness, competence and so on, are not science. However, if we accept that any observable phenomenon is a proper object of empirical investigation, then competence, syntax, nounness and verbness are all back on the table in their full splendor and we are once again able to discuss competence without embarrassment. These statements may seem somewhat barbarous in the light of current thinking, but it seems to me that if we really want to throw out items that are only indirectly observable (such as competence and syntax and nounness and verbness) then we must also throw out science's favorite tool of measurement, mathematics, because there is nothing in it that is directly observable. So, whether or not there is direct evidence for competence, it is still a valid object of scientific investigation whether we deal with it first or later. Phil Bralich From TGIVON at OREGON.UOREGON.EDU Sun Jan 12 00:05:34 1997 From: TGIVON at OREGON.UOREGON.EDU (Tom Givon) Date: Sat, 11 Jan 1997 16:05:34 -0800 Subject: response to Dan E. Message-ID: Actually, the DNA code has a very rich syntax, since there is the "lexical node" level, where triplets of nucleotides map directly to amino acids in the protein chain. Then there are -- in the sequence -- nucleotides (or segments of several nucleotides) that "govern" segments of the latter, by blocking, turning on/off, etc. And there are a variety of "filler" nucleotide segments whose function is much less clear and certainly more global. 
I am not an expert on the details, but I cited several papers on this issue in my discussion of the degree of abstractness of grammar. Broadly speaking, this corresponds to the increase of abstractness going "upward" from lexical nodes, to phrasal nodes, to clausal nodes, to complex-clause nodes. But of course, the analogy is far from complete. But broadly speaking, DNA is just as rhythmic-hierarchic (while given in a linear sequence) as music or language. One of the lousiest things about both structuralists & functionalists in linguistics is that they don't understand the correlation between the degree of abstraction of functional nodes & the degree of abstraction of the structural nodes that correlate with them. The distinction between "more local" and "more global" functions is precisely what is involved (as in DNA structure...). And in my work on local vs. global coherence in discourse I have tried to point this out at the functional level. As for "licensing" people to practice biology, Dan, all I can do is observe the PROFOUND ignorance linguists seem to exhibit on the subject. While I lost my license when I quit molecular biology in 1964, I keep reading in order to try & understand what is going on. It behooves others who want to "practice" on a regular basis to maybe do the same. Best, TG From bates at CRL.UCSD.EDU Sun Jan 12 02:19:00 1997 From: bates at CRL.UCSD.EDU (Elizabeth Bates) Date: Sat, 11 Jan 1997 18:19:00 -0800 Subject: form versus meaning Message-ID: The question was not whether competence should be studied -- it should. And many sciences have to deal with indirect evidence. My complaint had to do with the idea that we can somehow study competence "first" and get to the other (what Dan Everett unabashedly calls) "derivative" phenomena later. I don't think we can. We have to do them together, or it will be impossible to understand the data that are supposed to provide insights into competence. 
Grammaticality judgments, for example, are a kind of performance, but we understand remarkably little about grammaticality judgment as a psychological process, and for that reason, the primary data base of generative linguistics is sometimes pretty shaky. It certainly isn't sound enough to stand alone, and it is not sound enough to serve as the core for any other avenue of study. It is one kind of evidence, but only one, and I am really amazed that anyone thinks that we should pursue that avenue before anything else is done. This is territorial imperialism. -liz bates From john at RESEARCH.HAIFA.AC.IL Sun Jan 12 08:51:11 1997 From: john at RESEARCH.HAIFA.AC.IL (John Myhill) Date: Sun, 12 Jan 1997 10:51:11 +0200 Subject: What is this dispute anyway? Message-ID: I don't know about anyone else, but what I find most striking about the current discussion is that, after what seems like an almost interminable silence on funknet (several times in the past year I have thought that maybe I unsubscribed by accident), people have been aroused to involvement not by anything directly related to language but by an abstract ideological dispute where even the basic terms are interpreted in such a variety of ways that there is no hope of even achieving a mutual understanding, let alone a resolution. Shouldn't we be a little bit concerned that people with different leanings interpret the same data in opposite ways according to what they regard as their view of language (e.g. Newmeyer vs. Dryer, Prince vs. Aske in the current debate)? What this suggests to me is that maybe these views are essentially untestable and a matter of faith. If they were testable and people agreed on the data, assuming (as I do) that they are intelligent people and can follow logical arguments, how could they so consistently come to opposite conclusions about the theoretical implications of these data? 
If people's views on (however they construe) the `autonomy debate' are a matter of faith, I think this discussion is a waste of time, and if they are not a matter of faith, why doesn't anyone seem to be convinced by any data to change their position? Does anyone know of a single linguist who has changed his/her position either way (after completing graduate school, let's say) on the `autonomy debate'? Has anyone seen any data which has made them think 'aha, I used to hold position X on this debate, but now I think position Y is correct'? Will anyone publicly admit to this? If so, I would like to hear what data caused such a conversion; at least we would have some evidence which *someone* found convincing enough at some point to admit they had been wrong. If there are few or no linguists who have experienced such a conversion (again, after, say, graduate school), I would like to suggest that we should consider the possibility that maybe this is because both sides of this argument have defined their hypotheses in such a way that no data can falsify them. The argument looks like basically a matter of faith (or a 'hunch', I believe was David Pesetsky's word), with the typical characteristics of such a dispute, in particular reference to poorly understood Higher Authorities (the Hard Sciences, in this case, Mathematics on one side, Biology on the other) to which some participants are claiming to have a Direct Line. 
40 years ago, to the eternal discredit of linguistics, Chomsky managed to fool enough linguists into believing that he was a `Mathematician' that he made a research space for himself (he prudently gave up this claim after he had made this space and began to come into contact with real mathematicians who might publicly call his bluff, in the unlikely event that they paid any attention to him at all), but the cost was that his type of linguists have been third-rate `Mathematicians' ever since then, complete outsiders in the humanities and social sciences (where they are institutionally located everywhere but MIT) and not taken seriously by the Real Sciences, and as a result they will be fighting for their institutional lives during the inevitable coming budget cuts. If the interest on funknet in the current autonomy debate as opposed to actual analysis of language is any indication, I am enormously concerned that some functional linguists are doing the same thing now, parading vague ideology/`theory' instead of doing real analysis, using Biology, Evolution, and The Brain instead of Mathematics. I personally am in this field because I like analyzing language, and I think it is pathetic to substitute vague speculations based upon third-hand and/or 30-year-old knowledge of other disciplines for doing actual analysis of language. We're linguists, guys; this is what we know about, this is our livelihood, and if we don't start acting like linguists, we aren't going to be anything at all soon. John Myhill From dryer at ACSU.BUFFALO.EDU Sun Jan 12 13:11:17 1997 From: dryer at ACSU.BUFFALO.EDU (Matthew S Dryer) Date: Sun, 12 Jan 1997 08:11:17 -0500 Subject: More on autonomy Message-ID: Although the discussion of "autonomy" has faded somewhat the past 24 hours, there is a serious terminological confusion that I think remains unclarified. 
The term "autonomy" has been used in two different ways, and this has led to apparent disagreement when in fact there has just been misunderstanding. In an earlier message, I distinguished three senses, but since innateness is not an issue in the current discussion, let me narrow it down to the crucial two senses: (1) (strong) sense: One can explain syntactic facts in terms of syntactic notions (2) (weak) sense: Syntax/grammar exists. Syntax has, at least to a certain extent, a life of its own. What makes me acutely aware of these two senses is that I sometimes use the term "autonomy" one way, and sometimes the other way, largely as a function of how the person I'm talking to uses it. In particular, in various discussions with Tom Givon in recent years, I've used it the way he uses it, in the strong sense. And, in various discussions with Fritz Newmeyer in recent years, I've used it the way he uses it, in the weak sense. The real irony is that in recent years, both Tom and Fritz have been giving arguments in print for autonomy in the weak sense, except that Fritz calls it autonomy and Tom doesn't. But whatever you call it, it is clear when you look closely at what they've each written about it, that they are arguing for the same thing. They're both arguing against those functionalists who deny, or who seem to deny, that syntax/grammar even exists. Unfortunately, as I have pointed out to Fritz on occasion, he sometimes strays back and forth between the two senses (presumably because he believes in autonomy in both senses), and while what he is usually arguing for is simply autonomy in the weak sense, he sometimes either argues for autonomy in the strong sense (as in his recent response to my message), or treats an argument for autonomy in the weak sense as if it were an argument for autonomy in the strong sense. But this should not obscure the extent to which Fritz and Tom have been arguing for the same thing. 
Furthermore, since Fritz invoked Jack Hawkins, saying "Hawkins is absolutely explicit that parsing explanations are compatible with autonomy", it must be stressed that Hawkins (like Fritz) is using "autonomy" in the WEAK sense, and, as far as I know from Hawkins' writings and my own discussions with him, Hawkins does not believe in autonomy in the strong sense. The general moral is: let's not get hung up on the form (the expression "autonomy of syntax") but let's look at the function (i.e. meaning) to which this expression is being put. Matthew From dever at VERB.LINGUIST.PITT.EDU Sun Jan 12 14:16:03 1997 From: dever at VERB.LINGUIST.PITT.EDU (Daniel L. Everett) Date: Sun, 12 Jan 1997 09:16:03 -0500 Subject: form versus meaning In-Reply-To: <199701120219.SAA12991@crl.UCSD.EDU> Message-ID: Territorial imperialism? I like the ring of that. That pretty well describes what goes on at most universities across the country as Chairs argue for resources from Deans. But that is another matter. Here is all I am saying: To make the statement in (i) presupposes (ii): (i) 'I have discovered the historical/biological/psychological/sociological basis for X' (ii) I (or somebody I am talking to) understand X. If that is territorial imperialism, then my mental lexicon will have to open new files for those terms. -- Dan P.S. Tom is right when he says that linguists are profoundly ignorant about biology. So are biologists when they are honest. But as George Miller said in Lg. a few years back, we would all benefit from knowing more about everything. So that is a truism. And I know that Tom had graduate training in microbiology. Linguists are also ignorant of theology (my other training), but I am not going to ask them to consider theological arguments - but cf. Mark Baker's chapter in The Polysynthesis Parameter. 
****************************** ****************************** Dan Everett Department of Linguistics University of Pittsburgh 2816 CL Pittsburgh, PA 15260 Phone: 412-624-8101; Fax: 412-624-6130 http://www.linguistics.pitt.edu/~dever From jaske at ABACUS.BATES.EDU Sun Jan 12 15:47:30 1997 From: jaske at ABACUS.BATES.EDU (Jon Aske) Date: Sun, 12 Jan 1997 10:47:30 -0500 Subject: What is this dispute anyway? Message-ID: I was distracted for one day and now there is so much I would like to reply to. I will keep it brief though. Or you can just press the D key right now. To Patricia K.: I don't see how you can study "meaning" in the abstract any more than you can study "form" in the abstract, other than in very limited ways. Through typological comparison we can get a sense of what meanings and functions are important for humans to communicate and get expressed in language, more or less frequently, both lexically and grammatically, and how the different instantiations vary. But I don't believe those meanings and functions exist in an abstract (least-common denominator) form. So when you study meaning you have to deal with form-meaning units, i.e. lexemes and constructions. There is just no other way about it. To Edith M.: I just don't see why we would want to restrict our linguistic analysis to the more formal aspects of constructions and ignore their function/meaning pole, their history, and so on. I haven't seen the point since my undergraduate days 15 years ago. I remember going through all those transformations and wondering why things transform themselves into other things, e.g. what was the point of a passive or a dative-shift construction, and why some languages had those alternative constructions and others didn't. To me, studying the formal aspects of such constructions without looking at what they are made for, how they are made, etc. *as a matter of principle*, just doesn't make sense. 
I came to the early realization that these constructions should not be studied as merely formal operations. These constructions exist for a purpose, and their form reflects the function that they arose for in the first place, even if they have picked up additional baggage along the way. And to me that is the most interesting part of analyzing language/grammar. I realize that our present inability to predict the use of, for example, even the English passive, makes a lot of people skeptical that the study of meaning/function along with form is a realizable goal. And I realize that until we get a firm hold of the meaning/function of particular constructions, we may need to play around with formal constraints (e.g. constraints on 'extraction') and correlations found on those constructions. But along with formal constraints we need to analyze semantic and other functional constraints and correlations. We just can't separate the two. Separating the two as a matter of principle, to see how far we can go, or some other such reason, to me is unconscionable. David P. tells us that now they use "functional" categories in their grammatical theory. Well, I'm glad you finally figured out that functions are important to linguistic analysis, even if it is just a handful of the wrong kind of functions. But, as you said, your functions are nothing like our functions. Your functions are abstract, pristine, pure, and perhaps even innate. Your language systems are clockwork mechanisms that have a pure form where things move up and down, branching direction is fully consistent, etc. Then, for some strange reason, things get muddled on the surface, maybe a particular construction decides to branch in an inconsistent way and "scrambling rules" mess up an otherwise neat and underlying perfect system. Well, I just do not believe that that is what languages are like. Languages are not underlyingly pristine and then they get messy. They are just plain messy to begin with.
And that makes sense once you realize how it is they got that way. The functions of language instantiated in grammar, which you reduce to a handful of abstract formal-functional categories, do not come to us in abstract and pristine form, and they are a lot more than that handful, and they interact with each other in much more complex and richer ways than you can imagine, ... Well, I'll leave it at that. To John M.: I understand your frustration and your points are well taken. Still, I don't see any harm done in these periodic outbursts on linguistics lists. Anything which means contact between members of different linguistics schools, even if it is in a dark room, is welcome, as I see it. I too would like to see more data oriented discussions on this list (any takers?), and I would like to figure out why they don't take place, but that is no reason to stop the other type of discussion. If they come up, it must be for a reason. It sounds to me like you have it all figured out, but there are a lot of people out there who don't (students, for instance). And if the problems are political, that needs to be aired out too. After all, you also seem to have very strong feelings about some of these things and decided to air them out, rather than stop your posting somewhere around the middle. Anyway, I've used up my allotted space for about a week, so I'm stopping here. Do keep it coming. And let's keep it civil. Best, Jon -- Jon Aske jaske at abacus.bates.edu http://www.bates.edu/~jaske/ -- Nolako egurra, halako sua "Such as is the wood, thus will be the fire." From pesetsk at MIT.EDU Sun Jan 12 17:05:17 1997 From: pesetsk at MIT.EDU (David Pesetsky) Date: Sun, 12 Jan 1997 12:05:17 EST Subject: What is this dispute anyway? In-Reply-To: <32D90792.563C@abacus.bates.edu> Message-ID: At 3:47 PM -0000 1/12/97, Jon Aske wrote: > David P. tells us that now they use "functional" categories in their > grammatical theory.
Well, I'm glad you finally figured out that > functions are important to linguistic analysis, even if it is just a > handful of the wrong kind of functions. But, as you said, your functions > are nothing like our functions. Your functions are abstract, pristine, > pure, and perhaps even innate. Your language systems are clockwork > mechanisms that have a pure form where things move up and down, > branching direction is fully consistent, etc. Then, for some strange > reason, things get muddled on the surface, maybe a particular > construction decides to branch in an inconsistent way and "scrambling > rules" mess up an otherwise neat and underlying perfect system. > Well, I just do not believe that that is what languages are like. > Languages are not underlyingly pristine and then they get messy. Do you really think that's what I said? -David Pesetsky ************************************************************************* Prof. David Pesetsky, Dept. of Linguistics and Philosophy 20D-219 MIT, Cambridge, MA 02139 USA (617) 253-0957 office (617) 253-5017 fax http://web.mit.edu/linguistics/www/pesetsky.html From lmenn at CLIPR.COLORADO.EDU Sun Jan 12 17:30:18 1997 From: lmenn at CLIPR.COLORADO.EDU (Lise Menn, Linguistics, CU Boulder) Date: Sun, 12 Jan 1997 10:30:18 -0700 Subject: amendment to liz bates' comment In-Reply-To: <199701112256.JAA21823@fortis.speech.su.oz.au> Message-ID: That's a totally unwarranted slap. You think Lavoisier used Galileo's results to discover oxygen? Yes, it's true that we don't know what the primitives of brain function are yet; but we still have to study that function with all available tools. No matter how much a neurophysiologist knows about the brain, she can't design an experiment to look at on-line sentence processing without the collaboration of linguists and psycholinguists.
On Sun, 12 Jan 1997, Chris Cleirigh wrote: > liz bates wrote: > > >And while we are at it, I am puzzled by the suggestion that we should > >describe language "first" before any investigation of its biology can > >be carried out. Should physics be complete before we attempt chemistry? > > A more congruent question would be: > > Should chemistry be complete before we attempt physics? > > On a hierarchy of emergent complexity, chemistry sits above physics > just as linguistics sits above biology. > > A lot of "chemistry" was described before the discipline established > its relations with physics. I believe it was called "alchemy". > > chris > From lmenn at CLIPR.COLORADO.EDU Sun Jan 12 17:48:37 1997 From: lmenn at CLIPR.COLORADO.EDU (Lise Menn, Linguistics, CU Boulder) Date: Sun, 12 Jan 1997 10:48:37 -0700 Subject: form versus meaning In-Reply-To: <2.2.16.19970110131232.2df73be6@pop-server.hawaii.edu> Message-ID: The way to treat concepts that are not directly accessible (e.g. competence, nouniness, topic, empathy, foreground...) is as 'intermediate constructs'; one tests them by varying some factor(s) that can be manipulated or measured (e.g. how do people rate the attractiveness or humanness of the proposed empathic focus?) and looking at some measurable outcome (e.g. how often do people make the proposed empathic focus the first referent mentioned in a description of an event when that referent is the undergoer?). An intermediate construct gets validated if it helps in making such predictions; it quietly vanishes away, like the Boojum or ether or phlogiston, if it stops being useful. And concepts may be useful for one purpose long after they have stopped being useful for others. I'm very skeptical about 'competence' existing apart from the performances in which it is demonstrated, but it would be absurd to write a reference grammar or a dictionary in terms of 'performance' alone. Lise Menn On Sat, 11 Jan 1997, Philip A. Bralich, Ph.D.
wrote: > At 11:10 AM 1/11/97 -1000, Elizabeth Bates wrote: > > ... any research program that sets out > >to describe competence "first" and deal with performance "later" is > >going to run into trouble, because pure data that give us direct > >insights into competence are simply not available. That is precisely > >why we are all still having this argument. > > There will never be any pure data that give direct insights into competence > any more than there will ever be a tool by which we can measure nounness or > verbness. The nature of linguistics (except for phonology), like the nature > of psychology (when it refers to the study of psyche--mental phenomena) is > indirectly observable only. This, however, does not mean we cannot do > research on linguistics or make statements about competence. > > I realize I am begging the rather obvious comment that empirical science > must deal with directly observable phenomena only and therefore the study of > competence and most of syntax for that matter is not science. But this is > only if we follow the belief that the only relevant objects of scientific > study are those which are directly observable and those of the indirectly > observable variety, i.e. syntax and nounness, verbness, competence and so on > are not science. However, if we accept that any observable phenomenon is a > proper object of empirical investigation, then competence, syntax, nounness > and verbness are all back on the table in their full splendor and we are > once again able to discuss competence without embarrassment. > > These statements may seem somewhat barbarous in the light of current > thinking, but it seems to me if we really want to throw out items that are > only indirectly observable > (such as competence and syntax and nounness and verbness) then we must also > throw out science's favorite tool of measurement, mathematics, because there is > nothing in it that is directly observable.
So, whether or not there is > direct evidence for competence, it is still a valid object of scientific > investigation whether or not we > deal with it first or later. > > Phil Bralich > From pesetsk at MIT.EDU Sun Jan 12 18:02:34 1997 From: pesetsk at MIT.EDU (David Pesetsky) Date: Sun, 12 Jan 1997 13:02:34 EST Subject: What is this dispute anyway? In-Reply-To: Message-ID: I think one can improve on Myhill's diagnosis. There is widespread agreement in principle on the range of possible explanations a linguistic phenomenon might receive. Everyone agrees that facts about language might be due to a specific property of language, to properties that language shares with other functions, to historical factors, and to interactions among any of the above. The central disagreement seems to concern the default explanations we accord to specific linguistic phenomena that are not well understood (i.e. most of them). Suppose we find a fact of language which we can characterize fairly well (but not completely) in language-internal terms. Do we assume that the fact as a whole is language-specific until proven guilty of non-specificity? Or should it be the other way around? Our personal answers to these questions do indeed reflect our hunches and wishes. But we also recognize, presumably, that our goal is to move beyond hunches and wishes to discover the truth of the matter. Being human beings, however, we are easily distracted. We shift all too easily from research-generating propositions like: 1. "I think the explanation lies (partly, entirely) in area X." 2. "As an expert in area X, I can best contribute to research by investigating possible explanations in area X." to research-stifling propositions like "I think the explanation lies in area X because..." 3. "...everything really interesting is in area X." 4. "...area X contains more stuff than any other area, and therefore is better." 5. "...area X is the essence of language." 6. 
"...no one would ever work outside area X unless they had a major character flaw." 7. "... [New Yorker (latest issue, 1/13/97), cartoon on p.52]." But anyone with a will can separate these distractions from our real business. I agree that there is no general, falsifiable "autonomy thesis" that separates us. But, at the same time, we're not just floating in a sea of nonsense either. Our hunches and prejudices (though they are not in themselves testable hypotheses) can and do suggest competing explanations for actual facts. Discussions of these alternatives can and do change minds (mine, for instance). Myhill writes: > Shouldn't we be a little bit concerned that people with > different leanings interpret the same data in opposite ways according to > what they regard as their view of language.[...] I say "No". I think this situation is quite fine. The problems arise *after* we've offered our varying interpretations of the data. Do we defend our interpretation with specious propositions like 3-7? Or do we try to discover the truth? Since every now and then the second path is taken, I have more hope for the field than Myhill does. -David Pesetsky P.S. I'll try not to bother this list any more. ************************************************************************* Prof. David Pesetsky, Dept. of Linguistics and Philosophy 20D-219 MIT, Cambridge, MA 02139 USA (617) 253-0957 office (617) 253-5017 fax http://web.mit.edu/linguistics/www/pesetsky.html From spikeg at OWLNET.RICE.EDU Sun Jan 12 21:20:06 1997 From: spikeg at OWLNET.RICE.EDU (Spike Gildea) Date: Sun, 12 Jan 1997 15:20:06 -0600 Subject: (fwd) Re: back and forth Message-ID: >---------- Forwarded message ---------- >Date: Fri, 10 Jan 1997 23:14:34 +0200 (IST) >From: ariel mira >To: Tom Givon >Cc: Multiple recipients of list FUNKNET >Subject: Re: back and forth > >Dear Tom, > >I very much agree with your position and Wally Chafe's. 
Chomskyans are >keen on knocking down functionalism by attacking some imaginary strawman, >such as: If functionally motivated, then 100% motivated. And I think >they've done a very good job among us functionalists, so that we've come >to believe that there are some among us (never me, of course) who actually >believe that 100% of language is transparently and synchronically >motivated. Yes, there was Garcia in the 1970's. But 20 years have passed >since then, and in my attempt to find this extreme position in writing, I >have not come up with anything. All I can find is people like you and Paul >Hopper saying they are NOT adopting the extreme position. > >So, if there is ANYBODY holding that extreme position, please speak >up/give references. If there's nobody holding that position, could we stop >arguing against it? > >Shabbat Shalom, > >Mira From jaske at ABACUS.BATES.EDU Sun Jan 12 21:41:59 1997 From: jaske at ABACUS.BATES.EDU (Jon Aske) Date: Sun, 12 Jan 1997 16:41:59 -0500 Subject: What is this dispute anyway? Message-ID: David Pesetsky wrote: > > Do you really think that's what I said? David, I'm sorry if I put words into your mouth. I was going by my interpretation (corroborated by many others) of what people in your school, not necessarily you yourself, have been saying for the last few decades, at least until the last time I checked. Perhaps my interpretation was erroneous. If so, I am quite willing to stand corrected. I think that that is what this discussion (I don't dare call it "dispute") is all about. I feel, and I'm sure many others do too, that we need a lot more communication in our field. It may turn out that we agree on more things than we ever thought we did. Although I am personally a bit skeptical about this, I too am after the Truth and not after winning partisan battles. So let's talk. Jon -- Jon Aske jaske at abacus.bates.edu http://www.bates.edu/~jaske/ -- Nolako egurra, halako sua "Such as is the wood, thus will be the fire." 
From jaske at ABACUS.BATES.EDU Sun Jan 12 22:31:22 1997 From: jaske at ABACUS.BATES.EDU (Jon Aske) Date: Sun, 12 Jan 1997 17:31:22 -0500 Subject: What is this dispute anyway? Message-ID: Well, I think that we can improve on David's diagnosis. I would say that the general belief among functionalists is that formalists adhere to proposition #3, namely that "...everything really interesting is in area X", as opposed to the proposition "...everything really interesting is in areas A-Z." Most functionalists probably also believe (correct me if I'm wrong) that many formalists believe in a modified version of proposition #4 ("...area X contains *LESS* stuff than any other area, and therefore is better."). Same thing about proposition #5 (see original quote below). (I'll skip proposition #6, although things could be said about that too. Unfortunately personal attacks have come from both sides at different times). Perhaps we're just wrong. If so, there is a very big misunderstanding here and we definitely should get it corrected as soon as possible. That's what we're here for. Best, Jon David Pesetsky wrote: ... > 1. "I think the explanation lies (partly, entirely) in area X." > > 2. "As an expert in area X, I can best contribute to research by > investigating possible explanations in area X." > > to research-stifling propositions like "I think the explanation lies in > area X because..." > > 3. "...everything really interesting is in area X." > > 4. "...area X contains more stuff than any other area, and therefore > is better." > > 5. "...area X is the essence of language." > > 6. "...no one would ever work outside area X unless they > had a major character flaw." > > 7. "... [New Yorker (latest issue, 1/13/97), > cartoon on p.52]." > > But anyone with a will can separate these distractions from our real business. > > I agree that there is no general, falsifiable "autonomy thesis" that > separates us. 
But, at the same time, we're not just floating in a sea of > nonsense either. Our hunches and prejudices (though they are not in > themselves testable hypotheses) can and do suggest competing explanations > for actual facts. Discussions of these alternatives can and do change minds > (mine, for instance). Myhill writes: > > > Shouldn't we be a little bit concerned that people with > > different leanings interpret the same data in opposite ways according to > > what they regard as their view of language.[...] > > I say "No". I think this situation is quite fine. The problems arise > *after* we've offered our varying interpretations of the data. Do we defend > our interpretation with specious propositions like 3-7? Or do we try to > discover the truth? Since every now and then the second path is taken, I > have more hope for the field than Myhill does. > > -David Pesetsky > > P.S. I'll try not to bother this list any more. > > ************************************************************************* > Prof. David Pesetsky, Dept. of Linguistics and Philosophy > 20D-219 MIT, Cambridge, MA 02139 USA > (617) 253-0957 office (617) 253-5017 fax > http://web.mit.edu/linguistics/www/pesetsky.html -- Jon Aske jaske at abacus.bates.edu http://www.bates.edu/~jaske/ -- Nolako egurra, halako sua "Such as is the wood, thus will be the fire." From bates at CRL.UCSD.EDU Mon Jan 13 02:24:28 1997 From: bates at CRL.UCSD.EDU (Elizabeth Bates) Date: Sun, 12 Jan 1997 18:24:28 -0800 Subject: form versus meaning Message-ID: Okay, instead of "territorial imperialism", how about "disciplinary hegemony"? In his discussion of the "derivative" status of some fields and the "core" status of linguistics, Dan suggests that the very name of fields with titles like "The sociology of X" imply that the core field is the one called "X". There may be sociological/historical reasons why such terms evolve, but I don't think the argument works (or should work) for the many fields that study language. 
Linguists study language. So do psycholinguists, neurolinguists, sociolinguists, etc. In this case, "X" is Language. Field linguists and theoretical linguists who use naturally occurring text or grammaticality judgments are studying language, by one set of methods. Psycholinguists are using yet another set of methods to study language. And so on. No one, in my view, is any closer to "the thing itself". What we are talking about here are simply different perspectives on the same problem. Of course there ARE branches of psycholinguistics, aphasiology, etc., in which investigators start with the products of theoretical linguistics and then set out to use them against a particular kind of data. That's one approach (e.g. the search for the "psychological reality of transformations" in the 1960's). It's not the only approach. It is certainly not the DEFINITION of psycholinguistics, neurolinguistics, etc. By insisting that linguistics has priority over other fields that study language, Dan is doing all of us (including the linguists) a disservice. What he really means is that certain METHODS have priority -- because that is all that really, at base, separates linguistics from psycholinguistics, neurolinguistics, etc. -liz bates From fjn at U.WASHINGTON.EDU Mon Jan 13 03:33:10 1997 From: fjn at U.WASHINGTON.EDU (Frederick Newmeyer) Date: Sun, 12 Jan 1997 19:33:10 -0800 Subject: A couple final remarks In-Reply-To: Message-ID: Just a couple final comments, before I return to lurking on the list. First, Mark Durie writes and asks: >So the autonomy hypothesis in Fritz's sense (divorced from any >consideration of innateness) is structuralism. Or have I completely >misunderstood? I obviously have not made myself clear enough. Saussure's position ('structuralism') is that the set of form-meaning pairings (the set of signs) is autonomous. I do accept that, but also something more, namely, the autonomy of syntax.
In that hypothesis, the set of form-form interrelationships *also* forms a discrete system, independently of the meanings of those forms or the uses to which they are put. Many accept the former, but not the latter. And Matthew Dryer remarks: > Fritz' response to my comments reflects a substantive difference > that is fundamental to differences between functionalist and > "formalist" approaches. Fritz says > > >>The sensitivity of speakers to the abstract (formal, structural) > >>notion 'Phrase structure branching-direction', a notion that > >>doesn't (as Dryer shows) correlate perfectly with semantic > >>notions such as 'head-dependent' supports the idea that speakers > >>mentally represent abstract phrase-structure relations > >>INDEPENDENTLY of their semantic and functional > >>'implementations'. > > If by this Fritz means that speakers represent the fact that > different structures in their language involve the same branching > direction, then this doesn't follow at all. My hypothesis is that > languages with both left and right branching result in structures > with mixed branching that in language USE are slightly more > difficult to process, and that over the millennia, this has > influenced language change so that one finds crosslinguistic > patterns reflecting a tendency toward more consistent direction of > branching. But this does not entail that the fact that different > structures in the language employ the same direction of branching is > itself represented by speakers. Rather, it simply means that > speaking a language in which structures branch in the same direction > will result in slightly fewer instances of individuals' failing to > extract the intended meaning of an utterance. No, in fact I don't assume that speakers mentally represent the fact that different structures in their language involve the same branching direction (though I don't reject a priori the possibility that they might do so).
It's the mere fact of representing branching phrase structure at all INDEPENDENTLY OF THE MEANINGS / FUNCTIONS encoded / carried out by that structure that supports the autonomy of syntax. That's all from me. Best wishes to all, --fritz From dick at LINGUISTICS.UCL.AC.UK Mon Jan 13 09:42:35 1997 From: dick at LINGUISTICS.UCL.AC.UK (Dick Hudson) Date: Mon, 13 Jan 1997 09:42:35 +0000 Subject: back and forth Message-ID: I think David Pesetsky is right. So-called `formalists' have been extremely enthusiastic recently about trying to predict syntactic form from semantic function. An even clearer example of this, which he doesn't mention, is in the area of `argument structure' (or, alternatively, `theta roles' or some other kind of semi-semantic structure) as a predictor of syntactic roles (subject, object, etc). A lot of `formalists' have suggested an extremely close connection between the two. I think Chomsky has rejected total predictability (and I think I'd agree with him), but I also think that, like many of us on this list, he thinks it's a good idea to look for tight correlations between semantic `function' and syntactic `form'. So what's the argument about? At 22:11 10/01/97 -0500, you wrote: >At 12:24 PM -0800 1/10/97, Tom Givon wrote: > > >> Seems to me I hear another argument between the partly deaf. Everybody >> concedes that a correlation exists between grammatical structures and >> semantic and/or pragmatic functions. But two extremist groups seem to >> draw rather stark conclusions from the fact that the correlation is >> less-than-fully-perfect. The "autonomy" people seem to reason: >> "If less than 100% perfect correlation, >> therefore no correlation (i.e. 'liberated' structure)" > >Do "autonomy people" really reason like this? I don't think so. In fact, >I think it's just the opposite.
> >Isn't most of the research by "autonomy people" actually devoted to the >hunch that there is a nearly *perfect* 100% correlation between grammatical >structure and semantic/pragmatic function -- and that "less than 100%" >correlations are actually 100% correlations obscured by other factors? > >- What, after all, is the functional category boom about, if not a >(possibly overenthusiastic) attempt to investigate a 100% correlation >hypothesis for properties like tense, agreement, topic, focus, and so on? > >- What was the motivation for the hypothesis of "covert movement" (LF >movement), if not the hunch that the correlation between grammatical and >semantic/pragmatic structure is tighter than it appears? > >- Why all the effort expended on the unaccusative hypothesis, the >Universal Alignment Hypothesis, and Baker's UTAH, if not in service of the >hypothesis that non-correlations between semantic function and grammatical >form are only superficial? > >I think one might make the case that formalist "autonomy people" are among >the most faithful functionalists. > >What divides linguists in this debate is not, I suspect, their faith in >robust form-function correlations, but rather their hunches about the >repertoire of factors that *obscure* these correlations. That's where many >of us really do disagree with each other. > >-David Pesetsky > > >************************************************************************* >Prof. David Pesetsky, Dept. 
of Linguistics and Philosophy >20D-219 MIT, Cambridge, MA 02139 USA >(617) 253-0957 office (617) 253-5017 fax >http://web.mit.edu/linguistics/www/pesetsky.html > > Richard (=Dick) Hudson Department of Phonetics and Linguistics, University College London, Gower Street, London WC1E 6BT work phone: +171 419 3152; work fax: +171 383 4108 email: dick at ling.ucl.ac.uk web-sites: home page = http://www.phon.ucl.ac.uk/home/dick/home.htm unpublished papers available by ftp = ....uk/home/dick/papers.htm From john at RESEARCH.HAIFA.AC.IL Mon Jan 13 08:42:05 1997 From: john at RESEARCH.HAIFA.AC.IL (John Myhill) Date: Mon, 13 Jan 1997 10:42:05 +0200 Subject: What is this dispute anyway? Message-ID: Oh well, I guess I better say something else. To Talmy G.: I am not advocating security through ignorance. For more than 10 years I wrote functionally oriented articles with very large databases, detailed text counts, statistical significance tests for all my claims, multivariate statistical analysis, etc. I tried my best to bring normal scientific methodology to functional linguistics. I finally gave up on the statistics (to functionalist audiences) because I realized that not only did no one evidently care, but when I even brought up data of this sort, my audiences would get glassy-eyed and lose interest in the whole paper. I consider this a more serious effort to incorporate scientific methods into functional linguistics than reading popular interpretations of research in the hard sciences and imagining how they might apply to linguistics. I've been reading your articles for more than 15 years now, Talmy, I've seen a lot of numbers, but I have yet to see you do even a single simple statistical significance test, even a chi-square, let alone a regression analysis; your arguments about knowledge and ignorance of science would be more convincing if you yourself actively showed a little scientific knowledge here.
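[Editorial aside: for readers unfamiliar with the chi-square test Myhill mentions applying to text counts, here is a minimal sketch. The genre labels and counts are invented for illustration and are not drawn from any actual corpus study.]

```python
def chi_square_2x2(table):
    """Pearson chi-square statistic of independence for a 2x2 contingency table."""
    row_totals = [sum(row) for row in table]
    col_totals = [sum(col) for col in zip(*table)]
    n = sum(row_totals)
    stat = 0.0
    for i, row in enumerate(table):
        for j, observed in enumerate(row):
            # Expected count under independence of rows and columns.
            expected = row_totals[i] * col_totals[j] / n
            stat += (observed - expected) ** 2 / expected
    return stat

# Hypothetical counts. Rows: narrative vs. expository texts;
# columns: active vs. passive clauses.
counts = [[180, 20],
          [140, 60]]
stat = chi_square_2x2(counts)
# With 1 degree of freedom, a statistic above 3.84 rejects
# independence at p < .05.
print(round(stat, 2))  # prints 25.0
```

In practice one would use a statistics package (e.g. SciPy's `chi2_contingency`, which also returns the p-value) rather than computing the statistic by hand; the point here is only to show how small the computation behind such a significance test is.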
Incidentally, it's 'yosif da`at yosif max'ov', not 'mosif da`at mosif max'ov'. To David P.: You write: Discussions of these alternatives can and do change minds (mine, for instance). You've changed your mind about the autonomy thesis? You used to not believe (didn't used to believe?) in autonomous syntax? After you got out of graduate school? Did you put this in print anywhere? And you got a job at MIT? Am I understanding you correctly? Please clarify. (I'm not being facetious, I really am interested in this) In reply to my question: Shouldn't we be a little bit concerned that people with different leanings interpret the same data in opposite ways according to what they regard as their view of language? You write: I say "No". I think this situation is quite fine. The problems arise *after* we've offered our varying interpretations of the data. Do we defend our interpretation with specious propositions like 3-7? Or do we try to discover the truth? I agree with you in principle, but unfortunately that is not the tone the discussion (such as it is) has taken. To take the most blatant example, Chomsky's favorite 'defense' of whatever approach he feels like pursuing at the moment has always been that it is 'interesting,' (your specious proposition #3), e.g. 'Knowledge of language' pg. 5: 'During the past 5-6 years, these efforts have converged in a somewhat unexpected way, yielding a rather different conception of the nature of language and its mental representation, one that offers interesting answers to a range of empirical questions and opens a variety of new ones to inquiry while suggesting a rethinking of the character of others. This is what accounts for an unmistakeable sense of energy and anticipation...' Similarly pg. 4: `This (research program) should not be particularly controversial, since it merely expresses an interest in certain problems...' Such examples could be multiplied many times over (anyone have Chomsky's writings in a text base? 
search for 'interest'). Having justified choosing a particular approach because it is 'interesting' and gives 'an unmistakeable sense of energy and anticipation,' while other approaches evidently do not, NC can now devote the rest of his book to working out the fine points of this approach. This is particularly significant, and worrying, because the great majority of Chomsky's followers appear to be similarly basing their choice of approach on what Chomsky finds 'interesting' as well, to judge by the general lack of serious effort to give more convincing arguments for this approach. I assume that you (David) yourself are thinking something similar about functionalists, so this appears to be a general property of the field (though I applaud Matthew Dryer's sincere efforts to try to get things straightened out). This is what I am concerned about. John Myhill From dever at VERB.LINGUIST.PITT.EDU Mon Jan 13 12:07:27 1997 From: dever at VERB.LINGUIST.PITT.EDU (Daniel L. Everett) Date: Mon, 13 Jan 1997 07:07:27 -0500 Subject: form versus meaning In-Reply-To: <199701130224.SAA18908@crl.UCSD.EDU> Message-ID: I do not believe that a field is defined by its methodology but by the questions that it asks. The core questions of linguistics have to do with phonology, syntax, and morphology (and that will get about 1% agreement on this list). Semantic, discourse, diachronic and other questions are crucial, of course, but all rely to a greater or lesser degree on the answers from morphology, syntax, and phonology, as do questions asked by psycholinguistics. Psycholinguistics is secondary for that reason. -- DLE ****************************** ****************************** Dan Everett Department of Linguistics University of Pittsburgh 2816 CL Pittsburgh, PA 15260 Phone: 412-624-8101; Fax: 412-624-6130 http://www.linguistics.pitt.edu/~dever From dever at VERB.LINGUIST.PITT.EDU Mon Jan 13 13:34:47 1997 From: dever at VERB.LINGUIST.PITT.EDU (Daniel L.
Everett) Date: Mon, 13 Jan 1997 08:34:47 -0500 Subject: What is this dispute anyway? In-Reply-To: Message-ID: John, You are quite right to be upset with linguists who want to wear the 'mantle of science' but do not have the same rigor in their research that other scientists would expect. Many of us have probably been guilty of this. And I think that you are right that Chomsky can be manipulative in his use of words like 'interest' and that his writings are often unquestioningly followed by many syntacticians to the detriment of the field. But discussions like those on this list can change minds. I have read more functional linguistics as a result of being impressed with answers and comments I have read on this list and I have felt the need to answer functional counteranalyses in my own research because I have realized that such analyses are indeed intuitively appealing and well thought out, so that if a particular formal analysis I am proposing is going to be fully convincing to me I must grapple with the functional issues. I spend some time doing this at different places in my new book, _Why there are no clitics_. I also have changed my attitudes positively towards a number of researchers whose work I might have ignored in the past, because of the reasonableness of their replies on this list. Maybe I should have been able to figure out the reasonableness and relevance of functionalist alternatives on my own, without this list, but this list has helped quite a bit. There are still a number of theoretical positions that I hold even more strongly as a result of reading this list (because I am more convinced than ever that the alternatives proposed are weak), but that too has been worthwhile. Lists like this provide a forum for discussing our basic assumptions in ways that refereed publications do not, for good reasons. So don't get too upset with us for discussing these issues of 'ideology' instead of empirical work. It can be beneficial. 
-- DLE ****************************** ****************************** Dan Everett Department of Linguistics University of Pittsburgh 2816 CL Pittsburgh, PA 15260 Phone: 412-624-8101; Fax: 412-624-6130 http://www.linguistics.pitt.edu/~dever From dquesada at CHASS.UTORONTO.CA Mon Jan 13 13:45:53 1997 From: dquesada at CHASS.UTORONTO.CA (Diego Quesada) Date: Mon, 13 Jan 1997 08:45:53 -0500 Subject: form versus meaning In-Reply-To: Message-ID: On Mon, 13 Jan 1997, Daniel L. Everett wrote: > The core questions of linguistics have to do with > phonology, syntax, and morphology ***(and that will get about 1% agreement*** on this list). Semantic, discourse, diachronic and other questions are > crucial, of course, but all rely to a greater or lesser degree on the > answers about morphology, syntax, and phonology, as do questions asked by > psycholinguistics. [stars mine, DQ] Can you lay the egg? Are you implying that the linguists in this list do not do syntax? Better, are you implying that syntax is just the sort of closed set game formalists practice? To J. Myhill: I "was linguistically raised a Chomskyan". During the passage from M.A. to Ph.D. it simply lost its attraction to work on something that, by just a matter of faith, had to be understood as being real. But I won't go into details about my "heresy". Suffice it to say that there are many like me. Is it also the other way? Diego From dever at VERB.LINGUIST.PITT.EDU Mon Jan 13 14:08:32 1997 From: dever at VERB.LINGUIST.PITT.EDU (Daniel L. Everett) Date: Mon, 13 Jan 1997 09:08:32 -0500 Subject: form versus meaning In-Reply-To: Message-ID: On Mon, 13 Jan 1997, Diego Quesada wrote: > > > On Mon, 13 Jan 1997, Daniel L. Everett wrote: > Can you lay the egg? I am afraid that the literary allusion here escapes me. > Are you implying that the linguists in this list do > not do syntax? Better, are you implying that syntax is just the sort of > closed set game formalists practice? Nope. I am not implying that at all.
-DLE From Carl.Mills at UC.EDU Mon Jan 13 17:51:08 1997 From: Carl.Mills at UC.EDU (Carl.Mills at UC.EDU) Date: Mon, 13 Jan 1997 12:51:08 -0500 Subject: Carson Schutze's book Message-ID: Dan Everett writes: "Carson Schutze has recently published a book on methodology (basically for formal linguists), which hopefully will contribute to change in formal theoretic work." Aside from reinventing the wheel MIT fashion and ignoring some earlier work that some of us semi-generative/not-quite-functionalist folks have done on methodology, Schutze's book is pretty good. While it covers some matters that others, e.g., Fritz Newmeyer and Tom Givon, have treated, Schutze's book is encouraging to those of us who have a sneaking hunch that looking closely at what we study and what we actually look at when we are studying it are not wasted activities. Best Carl Mills From bates at CRL.UCSD.EDU Mon Jan 13 18:41:54 1997 From: bates at CRL.UCSD.EDU (Elizabeth Bates) Date: Mon, 13 Jan 1997 10:41:54 -0800 Subject: form versus meaning Message-ID: My computer went down this morning in the middle of a message, which seems to have gone out anyway. On the questionable assumption that Half a Harangue is not better than none, I append the full text, with apologies for troubling everyone 1.5 times to make the same points. -liz In response to Dan Everett's message about the priority of linguistics, my research as a psycholinguist and developmentalist focuses primarily on morphology, syntax and the lexicon. In the psycholinguistic work, I am asking about how these form-meaning mappings (and form-form mappings) are processed in real time, and the results have what I believe to be crucial implications for our understanding of HOW THEY ARE REPRESENTED IN THE MIND. How is that different, other than by methodology, from the work that is conducted in "linguistics proper"? I simply reject Dan's premise that linguists are looking directly at language while the rest of us are squinting sideways.
And now let's talk for just a moment more about the use of grammaticality judgment as a way of staring directly at language....I'd like to make four quick points. The first two are NOT examples of psychology trying to have hegemony over linguistics. Rather, they pertain to strictures on methodology that hold (or should hold) in every social science, as well as in agriculture and industry, anywhere the investigator wants to draw inferences that generalize beyond the sample in question. The second two are more psychological in nature, but I believe that they have implications for the most central questions about structure. 1. Representativeness of the data base. If you want to know how your corn crop is going to fare, it is widely acknowledged in agriculture that it would be unwise to look at the four plants right outside your window (assuming this isn't your whole crop....). A truism of all empirical science is that the data base from which we draw our generalizations should be representative (either through random sampling, or careful construction of the data base a priori) of the population to which we hope to generalize. In research on language, this constraint holds at two levels: in the human subjects that we select (e.g. the people who are giving the grammaticality judgments) and in the linguistic materials we choose to look at (e.g. the sentences selected/constructed to elicit grammaticality judgments). These strictures are typically ignored, as best I can tell, in the day-to-day work of theoretical linguists who rely on grammaticality judgments as their primary data base. In fact, we have known since the 1970's that the grammaticality judgments made by linguists do not correlate very well with the judgments made by naive native speakers.
These differences have been explained away by stating that naive native speakers don't really know what they are doing, only linguists know how to strip away the irrelevant semantic, pragmatic or performance facts and focus their judgments on the structures they really care about. Which, in turn, presupposes a theory of the boundary conditions on those facts -- introducing, I should think, a certain circularity into the relationship between theory and data. In any case, by using a very restricted set of judges, the assumption that one can generalize to "native speaker competence" may be at risk. Instead of a theory of grammar, we may have a theory of grammar in Building 10. At this point I should stress that ALL the sciences studying language have problems of generalizability. In psycholinguistics, we want to generalize to all normal adult native speakers of the language in question, but most of our data come from middle class college sophomores. In developmental psycholinguistics, we want to generalize to all normal children who are acquiring this language, but are usually stuck with data from those middle class children willing to sit through our experiments, which means that we may have a theory of language in the docile child....In short, I am not proposing that only linguists have this problem, but I think the problem of generalizability may be more severe if grammaticality judgments come from only a handful of experts. 2. Reliability of the data base. If you weigh your child on your bathroom scales twice in a row, and get a reading of 50 pounds on one measurement and 55 pounds on the next, you need to worry about the reliability of your instrument, i.e. the extent to which it correlates with itself in repeated measurements. Reliability is a serious problem in every science, and is often the culprit when results don't replicate from one laboratory to another (or from one experiment to another in the same laboratory).
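The bathroom-scales notion of reliability can be made concrete. Below is a minimal sketch, with invented ratings rather than real judgment data, of test-retest reliability computed as the Pearson correlation between two passes of the same judges over the same sentences:

```python
def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var_x = sum((x - mx) ** 2 for x in xs)
    var_y = sum((y - my) ** 2 for y in ys)
    return cov / (var_x * var_y) ** 0.5

# Hypothetical acceptability ratings (1-7 scale) for ten sentences,
# collected at 4 p.m. and again at 6 p.m. -- illustrative numbers only.
four_pm = [7, 6, 2, 1, 5, 4, 6, 2, 3, 7]
six_pm = [7, 5, 3, 1, 6, 2, 6, 1, 5, 7]

print(f"test-retest reliability: r = {pearson(four_pm, six_pm):.2f}")
```

An instrument that correlates poorly with itself (r well below 1) cannot settle the subtle contrasts on which competing formal theories are often said to turn.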
My experience in graduate courses in syntax and other limited exposure to theoretical linguistics suggests to me that there may be a reliability problem in the use of grammaticality judgments. Even with the same restricted set of judges, with similar sentence materials (see above), a sentence that is ungrammatical at 4 p.m. may become grammatical by 6 o'clock, at the end of a hard day. To be sure, there are many kinds of errors that EVERYONE agrees about, EVERY time they are presented. But these clear cases are not the ones that drive the differences between formal theories, as best I can tell. Theoretical shifts often seem to depend on the more subtle cases -- the very ones that are most subject to the reliability problem. And of course, reliability interacts extensively with the representativeness problem described above (i.e. performance on one half of the target structures in a given category may not correlate very highly with performance on the other half, even though they are all supposed to be about the same thing...). 3. Timing. In a recent paper in Language and Cognitive Processes, Blackwell and Bates looked at the time course of grammaticality judgment, i.e. the point at which a sentence BECOMES ungrammatical for naive native speakers. The punchline is that there is tremendous variability over sentences and over subjects in the point at which a sentence becomes "bad", even for sentences on which (eventually) everyone agrees that a violation exists. For some error types, it is more appropriate to talk about a "region" in which decisions are made, a region that may span a lot of accumulating structure. This is relevant not only to our understanding of grammaticality judgment as a psychological phenomenon, but also to our understanding of the representations that support such judgments: if two individuals decide that a sentence is "bad" at completely different points (early vs.
late), then it follows that they are using very different information to make their decision, a fact that is surely relevant for anyone's theory of well-formedness. 4. Context effects. Finally, there are multiple studies showing that violations interact with each other and with the rest of the sentence and discourse context. A sentence that is "bad" in one context may be "good" in another, and a sentence that is "bad" with one set of lexical items may become "good" with a slightly different set, even though those two sets do not differ along what are supposed to be grammatically relevant conditions (i.e. we substitute a transitive verb for a transitive verb, an animate noun for an animate noun, and so forth). My point is NOT to denigrate linguistic methodology, because I have nothing to offer that is better. But I think the above problems should make us worry a lot about a "core" theory that is built exclusively out of one kind of data. To go back to my first point, in my first volley during this discussion (this should probably be my last, to round things out): we need all the constraints we can get, all the data we can get, all the methods we can find, and it is not yet the moment to declare that any of these methods or fields have priority over the others. -liz bates From dever at VERB.LINGUIST.PITT.EDU Mon Jan 13 18:47:57 1997 From: dever at VERB.LINGUIST.PITT.EDU (Daniel L. Everett) Date: Mon, 13 Jan 1997 13:47:57 -0500 Subject: form versus meaning In-Reply-To: <199701131841.KAA24899@crl.UCSD.EDU> Message-ID: Liz and all, I have already said that the methodology of theoretical linguistics 'needs fixed' (Pittsburghese). So I have no quarrel with anyone who wants to see improvement there.
But, dear Liz, if you really think that your studies are explicating the nature of morphological, phonological, or syntactic structure in anything like the detail or degree (or even quality) of morphology, phonology, or syntax proper, then the problem is deeper than I feared and is not going to be resolved on this list. Maybe next time you are in Pittsburgh we can talk about it. Or maybe you could read some morphology, syntax, or phonology with a view to asking whether you are discovering the basic structures and constructs anew in your studies or finding replacements for the basics. I am at a loss. I did not say that you were squinting at anything sideways. Nor do I mean to denigrate your field of study or results. But you simply are not studying the core nature of x when you study its implementations, acquisition, processing, etc. You are assuming it. I am sorry to have to be the one to break this news to you. -- Dan ****************************** ****************************** Dan Everett Department of Linguistics University of Pittsburgh 2816 CL Pittsburgh, PA 15260 Phone: 412-624-8101; Fax: 412-624-6130 http://www.linguistics.pitt.edu/~dever From ellen at CENTRAL.CIS.UPENN.EDU Mon Jan 13 20:39:10 1997 From: ellen at CENTRAL.CIS.UPENN.EDU (Ellen F. Prince) Date: Mon, 13 Jan 1997 15:39:10 EST Subject: form versus meaning In-Reply-To: Your message of "Mon, 13 Jan 1997 10:41:54 PST." <199701131841.KAA24899@crl.UCSD.EDU> Message-ID: I'd sworn I'd go back into lurkitude, but I can't help it -- these are very important issues to me. Have I missed something or are we talking about different things? I understood Dan to say that syntax, phonology, ... were 'core' and Liz to say there's no such distinction and that psycholinguists were as 'core' as syntacticians. And now we get a long argument from Liz entirely in terms of methodology. What is the relevance? Methodologies can vary and objects of study can vary and they can vary independently... 
I for one agree with everything Liz says about methodology and everything Dan says about 'coreness', which is why this troubles me. P.S.: >Instead of a theory of grammar, we >may have a theory of grammar in Building 10. Would that we did! I believe we'd only have a theory of Building 10's linguistic META-intuitions (conscious, accessible intuitions about their unconscious, inaccessible linguistic intuitions). If we actually had a theory of even ONE person's real grammar (i.e. real, unconscious, inaccessible linguistic intuitions), that would be just fine, as far as I'm concerned. From jaske at ABACUS.BATES.EDU Tue Jan 14 01:28:21 1997 From: jaske at ABACUS.BATES.EDU (Jon Aske) Date: Mon, 13 Jan 1997 20:28:21 -0500 Subject: methodologies {was Re: form versus meaning} Message-ID: This is related to the methodology strand of this schizophrenic conversation. I think that ALL of us need to learn to look at and analyze language in a number of different ways and using different methodologies, not just each one doing their own separate thing. For many years I was a typical syntactician. I looked at sentences in isolation and got real good at parsing and devising fancy trees. Then I started to look at a phenomenon, word order in Basque, which just didn't make sense in those terms. So I started making recordings and spending months and months transcribing just a few hours of tape, paying attention to how people actually speak, intonation, intonation units, pragmatic factors, etc, etc. I can sincerely tell you that that opened my mind. Now I look at language very differently. My conclusion: everybody should learn to look at language in as many ways as possible. Introspecting about it, devising experiments, etc. But I think that the first and primary way should be to look at language as it is actually used. It is different. Believe me.
And something else, you would probably have to go through dozens of hours of transcripts to come up with one example of some of the phenomena that fill many theoretical journals these days. The core stuff, you know. Anyway, I tried to keep it short. I sense that some people are starting to get tired. Jon -- Jon Aske jaske at abacus.bates.edu http://www.bates.edu/~jaske/ -- Lagun onak, ondu; gaiztoak, gaiztotu "A good friend makes one a better person, a bad one a worse one." From ellen at CENTRAL.CIS.UPENN.EDU Tue Jan 14 06:06:53 1997 From: ellen at CENTRAL.CIS.UPENN.EDU (Ellen F. Prince) Date: Tue, 14 Jan 1997 01:06:53 EST Subject: methodologies {was Re: form versus meaning} In-Reply-To: Your message of "Mon, 13 Jan 1997 20:28:21 EST." <32DAE135.319E@abacus.bates.edu> Message-ID: jon, i was nodding vigorously in agreement all thru your post on methodologies -- until i did a doubletake at this line: >And something >else, you would probably have to go through dozens of hours of >transcripts to come up with one example of some of the phenomena that >fill many theoretical journals these days. The core stuff, you know. i did a doubletake because i first read it as an argument for NOT using naturally-occurring data, which didn't gibe with the preceding paragraphs. then it hit me that you might have meant something quite different... are you perhaps saying that low frequency phenomena are any less crucial to the whole story than high frequency ones?!? presumably you wouldn't think much of a theory of hematology that left out type o blood (or whichever is the least frequent)? in fact, in syntax at least, it's mainly in the rarer forms that you see what is actually going on, structurally speaking... in any event, my real response to your line above is: thank goodness for the humongous online corpora we have today! and let's hope they're all fully parsed and tagged real soon. 
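Prince's point about parsed and tagged corpora can be sketched in a few lines. The tiny tagged "corpus," the simplified tagset, and the tag pattern below are all invented stand-ins; real work of this kind would run over a large tagged corpus such as Brown or the Penn Treebank:

```python
# A toy POS-tagged corpus: (word, tag) pairs with an invented tagset.
tagged = [
    ("this", "DET"), ("book", "N"), ("i", "PRO"), ("read", "V"),
    ("yesterday", "ADV"), (".", "PUNCT"),
    ("i", "PRO"), ("read", "V"), ("the", "DET"), ("paper", "N"),
    (".", "PUNCT"),
    ("beans", "N"), ("she", "PRO"), ("likes", "V"), (".", "PUNCT"),
]

def count_tag_pattern(corpus, pattern):
    """Count consecutive occurrences of a sequence of POS tags."""
    tags = [tag for _, tag in corpus]
    k = len(pattern)
    return sum(1 for i in range(len(tags) - k + 1)
               if tuple(tags[i:i + k]) == pattern)

# A crude proxy for a fronted object: a bare noun immediately
# preceding a pronoun + verb ("this book i read...", "beans she likes").
print(count_tag_pattern(tagged, ("N", "PRO", "V")))  # -> 2
```

Even pattern matching this crude, run over millions of tokens instead of fifteen, is what makes low-frequency constructions findable without Jespersen-scale perseverance.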
we are really the first 'generation' of linguists that CAN work on low frequency phenomena using naturally-occurring data (and without having to have the perseverance and energy of a jespersen). i actually find this the most exciting thing to have happened in my professional lifetime -- it has made my own methodology of choice actually feasible, plus it has enabled historical syntacticians to do some really sound work (given the total impossibility of intuitionistic or experimental data for dead lgs). From bralich at HAWAII.EDU Tue Jan 14 07:04:53 1997 From: bralich at HAWAII.EDU (Philip A. Bralich, Ph.D.) Date: Mon, 13 Jan 1997 21:04:53 -1000 Subject: methodologies {was Re: form versus meaning} Message-ID: At 03:28 PM 1/13/97 -1000, Jon Aske wrote: >This is related to the methodology strand of this schizophrenic >conversation. >My conclusion: everybody should learn to look at language in as many >ways as possible. Introspecting about it, devising experiments, etc. >But I think that the first and primary way should be to look at language >as it is actually used. It is different. Believe me. And something >else, you would probably have to go through dozens of hours of >transcripts to come up with one example of some of the phenomena that >fill many theoretical journals these days. The core stuff, you know. Well, I hate to sound a somewhat controversial note, especially when I agree with the majority of what is being said, but I think it should be borne in mind that, at some level, syntax represents organizational principles that may not be completely visible through usage, and because of this, it might be necessary to do a lot of the work in this area based on the judgements of the experts. If we limited ourselves to the mathematics that is used in daily life, we would have very primitive mathematics, and if linguists limit themselves to what occurs in very daily life, we would (and do) have very primitive linguistics.
Linguists might have a good description of daily life language, but they do not have a very good picture of the organizational principles that are represented in language. Now at some point we as linguists are going to have to find some measure of the value of our research to justify our existence in these days of budget cuts. If we as a discipline can present nothing to the outside world besides squabbling that is meaningless except to a few experts, our days are numbered. Especially if this squabbling looks as though we have yet to agree on who we are and what we do. So whether or not we resolve any of these disputes, it seems to me the onus should be on the discipline to bring forth some tangible result of the 30 years of research that has given us linguistics departments and jobs. I realize that we as insiders to this field can point to many contributions that have been made by linguists and others in this area; however, I doubt that there are many others outside the field (even among those who decide the future of departments) who have any idea what it is we do, even after 30 years. And if we cannot say who or what we are, how can they be expected to continue to fund us? Phil Bralich Philip A. Bralich, Ph.D. President and CEO Ergo Linguistic Technologies 2800 Woodlawn Drive, Suite 175 Honolulu, HI 96822 Tel: (808)539-3920 Fax: (808)539-3924 From cumming at HUMANITAS.UCSB.EDU Tue Jan 14 18:53:50 1997 From: cumming at HUMANITAS.UCSB.EDU (Susanna Cumming) Date: Tue, 14 Jan 1997 10:53:50 -0800 Subject: Language in daily life Message-ID: Bralich says, "if linguists limit themselves to what occurs in very daily life, we would (and do) have very primitive linguistics." We may have a very primitive linguistics -- I hope so, because that would imply we're going to know a lot more someday -- but that would certainly not be because we know about language that occurs in daily life.
Indeed, this is precisely the kind of language we know least about, because it is only very recently that linguists have had the tools they needed to look at it seriously. As Aske has pointed out, if you take everyday, interactional, spoken language seriously on its own terms -- that is, without editing it first into something that resembles written language -- you have to start by abandoning or at least fundamentally re-examining many of the basic concepts that underlie "traditional" linguistics, for instance "sentence". This is why some of us feel strongly that no matter what tools the linguist has in their tool-bag -- and sure, I agree that the more we have the better -- one of them is in fact a sine qua non: access to natural, interactional, spoken discourse. Experience shows that such data tends to lead to radically different conclusions at the levels both of description and of explanation. As far as the impact of linguistics on the outside world is concerned, surely it is knowing something about what people really do with language that is going to have practical applications that will impress non-linguists. A computational linguist in particular should appreciate this -- effective interfaces which use natural language need to be able to deal with actual speaker-hearers, not idealized ones, and they need to be able to take into account the dynamics of interaction, as much exciting work in computational linguistics is doing these days. Susanna From jaske at ABACUS.BATES.EDU Tue Jan 14 19:14:36 1997 From: jaske at ABACUS.BATES.EDU (Jon Aske) Date: Tue, 14 Jan 1997 14:14:36 -0500 Subject: methodologies {was Re: form versus meaning} Message-ID: I didn't mean to imply that we should restrict our analysis to that which is commonest or most trite, but rather that we should base our 'theoretical edifice' on it.
If we don't have a solid grasp on the most basic stuff, we will never understand the nature of that which is more complex and more rare (or why it is more rare). If we base our edifice on that which is unusual and relatively complex we are likely to misunderstand that which is basic (to speakers and to language/s). Jon -- Jon Aske jaske at abacus.bates.edu http://www.bates.edu/~jaske/ -- Munduan nahi duenak luzaroan bizi, oiloekin ohera eta txoriekin jagi "If you want to live long, go to bed with the chickens and get up with the birds." From bralich at HAWAII.EDU Tue Jan 14 20:05:47 1997 From: bralich at HAWAII.EDU (Philip A. Bralich, Ph.D.) Date: Tue, 14 Jan 1997 10:05:47 -1000 Subject: Language in daily life Message-ID: At 08:53 AM 1/14/97 -1000, Susanna Cumming wrote: >We may have a very primitive linguistics ... description and of explanation. Good point. >As far as the impact of linguistics on the outside world is concerned, >surely it is knowing something about what people really do with >language that is going to have practical applications that will impress >non-linguists. A computational linguist in particular should appreciate >this -- effective interfaces which use natural language need to be able to >deal with actual speaker-hearers, not idealized ones, and they need to be >able to take into account the dynamics of interaction, as much exciting >work in computational linguistics is doing these days. Things are beginning to show up on computers and in the marketplace, but I am skeptical about the state of the art. There are more patches and fixes than there are real tools, to my way of thinking. Phil Bralich Philip A. Bralich, Ph.D. President and CEO Ergo Linguistic Technologies 2800 Woodlawn Drive, Suite 175 Honolulu, HI 96822 Tel: (808)539-3920 Fax: (808)539-3924 From ellen at CENTRAL.CIS.UPENN.EDU Tue Jan 14 20:27:16 1997 From: ellen at CENTRAL.CIS.UPENN.EDU (Ellen F.
Prince) Date: Tue, 14 Jan 1997 15:27:16 EST Subject: Language in daily life In-Reply-To: Your message of "Tue, 14 Jan 1997 10:53:50 PST." Message-ID: huh? speak for yourself, please. i certainly have NOT found that spoken language is very different from what one would think. the main 'surprises' i have found are: 1. certain forms that have been claimed to be ungrammatical do in fact seem to be well-formed but in very constrained contexts (e.g. for english: resumptive pronoun relative clauses where no island violations are involved, topicalized indefinites, nonrestrictive _that_...). 2. certain claims about genre/style/... distribution are unfounded (e.g. the claim that left-dislocation is characteristic of 'unplanned' speech, the 20-yr-ago claim that only yinglish speakers could topicalize certain things...). 3. most claims about topichood and focus. i certainly have found no evidence to question the notion 'sentence' -- au contraire -- just try to account for how entities that are introduced by quantified expressions are referred to subsequently without a notion of 'clause', much less sentence! and none of the computational linguists whose work i find interesting have abandoned the notion 'sentence' either... i hope people are more careful making generalizations about their data than some people seem to be when making generalizations about 'what funknetters believe' or what 'people who work on interactional, spoken language find'... Susanna Cumming wrote: >Bralich says, > >"if linguists limit themselves to what occurs in very daily life, we would >(and do) have very primitive linguistics." > >We may have a very primitive linguistics -- I hope so, because that would >imply we're going to know a lot more someday -- but that would certainly >not be because we know about language that occurs in daily life. 
Indeed, >this is precisely the kind of language we know least about, because it is >only very recently that linguists have had the tools they needed to look >at it seriously. As Aske has pointed out, if you take everyday, >interactional, spoken language seriously on its own terms -- that is, >without editing it first into something that resembles written language -- >you have to start by abandoning or at least fundamentally re-examining >many of the basic concepts that underlie "traditional" linguistics, for >instance "sentence". This is why some of us feel strongly that no matter >what tools the linguist has in their tool-bag -- and sure, I agree that >the more we have the better -- one of them is in fact a sine qua non: >access to natural, interactional, spoken discourse. Experience shows that >such data tends to lead to radically different conclusions at the levels >both of description and of explanation. > >As far as the impact of linguistics on the outside world is concerned, >surely it is knowing something about what people really do with >language that is going to have practical applications that will impress >non-linguists. A computational linguist in particular should appreciate >this -- effective interfaces which use natural language need to be able to >deal with actual speaker-hearers, not idealized ones, and they need to be >able to take into account the dynamics of interaction, as much exciting >work in computational linguistics is doing these days. > >Susanna From pesetsk at MIT.EDU Tue Jan 14 21:52:41 1997 From: pesetsk at MIT.EDU (David Pesetsky) Date: Tue, 14 Jan 1997 16:52:41 -0500 Subject: three final replies In-Reply-To: Message-ID: A CLARIFICATION: At 10:42 AM +0200 1/13/97, John Myhill wrote: > To David P.: You write: > > Discussions of these alternatives can and do change minds > (mine, for instance). > > You've changed your mind about the autonomy thesis? You used to not believe > (didn't used to believe?) in autonomous syntax?
After you got out of > graduate school? Did you put this in print anywhere? > And you got a job at MIT? Am I understanding you correctly? Please clarify. > (I'm not being facetious, I really am interested in > this) Sorry, nothing that exciting. What I said was the following. The disagreement that has occupied us the most here is really a difference over hunches, interests and research strategies. So there can't be much question of true and false, nor is the notion "changing one's mind" well-defined for hunches and interests. On the other hand, this "hunch-level" disagreement does produce analyses and discussions of particular phenomena which, not surprisingly, can be true and false, and can be matters on which one changes one's mind. I have been in countless discussions about whether some phenomenon is properly attributable to a discourse factor, to a property of sentence-internal syntax, or some mixture of these. On such matters, people's opinions should, can, and do change in response to reasoned discussion and argument. That's what I meant. ***************** ON THE NOTION "INTERESTING": I wrote: > The problems > arise *after* we've offered our varying interpretations of the data. > Do we defend our interpretation with specious propositions like 3-7? > Or do we try to discover the truth? To which John Myhill replied: > I agree with you in principle, but unfortunately that is not the tone the > discussion (such as it is) has taken. To take the most blatant example, > Chomsky's favorite 'defense' of whatever approach he feels like pursuing at > the moment has always been that it is 'interesting,' (your specious > proposition #3), [...] [Chomsky quotes omitted] > > Such examples could be multiplied many times over [...] 
> This is particularly significant, and > worrying, because the great majority of Chomsky's followers appear to be > similarly basing their choice of approach on what Chomsky finds > 'interesting' as well, to judge by the general lack of serious effort to > give more convincing arguments for this approach. I assume that you (David) > yourself are thinking something similar about functionalists, so this > appears to be a general property of the field. I think it's a general property of *people*. If we're given the opportunity, we do what we find most interesting. Then we act as though "interesting" is an argument for something. Sure Chomsky's guilty of this. Who isn't? It's even argued that the false argument serves the useful purpose of focusing research, though that, of course, is a two-edged sword (to mix a metaphor). The trick is to learn how to see where "interesting" is being used as an argument, discard that non-argument without rancor, and examine what's left in a serious fashion. On a related issue, do consider the possibility that at least some of the people you call "Chomsky's followers" look like followers because they share (some of) his interests -- rather than sharing his interests because they're followers. ***************** FINALLY: Jon Aske wrote (two days ago, sorry for the delay): > David, I'm sorry if I put words into your mouth. I was going by my > interpretation (corroborated by many others) of what people in your > school, not necessarily you yourself, have been saying for the last few > decades, at least until the last time I checked. Thanks for your remarks. It's an easy but unproductive shortcut to criticize X for what Y says Z (who went to graduate school with X) thinks. But it's usually also unfair. The issue at hand was a certain characterization of work on functional categories. I'd like to address that further, but I don't think I can do that here and now. 
A book currently being written by Guglielmo Cinque may soon be the best place to look for good work (in my linguistic neck of the woods) on the topic. But that's not fully written yet. He cites lots of the typology literature, by the way. > Perhaps my > interpretation was erroneous. If so, I am quite willing to stand > corrected. I think that that is what this discussion (I don't dare call > it "dispute") is all about. I feel, and I'm sure many others do too, > that we need a lot more communication in our field. It may turn out > that we agree on more things than we ever thought we did. I suspect the opposite. I suspect that we *disagree* on more things than we ever thought we did. But what's wrong with that, so long as discussions address the real disagreements -- not specious ones rooted in primeval animosities or based on logic like proposition 7 of my previous message? (Anyone look it up?) Thanks for the discussion, David Pesetsky ************************************************************************* Prof. David Pesetsky, Dept. of Linguistics and Philosophy 20D-219 MIT, Cambridge, MA 02139 USA (617) 253-0957 office (617) 253-5017 fax http://web.mit.edu/linguistics/www/pesetsky.html From ward at PG-13.LING.NWU.EDU Wed Jan 15 18:44:36 1997 From: ward at PG-13.LING.NWU.EDU (Gregory Ward) Date: Wed, 15 Jan 1997 12:44:36 CST Subject: three final replies In-Reply-To: ; from "David Pesetsky" at Jan 14, 97 4:52 pm Message-ID: david pesetsky writes: > I have been in countless discussions about whether some phenomenon is > properly attributable to a discourse factor, to a property of > sentence-internal syntax, or some mixture of these. On such matters, > people's opinions should, can, and do change in response to reasoned > discussion and argument. That's what I meant. but, alas, they often don't. 
case in point: in a recent language paper (vol 71:722-42), betty birner and i present evidence for a phenomenon being motivated by discourse rather than by 'sentence-internal syntax'. the phenomenon in question is the so-called definiteness effect in postverbal position in english existential there-sentences. however, one still sees many references to such an effect (without discussion or justification) in the formal syntax literature. now it is of course possible that one could come up with a strictly syntactic account of the (indisputable) occurrence of definite postverbal NPs in this construction (although i doubt it :-) ), but until that time, unqualified references to a syntactically-motivated 'definiteness effect' should simply disappear. but that hasn't happened. in fact, the discourse accounts (and there are several) often aren't even cited (not even in a dismissive "but cf." kind of way). this is the state of affairs that is so frustrating to those of us, inter alia, who believe in the existence of both discourse and syntax, and who try to listen to what practitioners of both have to say. gregory -- Gregory Ward Department of Linguistics Northwestern University 2016 Sheridan Road Evanston IL 60208-4090 e-mail: gw at nwu.edu tel: 847-491-8055 fax: 847-491-3770 www: http://www.ling.nwu.edu/~ward > > ***************** > > ON THE NOTION "INTERESTING": > > I wrote: > > > The problems > > arise *after* we've offered our varying interpretations of the data. > > Do we defend our interpretation with specious propositions like 3-7? > > Or do we try to discover the truth? > > To which John Myhill replied: > > > I agree with you in principle, but unfortunately that is not the tone the > > discussion (such as it is) has taken. To take the most blatant example, > > Chomsky's favorite 'defense' of whatever approach he feels like pursuing at > > the moment has always been that it is 'interesting,' (your specious > > proposition #3), [...]
> > [Chomsky quotes omitted] > > > > Such examples could be multiplied many times over [...] > > This is particularly significant, and > > worrying, because the great majority of Chomsky's followers appear to be > > similarly basing their choice of approach on what Chomsky finds > > 'interesting' as well, to judge by the general lack of serious effort to > > give more convincing arguments for this approach. I assume that you (David) > > yourself are thinking something similar about functionalists, so this > > appears to be a general property of the field. > > I think it's a general property of *people*. > > If we're given the opportunity, we do what we find most interesting. Then > we act as though "interesting" is an argument for something. > > Sure Chomsky's guilty of this. Who isn't? It's even argued that the false > argument serves the useful purpose of focusing research, though that, of > course, is a two-edged sword (to mix a metaphor). > > The trick is to learn how to see where "interesting" is being used as an > argument, discard that non-argument without rancor, and examine what's left > in a serious fashion. > > On a related issue, do consider the possibility that at least some of the > people you call "Chomsky's followers" look like followers because they > share (some of) his interests -- rather than sharing his interests because > they're followers. > > ***************** > FINALLY: > > Jon Aske wrote (two days ago, sorry for the delay): > > > David, I'm sorry if I put words into your mouth. I was going by my > > interpretation (corroborated by many others) of what people in your > > school, not necessarily you yourself, have been saying for the last few > > decades, at least until the last time I checked. > > Thanks for your remarks. It's an easy but unproductive shortcut to > criticize X for what Y says Z (who went to graduate school with X) thinks. > But it's usually also unfair.
> > The issue at hand was a certain characterization of work on functional > categories. I'd like to address that further, but I don't think I can do > that here and now. A book currently being written by Guglielmo Cinque may > soon be the best place to look for good work (in my linguistic neck of the > woods) on the topic. But that's not fully written yet. He cites lots of > the typology literature, by the way. > > > Perhaps my > > interpretation was erroneous. If so, I am quite willing to stand > > corrected. I think that that is what this discussion (I don't dare call > > it "dispute") is all about. I feel, and I'm sure many others do too, > > that we need a lot more communication in our field. It may turn out > > that we agree on more things than we ever thought we did. > > I suspect the opposite. I suspect that we *disagree* on more things than > we ever thought we did. But what's wrong with that, so long as discussions > address the real disagreements -- not specious ones rooted in primeval > animosities or based on logic like proposition 7 of my previous message? > (Anyone look it up?) > > > Thanks for the discussion, > David Pesetsky > > > > > > > ************************************************************************* > Prof. David Pesetsky, Dept. of Linguistics and Philosophy > 20D-219 MIT, Cambridge, MA 02139 USA > (617) 253-0957 office (617) 253-5017 fax > http://web.mit.edu/linguistics/www/pesetsky.html > From maj at COCO.IHI.KU.DK Fri Jan 17 15:34:20 1997 From: maj at COCO.IHI.KU.DK (Maj-Britt Mosegaard Hansen) Date: Fri, 17 Jan 1997 16:34:20 +0100 Subject: Book review Message-ID: As review editor for the _Revue romane_, I'm looking for someone who'd be willing and able to write a 1-3 page review *in French* of the following volume: Kronning, Hans. 1996. Modalite, cognition et polysemie : semantique du verbe modal 'devoir'. Uppsala: Acta Universitatis Upsaliensis. 
If anyone out there would like to undertake this task, please reply to maj at coco.ihi.ku.dk Thanx in advance! Maj-Britt Mosegaard Hansen Dept. of Romance Languages U. of Copenhagen From edith at CSD.UWM.EDU Fri Jan 17 23:50:16 1997 From: edith at CSD.UWM.EDU (Edith A Moravcsik) Date: Fri, 17 Jan 1997 17:50:16 -0600 Subject: form without meaning Message-ID: ===> To LIZ BATES: Liz, you suggested (Saturday, January 11) that the claim that syntactic form could be described independently of meaning assumed that syntactic classes were of the classical type, with strict category membership. You further pointed out that syntactic categories were not in fact of this sort and that their fuzzy nature was difficult to explain without reference to their meanings. I agree that, for EXPLAINING the existence of natural syntactic categories, we may have to resort to studying their meanings. This does not mean, however, that natural classes cannot be discovered and described on strictly formal grounds; in this respect, I agree with Dan Everett's (same-day) response to you. I found your later message on the four problems in obtaining grammaticality judgments very interesting and instructive! ===> to JON ASKE: Jon, in your response to my response to your original posting (Sunday, January 12), you wrote this: "I just don't see why we would want to restrict our linguistic analysis to the more formal aspects of constructions and ignore their function/meaning pole, their history, and so on. .... To me, studying the formal aspects of such constructions without looking at what they are made for, how they are made, etc. *as a matter of principle*, just does not make sense. I came to the early realization that these constructions should not be studied as merely formal operations. These constructions exist for a purpose, and their form reflects the function that they arose for in the first place, even if they have picked up additional baggage along the way.
And to me that is the most interesting part of analysing language/grammar." I agree with almost all of this. In particular, I agree that a/ linguistic analysis should not be restricted to form, with function ignored (in my contribution, I did not mean to suggest the opposite) b/ constructions exist for a purpose and figuring out the extent and the ways they reflect function is the most interesting part of analysing grammar. Where I may not agree is that studying the form of constructions without looking at their function makes no sense. It really depends on what you mean by "studying". If you mean "giving a complete account", then I fully agree: describing the functions of linguistic form and how they correlate with form is part of a complete account. But if by "studying" you mean "restricting momentary attention" (where "momentary" may be taken on a grand scale, possibly extending to the lifetime of a linguist), then I cannot agree. I see the study of linguistic form as a logically necessary step in arriving at a complete account of linguistic constructions since, as Talmy Givo'n pointed out (Saturday, January 11), if a complete account involves specifying a (cor)relation between form and function, this presupposes that we have an independent characterization of both form and function. This point does not have to do with research schedule: I am not proposing that all of form needs to have been discovered before we can begin to look at function. The two lines of research usually go in tandem I believe. Rather, the point has to do with the logical priority of a description of form and a description of meaning over an account of the relationship between the two.
At the very real risk of battling a straw man or beating a dead horse (and without attributing this extreme view to Jon Aske), let me note that what the idea - when taken in a literal sense - that the form of functional objects cannot be described unless one knows the associated functions would amount to is that the usual descriptive tools we use for characterizing the form of a non-functional object would simply fail us in the case of functional objects: we would have to hold off on their formal description until we found out about their functions. This would mean, for example, the following:
- One could describe the formal structure of a string of beads a child would create with no purpose in mind but not the shape of a rosary - unless one knew that the beads stood for various prayers.
- One could describe the chemical composition of naturally occurring materials but not that of synthetic drugs, unless we knew what each component was supposed to contribute to the intended healing effect.
- One could describe the form of a musical composition free of designated motifs with explicit meanings but not the form of a Wagner opera - unless one knew what each motif stood for.
- One could describe the random hand-flailings of an infant, but not the hand gestures of body language or sign language, unless one knew the meanings of the gestures.
- One could describe the form of a piece of rock naturally shaped as a hammer but not the shape of a real hammer, unless one recognized it as an instrument for pounding in nails.
- One could describe the form of the Easter Island statues just in case they were meant to be non-functional; if they were meant to be functional, no description would be possible unless one learnt what the functions were.
- One could describe the form of non-symbolic carvings on a rock surface but not if those carvings happened to be samples of writing; in which case we would have to discover what each symbol stood for before being able to characterize the forms of the symbols. Such descriptive impasses caused by lack of knowledge about the function of the object to be described clearly do not arise. Where knowledge about function comes in for the analysis of form is on the explanatory, rather than descriptive, level: in helping to explain why the form of a functional object is the way it is. Best - Edith ************************************************************************ Edith A. Moravcsik Department of Linguistics University of Wisconsin-Milwaukee Milwaukee, WI 53201-0413 USA E-mail: edith at csd.uwm.edu Telephone: (414) 229-6794 /office/ (414) 332-0141 /home/ Fax: (414) 229-6258 From jaske at ABACUS.BATES.EDU Sat Jan 18 04:00:04 1997 From: jaske at ABACUS.BATES.EDU (Jon Aske) Date: Fri, 17 Jan 1997 23:00:04 -0500 Subject: form without meaning Message-ID: Edith (and all), Thank you for your very interesting and thought-provoking posting. It made me think and here are the results, in condensed form, of my thoughts about all this. A lot of the things I am arguing against do not necessarily follow from what you said, that is I am not saying that that is your position, though it may be an extreme version of your basic position. I am just trying to understand why anybody would want to exclude function and explanation and non-formal factors from description at all. None of this is meant to be taken personally by anyone. (Paraphrasing what David said the other day, I do not believe that people who disagree with me have "character flaws"). I agree with you that the description of the data is our primary goal.
Constructions, no matter how well motivated, may have a great deal of idiosyncratic characteristics which have to be described and, as long as we don't understand the functions involved, all we can do is describe the facts in as much detail as we can and draw as many generalizations as we can, facts and generalizations about form, *and* about meaning, about usage, about anything and everything which correlates with form and any variations in form. And, in many cases at least, these descriptions are but steps that we follow in order to understand the constructions and what they do and why they are the way they are. That is, while we describe, we can, and should, start making guesses as to what motivates the constructions, diachronically *and* synchronically. In my work I deal with speech act constructions and here I find that discourse pragmatics and information structure are central to understanding these constructions: their form and their uses. Notions such as topic, focus, topicality and "focality", contrast, emphasis, scope, etc., etc. And the motivations I see here are not just diachronically sedimented on the constructions in question, and irrelevant to the synchronic description of the constructions. I believe that the motivations for these constructions are to a large extent synchronically transparent and real for the speakers as well, that is, that they are part of the speakers' representations of those constructions. At least I want to find out to what extent they are synchronically motivated for speakers. I think this is an important and major thing that we should attempt to do. Surely, these categories and principles, which are iconically reflected more or less transparently in different aspects of the constructions, are also mixed with other more or less arbitrary aspects, the result of different constraints and extensions added to these constructions throughout their history.
All that has to be described too, and not swept under the rug just because we don't understand it, or dismissed as uninteresting just because we cannot understand it. So, for instance, I do not believe that we can describe (in any meaningful way) the English passive construction, or the dative shift construction, left dislocation, right dislocation, do-support, inversions of different types (from canonical order), question constructions, and a great number of other constructions, all major constructions, all constructions in which discourse pragmatics properties and roles are involved, without the function (discourse-pragmatics) of these constructions and the elements of these constructions playing a central part in those descriptions. The passive construction, for instance, does not exist somewhere in some real or ideal grammatical realm and then it is put to some arbitrary use because it happens to be there, which is how I feel that some linguists approach this construction. The passive construction exists to perform a function, or a set of functions in different contexts, and it has the form it does to a great extent because of the functions that it is designed to express. I believe that that is central to the construction, and not an ancillary issue which can be left for other investigators to worry about. When we describe a construction we have to describe the details of the form *and* semantic and pragmatic characteristics of the constructions, the patterns of use, and so on and so forth, and along with all these the functions of the constructions and the possible reasons for their form, whether they are only diachronic or partially or fully synchronic as well. I just don't see how we can possibly ignore all these things while we are describing some aspect of a language. And we had better look at other languages along the way to see how the functions performed by the passive constructions in English are performed in them.
And if they don't have an equivalent construction to the passive construction then we should try to figure out why. And if there is a passive-like construction and it's used differently, we should figure out how they are used differently and attempt to understand why. What parts of those languages' systems pick up the slack, and so on and so forth. Here I feel compelled to bring up an analogy from another science and I hope I won't be unduly chastised for my boldness and my ignorance. I just can't imagine that a biologist, for example, would attempt to describe a particular organ in some organism without at the same time attempting to understand its function (what it's for), how it may have gotten to be the way it is, what it does, how it does it, how it interacts with other organs of the body, and how it compares with the way other organisms perform those functions. Surely a lot of descriptive work will have to be done before the organ is fully understood, or understood as well as it can be understood, but I doubt that functional considerations will be ignored during the descriptive stage, or, even worse, completely ignored as unworthy of study and uninteresting. Surely we'll come across things such as the appendix which doesn't seem to have a function (though it may have at one time), surely we'll come across things that we don't understand, and things that we will never understand, but how can we even dismiss the search for understanding from the start? I think I'll stop here. I do tend to get carried away. I am just trying to understand. If I got it all wrong, or some of it wrong, I want to know. Best, Jon -- Jon Aske jaske at abacus.bates.edu http://www.bates.edu/~jaske/ -- Eguzkia nora, zapiak hara "Where the sun is, that's where you should hang your clothes."
From edith at CSD.UWM.EDU Sat Jan 18 20:22:24 1997 From: edith at CSD.UWM.EDU (Edith A Moravcsik) Date: Sat, 18 Jan 1997 14:22:24 -0600 Subject: LT Message-ID: "LINGUISTIC TYPOLOGY" - a new journal The first issue of _Linguistic Typology_ is about to appear at Mouton de Gruyter. It contains articles by Scott DeLancey, Simon Kirby, Frans Plank & Wolfgang Schellinger as well as reviews by Bernard Comrie, Edith Moravcsik, and Michael Noonan. _LT_ - the publication of the Association for Linguistic Typology (ALT) - will be published in three issues per year with a total of approximately 400 pages. Submissions for subsequent volumes are encouraged. Studies of particular parameters or clusters of parameters of typological variation, papers on the theory and methodology of typology, as well as brief reports on typological implications, language or language family profiles, topical bibliographies, and items on the history of typology are all welcome. For subscription, submission, and further information, please contact the Editor-in-Chief, Frans Plank. (MAIL: Sprachwissenschaft, Universitaet Konstanz, Postfach 5560 D175, D-78434 Konstanz, Germany; FAX: 49-7531-882741; E-MAIL: frans.plank at uni-konstanz.de) From edith at CSD.UWM.EDU Sat Jan 18 20:09:49 1997 From: edith at CSD.UWM.EDU (Edith A Moravcsik) Date: Sat, 18 Jan 1997 14:09:49 -0600 Subject: call for papers, ALT Message-ID: ALT II - CALL FOR PAPERS Abstracts are being invited for the second meeting of the Association for Linguistic Typology (ALT II), to be held at the University of Oregon, Eugene, from September 11 to September 14, 1997 (Thursday through Sunday). Given that this will be the first meeting of ALT in the US, the membership requirement for presenters has been waived. Please feel free to send in an abstract regardless of whether you are a member of ALT or not. Please direct SIX copies of a one-page abstract to the chair of the program committee, Prof.
Masayoshi Shibatani (address below), to reach him no later than MARCH 1, 1997. A second page (six copies) may be attached to the abstract listing data. E-mail submissions are also accepted. The program committee will, by May 1, 1997, convey its decision to those submitting abstracts. Each abstract should include the author's (or authors') name and mailing address (just one mailing address for multiple authors) including telephone, fax, and e-mail address as available. Each abstract should specify the amount of time requested for the presentation, including discussion, which may be 30, 45, or 60 minutes. You may also submit abstracts for symposia, in which case give the names of participants and the amount of time requested (which may, of course, exceed 60 minutes). Address for mailing abstracts: Masayoshi Shibatani Faculty of Letters, Kobe University 1-1, Rokkodai-cho, Nada-ku Kobe 657, Japan E-mail: matt at icluna.kobe-u.ac.jp From jaske at ABACUS.BATES.EDU Sat Jan 18 20:16:10 1997 From: jaske at ABACUS.BATES.EDU (Jon Aske) Date: Sat, 18 Jan 1997 15:16:10 -0500 Subject: form without meaning Message-ID: I wanted to add something to what I said yesterday. I have received one private response to that posting, in which Edith's rosary analogy was claimed to be instrumental in explaining the need to separate form from function, and I would like to explain why I do not think that that analogy is valid. If someone unfamiliar with rosaries came across one, s/he could describe its form, but would be missing a big part of the picture if the description omitted the function of the rosary. It would be a very limited description, one which we would accept if we had no other choice, but not one that I would be satisfied with. Furthermore, and this is crucial, the rosary, as an artifact, has a formal (physical) existence in the world apart from its function, but I don't think that linguistic constructions do. 
I believe that constructions exist in the *minds* of speakers and that they are learned and represented as forms with functions attached to them. Thus, to the extent that we are trying to describe linguistic 'competence' (in addition to 'performance' and the relation between the two) I think that we have to describe constructions in their totality. I do not believe, for instance, that in Basque, and in other languages in which the order of constituents in asserted clauses depends primarily on pragmatic characteristics (roles and statuses) of the ideas represented by those constituents, one can describe such ordering without resorting to those pragmatic categories and statuses. I believe that this has been recognized even by formalists, who have resorted to (formal?, functional?) categories such as [+FOCUS] or [+TENSE] or [+INFL] to account for the facts (that is what David was talking about the other day). That is a significant step forward I believe, but the way it is implemented seems to me to be more of an attempt to salvage a faulty model of linguistic units/systems than anything else. One more thing. I believe that all so-called structural units in language are cognitive units. What holds things together in any construction are semantic and pragmatic (informational) 'forces' 'binding' those elements together. What some people call a VP (a syntactic unit), for instance, exists only to the extent that there are semantic and pragmatic reasons for holding some elements of the clause together, while excluding others (the clause itself is a cognitive unit par excellence).
But, unless we view those 'forces' as being functional (semantic and pragmatic) in nature and having a variable nature (non-referential objects, for instance, are bound to verbs more strongly than referential and topical ones), and as being overridden under certain circumstances (some clauses do not have topics), we will end up believing silly things, such as that some languages have VPs while others don't. And it just isn't that simple. Anyway, I'm going back to lurking too, just like everyone else. Unless someone gets me going again, of course. Have a great weekend, Jon -- Jon Aske jaske at abacus.bates.edu http://www.bates.edu/~jaske/ -- Hiru belarritan igaren hitz isila, orotan lasterka dabila "A secret that has been through three ears, won't remain a secret much longer." From TGIVON at OREGON.UOREGON.EDU Sat Jan 18 19:57:46 1997 From: TGIVON at OREGON.UOREGON.EDU (Tom Givon) Date: Sat, 18 Jan 1997 11:57:46 -0800 Subject: etc. Message-ID: 1-17-97 Dear FUNKfriends, Now that the traffic has subsided somewhat, I want to take the opportunity and tell you how valuable I thought the last burst of discussion--thanks to Phil Bralich who, rather unintentionally, I suspect, wound up starting it--really was. I saw it as a beautiful example of communal thinking. I have always thought we started FUNKNET just for that (rather than for book and conference announcements, however useful those may be). So to me this discussion demonstrated that FUNKNET can serve its intended purpose -- even if it does it only once a year. The following comments are not intended thus as grabbing the last word, but rather as part of this progressive refinement of our communal thinking. I thought Matt Dryer and Liz Bates defined the two poles of our discussion most succinctly. What I would like to suggest here is that the two poles of our practice of linguistics -- theory and methodology -- are indeed intimately connected.
Matt suggested two "theses" of our approach to structure: (a) STRONG: "grammatical structure strongly correlates to semantic and pragmatic functions" (b) WEAKER: "grammatical structure exists" It might perhaps be useful to point out that **logically** a belief in A entails a belief in B. That is, if (a) is asserted, (b) must be presupposed. But, in the same breath, (c) must also be presupposed: (c) "semantic and pragmatic functions exist" Edith Moravcsik's latest comments indeed pursued this logic: If you believe in (a), then you must define **both** structure and function independently of each other. That is, in my terms --by different methods. Otherwise, all you are left with is **a tautology**. On the methodological end, I think Liz Bates (and Lise Menn) expressed our need for multiple methodology rather elegantly. But notice that, among other reasons, the logic of (b) and (c) above being presupposed by our strong belief in (a) already points at the need for multiple methodologies. We obviously need methods that probe into structure QUA structure. And the traditional -- Bloomfieldian, Chomskian -- methods of analyzing clause structure and morphology come in handy precisely for this reason. Indeed, I cannot imagine studying and describing the grammar of a new language I work on **without** such methods. Have you tried recently to go **directly** to studying discourse-pragmatic functions? And are your results yielding form-function correlations? For people like Fritz Newmeyer and Dave Pesetsky, whose contribution to our discussion was truly valuable, the terrain might look like this (and do forgive me for the hypothetical nature of (1)-(4) below): (1) We certainly see some correlations of the (a) type; (2) But, they are either sporadic or never 100%.
(3) Therefore, to be really rigorous and not go out on a (frail) limb, we cannot abide by the strong assertion (a); we will therefore confine our investigation of syntax to what is obvious -- obvious from using **only** the traditional clause-level methodology. (4) So, we will only describe structures, and develop an independent theory of syntactic structures. Now, notice that the vast majority of communicative functions do not reveal themselves, in any obvious/intuitive way, if you confine yourself to the traditional methodology, i.e. to the study of isolated clauses outside their discourse context. So much of the doubt expressed by Fritz and David about the partiality and non-systematicity of form-function pairing (i.e. Matt's principle (a)) must indeed be traced to their reluctance to go beyond the traditional clause-level methodology. This is in no way a **logical** necessity, but rather a pragmatic methodological consequence. Just as you cannot get at structure without the appropriate methods, so you cannot get at communicative-pragmatic functions without the appropriate methodology; that is, without studying what grammar does in actual communication. What has always baffled me, I suppose--ever since reading and heartily approving of Chomsky's (and Postal/Katz's and Fillmore's) drift, between 1962 and 1965, to **semantically-relevant** (and thus more abstract) deep "syntactic" structure--is the seeming reluctance of generative linguists to take the rather obvious next plunge. Propositional semantics was licensed by Aspects (1965) as being strongly correlated to syntax, i.e. to "deep structure". So why not take the obvious next plunge and admit that the "stylistic transformations", those Joe Emonds characterized in his dissertation as "root transformations", are just as relevant to syntax (and syntax relevant to them) as the "triggered" transformations (Joe's "structure-preserving" transformations)?
In other words, if you've already opened the door of syntax to semantics, why not open it further to pragmatics? Here I think is where, inadvertently, implicitly, methodology rears its sweet head. If you don't practice the methodology of looking for what syntactic structures do in communicative context, then pragmatic function remains rather invisible to you. You sense its existence, but it remains mysterious, unwieldy and highly suspect. You approach it with the same inborn skepticism that Bloomfield and Carnap and the Positivists did, as "stylistic" intuition that cannot be captured **rigorously** by science. On reflection then, what we've got here is a fairly transparent case, leastwise to me, of the methodological tail continuing to wag the theoretical dog. With apologies for the long-windedness, TG From edith at CSD.UWM.EDU Sat Jan 18 22:24:08 1997 From: edith at CSD.UWM.EDU (Edith A Moravcsik) Date: Sat, 18 Jan 1997 16:24:08 -0600 Subject: call for papers, ALT Message-ID: Forwarded message: >>From edith Sat Jan 18 14:09:49 1997 From: Edith A Moravcsik Message-Id: <199701182009.OAA28743 at alpha1.csd.uwm.edu> Subject: call for papers, ALT To: edith, funknet at rice.edu, linguist at tamvm1.tamu.edu Date: Sat, 18 Jan 1997 14:09:49 -0600 (CST) X-Mailer: ELM [version 2.4 PL24alpha3] Content-Type: text ALT II - CALL FOR PAPERS Abstracts are being invited for the second meeting of the Association for Linguistic Typology (ALT II), to be held at the University of Oregon, Eugene, from September 11 to September 14, 1997 (Thursday through Sunday). Given that this will be the first meeting of ALT in the US, the membership requirement for presenters has been waived. Please feel free to send in an abstract regardless of whether you are a member of ALT or not. Please direct SIX copies of a one-page abstract to the chair of the program committee, Prof. Masayoshi Shibatani (address below), to reach him no later than MARCH 1, 1997.
A second page (six copies) may be attached to the abstract listing data. E-mail submissions are also accepted. The program committee will, by May 1, 1997, convey its decision to those submitting abstracts. Each abstract should include the author's (or authors') name and mailing address (just one mailing address for multiple authors) including telephone, fax, and e-mail address as available. Each abstract should specify the amount of time requested for the presentation, including discussion, which may be 30, 45, or 60 minutes. You may also submit abstracts for symposia, in which case give the names of participants and the amount of time requested (which may, of course, exceed 60 minutes). Address for mailing abstracts: Masayoshi Shibatani Faculty of Letters, Kobe University 1-1, Rokkodai-cho, Nada-ku Kobe 657, Japan E-mail: matt at icluna.kobe-u.ac.jp -- ************************************************************************ Edith A. Moravcsik Department of Linguistics University of Wisconsin-Milwaukee Milwaukee, WI 53201-0413 USA E-mail: edith at csd.uwm.edu Telephone: (414) 229-6794 /office/ (414) 332-0141 /home/ Fax: (414) 229-6258 From M.Durie at LINGUISTICS.UNIMELB.EDU.AU Sun Jan 19 05:33:25 1997 From: M.Durie at LINGUISTICS.UNIMELB.EDU.AU (Mark Durie) Date: Sun, 19 Jan 1997 16:33:25 +1100 Subject: form without meaning In-Reply-To: <32E04AC4.831@abacus.bates.edu> Message-ID: John Aske wrote: >Here I feel compelled to bring up an analogy from another science and I >hope I won't be unduly chastised for my boldness and my ignorance. I >just can't imagine that a biologist, for example, would attempt to >describe a particular organ in some organism without at the same time >attempting to understand its function (what it's for), how it may have >gotten to be the way it is, what it does, how it does it, how it >interacts with other organs of the body, and how it compares with the >way other organisms perform those functions. 
Yes, they have done this, but still acknowledging the difference between the two kinds of task. Anatomy is the study of structure. Physiology is the study of function. The history of medicine shows that quite different methods and methodological difficulties were involved in exploring the two areas. (The anatomists had the problem of getting enough bodies to dissect, and the physiologists had to get used to the idea of experimentation.) But it also shows that advances in understandings of structure and function influence and help advance each other in very complex ways that are hard to plan for or categorize. Mark Durie ------------------------------------ From: Mark Durie Department of Linguistics and Applied Linguistics University of Melbourne Parkville 3052 Hm (03) 9380-5247 Wk (03) 9344-5191 Fax (03) 9349-4326 M.Durie at linguistics.unimelb.edu.au http://www.arts.unimelb.edu.au/Dept/LALX/staff/durie.html From chafe at HUMANITAS.UCSB.EDU Mon Jan 20 04:47:23 1997 From: chafe at HUMANITAS.UCSB.EDU (Wallace Chafe) Date: Sun, 19 Jan 1997 20:47:23 -0800 Subject: History Message-ID: Maybe we're about done with this for a while at least, or maybe not, but I can't help thinking that a little historical perspective wouldn't hurt. Just as the synchronic state of a language can't be fully understood without reference to its history, the state of linguistics can profit from a little historical understanding too. Don't worry; I won't go on for very long. By historical accident I happened to be educated in linguistics while it was still dominated almost completely by "post-Bloomfieldians". I was taught some things I've always found useful, but two things I fairly quickly decided were wrong. One was anti-mentalism--the rejection of the mind that came from behaviorism, as colored by logical positivism and an excessively narrow view of what it meant to be "scientific".
The other was the view that, even though everyone might admit that language is somehow related to all of human experience (cognitive, emotional, social, historical), there was an isolable part of it that could be studied all by itself, so linguists could happily be exempted from worrying about all the rest. That view had been heavily promoted by Bloomfield, with bows toward Saussure. Shortly after that there was a change in attitude concerning the mind, but that was about all, and the results were curious. There was no change in the view that some part of language could be isolated for scientific study, apart from all the rest. Linguistics came to be dominated by a search for the nature of that isolable thing within the mind, which had to be innate because its connections with everything else were held to be negligible. One unfortunate result was an all-consuming interest in universals with a corresponding disregard for ways in which languages differ, whereas the Bloomfieldians, much to their credit, had always been interested in those differences. Most linguists nowadays can hardly imagine the sneering that was directed in the early 1960s toward those who practiced "mere description". Much was made of "explanatory adequacy", but instead of working toward an understanding of language as shaped by socio-cognitive and historical forces, explanations took the form of tree diagrams--tree diagrams that mysteriously changed their shape according to rules that also had no basis either in how people talk or in language history. The latest exchange makes me wonder if we haven't come full circle.
Some would apparently now like to believe that it's OK to restrict one's interest to the isolable part, thereby accomplishing one's own kind of descriptive adequacy, while leaving more encompassing understandings to those who might be interested in that sort of thing, and who, quite ironically, might thereby succeed in achieving an explanatory adequacy very different from the kind proposed in the 1960s. But is it even possible to restrict oneself in that way? A lot of the discussion has revolved around that question. My own view is that the stuff of which syntax is made--its elements and the constructions into which they enter--is either functional stuff (and a great deal of it most certainly is), or consists of the fossilized (or partially fossilized) remains of things that were functional at an earlier time. I think that's what Jon Aske has been saying, and certainly his experiences accord with my own. If we are right, then a great deal of effort is being expended on the wrong thing, something not rare in human affairs, but something that's regrettable at this special moment in human history when most of the world's languages are about to disappear. Wally Chafe From dever at VERB.LINGUIST.PITT.EDU Mon Jan 20 11:39:10 1997 From: dever at VERB.LINGUIST.PITT.EDU (Daniel L. Everett) Date: Mon, 20 Jan 1997 06:39:10 -0500 Subject: etc. In-Reply-To: <01IECVSCJTHU8ZKMY0@OREGON.UOREGON.EDU> Message-ID: Tom, Nice posting. But there are strong empirical and conceptual reasons for separating discourse and sentence-level syntax that have nothing to do with vestigial methodology from the 60s. If people want to pursue this topic, I would be happy to state my reasons. On the other hand, I need to state immediately that no one who ignores discourse and the core role that it plays in language and culture will go very far in understanding Language as a whole.
But (sounding like Fiddler on the Roof dialogue) on the other hand there are many reasons for thinking that one can understand a great deal of grammar without understanding Language. -- Dan ****************************** ****************************** Dan Everett Department of Linguistics University of Pittsburgh 2816 CL Pittsburgh, PA 15260 Phone: 412-624-8101; Fax: 412-624-6130 http://www.linguistics.pitt.edu/~dever From dever at VERB.LINGUIST.PITT.EDU Mon Jan 20 11:50:03 1997 From: dever at VERB.LINGUIST.PITT.EDU (Daniel L. Everett) Date: Mon, 20 Jan 1997 06:50:03 -0500 Subject: History In-Reply-To: Message-ID: Wally, I do not think that we have come full circle. I think we have gone in a straight line. Chomskian research has continued the Bloomfieldian practice (in fact we can drop Bloomfieldian and just say scientific) of isolating certain components of its empirical domain (language) for study, i.e. grammar. All functionalists do this too - nobody studies everything or ever plans to have a theory of everything. As Mark Durie points out, a priori any field runs the risk of committing egregious errors by slicing the pie the wrong way. The idea is that the robustness of the research results has got to be the guiding light, as it were. Those of us who divide the sentence from the discourse as a research strategy very often do this awake and consciously, believing that we know why we are doing it, and not simply because we are unaware of the history or alternative possibilities. The issue here is neither methodological nor historical, but ontological and empirical. The *only* way to really evaluate alternative research programs is in terms of the quality of their empirical production. Arguments must revolve around the empirical, case-by-case. That said, 'quality' is clearly subjective and at some point (which we are a long ways from), it will be like trying to convince each other that blue is prettier than red. 
-- Dan ****************************** ****************************** Dan Everett Department of Linguistics University of Pittsburgh 2816 CL Pittsburgh, PA 15260 Phone: 412-624-8101; Fax: 412-624-6130 http://www.linguistics.pitt.edu/~dever From vakarel at OREGON.UOREGON.EDU Tue Jan 21 07:58:34 1997 From: vakarel at OREGON.UOREGON.EDU (c. vakareliyska) Date: Mon, 20 Jan 1997 23:58:34 -0800 Subject: Second call for papers: national Slavic linguistics conference Message-ID: FIRST NORTHWEST CONFERENCE ON SLAVIC LINGUISTICS - May 17, 1997 Keynote speaker: Horace G. Lunt, Samuel Hazzard Cross Professor (emer.), Department of Slavic Languages and Literatures, Harvard University The first Northwest Conference on Slavic Linguistics, co-sponsored by the University of Oregon, the University of Washington, and the Oregon Humanities Center, will be held on Saturday, May 17, 1997, at the University of Oregon, in Eugene. The purpose of the conference is to provide a national forum devoted specifically to Slavic linguistics which includes all areas of theoretical linguistics and philology. A one-page paper abstract on any topic in theoretical Slavic linguistics or Slavic philology should be submitted by e-mail by **FEBRUARY 1, 1997** to James Augerot (bigjim at u.washington.edu), Katarzyna Dziwirek (dziwirek at u.washington.edu), or Cynthia Vakareliyska (vakarel at oregon.uoregon.edu). If necessary, abstracts may be faxed or mailed to C. Vakareliyska, Department of Russian, University of Oregon, Eugene, OR 97403 (fax: (541) 346-1327). The Eugene airport serves direct flights from San Francisco, Seattle, Portland, Denver, and Salt Lake City. Hotel accommodations are available within walking distance of the university; information concerning hotel reservations will be posted in February. An optional excursion to Crater Lake is planned for the day following the conference. Conference registration fee: $25. For further information, contact C. Vakareliyska.
From David_Tuggy at SIL.ORG Tue Jan 21 06:59:00 1997 From: David_Tuggy at SIL.ORG (David_Tuggy at SIL.ORG) Date: Tue, 21 Jan 1997 01:59:00 -0500 Subject: syntax/semantics, form/meaning Message-ID: I played hookey for a few days from my post in lurkitude (I liked that word!), and then came back to find that the Funknet had come alive! Seems like a fair bit of heat and maybe even a decent amount of light has been generated. Not sure I've absorbed all the light (or even all the heat) that might do me good, but thought I'd add my two bits' worth anyway. The first part of this is trying to understand what others have said in this discussion: I'd appreciate being straightened out if I've gotten it wrong. Also, I'm sorry it runs on so long, but I think slowly ... Parsers were interesting, but the questions that most seem to have vexed people's souls are two closely related ones (if they're not in some sense the same one): Is syntax autonomous, particularly from semantics? Can/should we study linguistic forms (structures) without reference to their function or meaning? As usual, the answers depend on the definitions you give to the technical terms and the assumptions or presuppositions on which those definitions are based. After some initial sparks, Tom Givón and George Lakoff seem to have settled it between them that neither of them really denies that there is such a thing as syntax, and they are in agreement that it is not autonomous. I don't know anybody else who denies that syntax exists, but Fritz Newmeyer and others are ready to argue that it is autonomous. The form/function thing came up in that context, and just about everybody seems happy to say "Yes, there is form and there is function, and they are tied together, but not so closely as to make them indistinguishable."
And everybody seems to agree that both are worth investigating, but they're not all in agreement on whether there's any point in (or possibility of) investigating form without investigating function at the same time. In other words, are syntax and form separate or autonomous enough that we can profitably treat them on their own terms without bringing in functional or semantic information? The crucial definitions and underlying assumptions include "syntax", "semantics", "form", "function", and "autonomous". We assume we know what each other means, and build arguments, or appeal to analogies with other sciences such as biology, all of which makes sense given our views of these key words but those analogies and arguments will not be as forceful to someone who means something different by them. A standard position (I think) is that "semantics" is "truth-conditional semantics", and syntax includes word-order or phrase-order information, constituency information, "grammatical class" information, "grammatical relations", gender and agreement phenomena, and such-like. All of these things are considered non-semantic in nature. In doing syntax you pay attention to semantics only enough to let you know if two structures (usually sentences) are synonymous (i.e. have the same truth conditions) or not, but basically you ignore it. As Dick Hudson put it, syntactic generalizations "refer to words, word-classes and syntactic relations, without mentioning meaning (or phonology)." So what does it mean for syntax to be "autonomous" from semantics? For Fritz it seems it is enough to prove that some generalizations about syntactic patterns follow from other syntactic patterns or primitives: "A[utonomous]S[yntax] holds that a central component of language is a *formal system*, that is, a system whose principles refer only to formal elements and that this system is responsible for capturing many (I stress 'many', not 'all') profound generalizations about grammatical patterning." 
Similarly for Dick Hudson "The generalisations that distinguish auxiliary and full verbs are `autonymous'[sic], in the sense that they refer to words, word-classes and syntactic relations, without mentioning meaning (or phonology)." For Tom Givón and some others it seems to be enough to prove that some generalizations about syntactic patterns follow from something other than syntactic patterns or primitives to show that syntax is *not* autonomous. Thus Tom says "Grammar is heavily motivated by semantic/communicative functions. But -- ...it never is 100% so. It acquires a CERTAIN DEGREE of its own independent life. This, however, does not mean 100% autonomy." Yet Tom agrees with George that syntax is non-autonomous. Wally Chafe's position is similar, I think (I loved this posting, Wally): "My own view is that the stuff of which syntax is made--its elements and the constructions into which they enter--is either functional stuff (and a great deal of it most certainly is), or consists of the fossilized (or partially fossilized) remains of things that were functional at an earlier time." The positions seem pretty close to me: they are separated by the definition of autonomy. For the AS people anything that's not explainable by semantics/pragmatics/function/non-linguistic cognition/etc. proves autonomy: for the Givón-functionalists anything that is proves non-autonomy. Both agree that some important generalizations are "syntactic", not to be accounted for by semantics/pragmatics/function/etc. Of course they differ as to how much is to be accounted for in which way, and in the importance they assign to what is accounted for in each way, but they seem to be playing basically the same game, or at least on the same field. I guess I'm agreeing with Nick Kibre: "Ultimately, it seems that the autonomous syntax and functionalist/cognitive position are more edges of a continuum than strictly opposing viewpoints.
Nearly everyone agrees that language is shaped both by innate cognitive mechanisms, at least partially specialized for linguistic function, and by the demands of usage; our only point of disagreement is how tightly the former constrains the range of possible systems, and how much regularity is due to the pressures of the latter." And form? I think both these camps basically agree that what is not predictable from semantics/pragmatics/function/cognition/etc. is formal. Whatever is ossified, fossilized, just because that's the way speakers of this language do it and not because of some semantic/pragmatic/etc. necessity, is formal. And syntax is the major domain of such formality, I think, for both. (The lexicon too, I suppose, though I'm not as sure.) Or am I oversimplifying? Matthew Dryer wrote that "autonomous syntax" means for some (1) something about innateness (which I don't want to talk about), and "(2) one can explain syntactic facts in terms of syntactic notions (3) syntax/grammar exists (although that too can mean different things)" "Arguments for the autonomy of syntax," Dryer continues, "(such as some offered in print by Fritz Newmeyer) often involve no more than arguments for (3). For me (and I assume that this was what both George and Tom meant), rejecting autonomy of syntax involves rejecting (1) and (2)." Matthew, are you saying that no syntactic facts can be explained in terms of syntactic notions, or that not all can, or what? And although I hate to bring any division between them when they've made up so nicely, I'm not sure at all that George and Tom both mean the same thing. It seems to me that the position that George alluded to and somewhat described, the one he and Ron Langacker share (and which it suits me to work from), is different from what's been said so far. Under this view, syntax is non-autonomous in a much more radical sense. But a rethinking, i.e. different definitions, of the basic concepts, is necessary.
Semantics is not limited to what truth-conditions show us, but extends to virtually any kind of cognitive activity: it includes all kinds of "imagery" and "construal" factors, degrees of prominence of parts of a concept vis-a-vis each other or of one concept vis-a-vis another, attitudinal (emotional) information, etc. Significantly, it includes relations of one concept to another (e.g. the relation of an actor to the act he performs can be semantic). Semantics and pragmatics, to the extent that it is useful to distinguish them, differ only in degree, not in kind, and for most purposes what others call "pragmatics" is included as part of semantics. Categories are not expected to be classical, airtight, all-or-nothing compartments, but rather are organized around prototypes, with fuzzy borders and surprising extensions. Any concept of these sorts becomes *linguistic* (i.e. part of an established *language*) by being cognitively "entrenched", i.e. routinized, ossified, fossilized, in some degree, and "conventionalized", i.e. shared and known to be shared by a community of speakers. Although I don't remember George or Ron using the word in this way, "formalization" is what the other folk seem to be calling this process, and I think we can usefully make the connection. "Formalization", then, is a function, and a very important one: we couldn't talk coherently to ourselves or to others without it, and all linguistic structures undergo it to some degree. "Formalization" in this system does not necessarily mean arbitrariness, however. What is entrenched typically is entrenched because it makes sense, it functions well. In general there is claimed to be a gradation between the fully arbitrary and the totally predictable, with most language phenomena in between, in the "motivated" or "reasonable" part of the cline. Some linguistic patterns may in fact hold *only* because "that's the way we do it"; most also have some element of "we do it because ...". 
In either case, they are established, "formal" patterns. As Ron puts it, it is important to distinguish between what kinds of structures there are (only semantic, phonological, and symbolic, says he), and the predictability of their behavior. Few if any are fully predictable apart from "that's just the way we do it"; even if that is the only motivation left, the structures do not cease to be semantic, phonological, or symbolic. Phonology, under this framework, is also a subpart of conventionalized, entrenched cognition, namely the part that deals with the motor and auditory routines that constitute our pronunciations and perceptions of linguistic sounds. Included in it are the timing relations that constitute the order of production of phonological structures. (Note that much non-linguistic cognition can also become entrenched and even conventionalized: e.g. the motor routines of driving a stick-shift car constitute a quite complex and flexible system that is, in this sense, "formal".) Lexical items, reasonably uncontroversially, are claimed to have a semantic structure (i.e. for L&L a non-classical category of entrenched conventionalized cognitive routines) which is (again by an entrenched, conventionalized cognitive routine) linked in a symbolization relationship to a phonological structure. These structures, and their symbolic relation, are "formal" from the start. So proving that something is "formal" doesn't make it non-semantic or non-phonological, requiring us to make a third category of the "syntactic". And in fact, the claim is that syntactic constructions, of whatever degree of complexity or abstraction, are basically of the same ilk as simple lexical pairings. They differ from them only in degree, not in kind; consisting, as lexical items do, in the symbolic pairing of a semantic with a phonological structure.
Syntax thus is non-autonomous not in that some of it can be accounted for by non-formalized stuff: it is non-autonomous in that all of it (so the claim is) can be accounted for by the same sort of formalized cognition that constitutes semantics and phonology. Wally Chafe's words come back to mind: "the stuff of which syntax is made--its elements and the constructions into which they enter--is either functional stuff (and a great deal of it most certainly is), or consists of the fossilized (or partially fossilized) remains of things that were functional at an earlier time." George and Ron would say "it is semantic, phonological or symbolic stuff. In some of it some functional motivation besides pure convention can be seen: some of it may indeed be fossilized to the point where that is about all that is left, but it is still the same kind of thing." The arguments I remember being given for the non-semantic nature of the syntactic "stuff" were based on two ideas: the notion that semantics is only truth-value stuff (and in fact pretty nearly only real-world referential stuff), and the notion of classical, all-or-nothing categories. Thus most of us were told in our first-year grammar or syntax courses, "Yes, you might think there was a semantic basis for, say the 'noun' category, and indeed a word designating a person, place or thing usually is a noun. But some nouns name actions, and so the definitions don't work." Since semantics (i.e. truth-conditional semantics) can't predict 100% of the cases, the category is non-semantic: what else could it be but syntactic? I.e. the only thing that will do to define it is the way it interacts with its linguistic neighbors." But if categories needn't be all-or-nothing, the argument fails. 
For surely, if you allow prototype-based categorization, the category 'noun' prototypically does in fact mean something pretty near to the traditional "person, place or thing", and other cases, even those denoting actions, though correctly seen as non-prototypical, are pretty clearly relatable to this prototype. The same works for 'verb' and other categories (See e.g. Ron's 1987 "Nouns and Verbs" in Lg. 63.53-94; he actually goes beyond this and proposes schematic concepts that do pretty well at covering all the cases, claiming e.g. that we conceptualize an action differently when we refer to it via a noun or a verb). The relations between nouns and verbs fit in beautifully, since the way something interacts with its linguistic neighbors is in fact part of cognition, and to the extent that those interactions are entrenched and conventionalized, they are linguistic, i.e. semantic, phonological, or symbolically linking the two. Yes, a category such as 'subject' doesn't always mean 'agent', but if you include volitional agents as prototypical subjects, and organize the category around them, it works out quite satisfactorily. Other supposedly "syntactic" concepts fit in the framework just as well. The phonological nature of these syntactic structures bears comment. I don't remember ever being given an argument for the non-phonological nature of syntax. And yet, why is word order (or phrase or clause order, or morpheme order) any less a *phonological* fact than phoneme order? Under this perspective it is a phonological fact. And in any particular construction that phonological order relationship symbolizes a semantic relationship (e.g. that of a verb to its subject or object.) Both semantic and phonological structures can be so schematic (underspecified) that they are not useful alone, but are useful for specifying patterns, yet those patterns differ only in degree, not in kind, from fully-specified structures.
Anyway, it seems to me that Langacker and Lakoff's position on this is different from the others I've been reading. It does not deny (in fact it insists) that linguistic structures are formal(ized), but it says that is true of everything, not just syntax. Certainly it allows for many "profound generalizations" to be accounted for by the syntactic relations of one structure to another, but it insists that those syntactic relations themselves are non-autonomous in the sense that they are the same kind of thing as the rest of language, i.e. they consist of conventionalized, entrenched, "formalized" cognitive patterns, either phonological (related to speech sounds) or semantic. Dan Everett says that we should compare models by their "empirical production". "Arguments must revolve around the empirical, case-by-case." I guess he means by their success in dealing with real language data. If he does, I think I mostly agree. I've liked Ron's and George's model because it helps me deal with my real language data where the others I've tried didn't. But it helps me to try to sort out (as I've tried to here), who's claiming what, before I can evaluate what particular data really prove with regard to each model. David Tuggy From edith at CSD.UWM.EDU Wed Jan 22 20:06:50 1997 From: edith at CSD.UWM.EDU (Edith A Moravcsik) Date: Wed, 22 Jan 1997 14:06:50 -0600 Subject: form Message-ID: =====> To JON ASKE: Thanks for your two responses (Friday, January 17 and Saturday, January 18). I think we are in agreement on the basic fact that a full account of grammar includes consideration of both form and function. Where we disagree is that I believe within this total endeavor there is a distinct step devoted to the study of form independent of meaning, while you are questioning this. -- ************************************************************************ Edith A. 
Moravcsik Department of Linguistics University of Wisconsin-Milwaukee Milwaukee, WI 53201-0413 USA E-mail: edith at csd.uwm.edu Telephone: (414) 229-6794 /office/ (414) 332-0141 /home/ Fax: (414) 229-6258 From bates at CRL.UCSD.EDU Thu Jan 23 06:08:16 1997 From: bates at CRL.UCSD.EDU (Elizabeth Bates) Date: Wed, 22 Jan 1997 22:08:16 -0800 Subject: form Message-ID: I just thought I would include, for the edification of all on this list, a quote that I just read from this week's Newsweek Magazine, from our very own Fritz Newmeyer. Anyone care to add this tidbit to the running discussion? -liz bates The new millennium will also bring the discovery of genes for specialized bits of language. Already, researchers have found a genetic mutation that shows up in an inability to put suffixes onto words: people who carry the gene cannot add "-s" or "-er" or "-ed" to words, explains [linguist Fritz] Newmeyer [of the University of Washington]. "In the next century we will locate other aspects of language in the genes," he believes. Could a gene for the subjunctive be far behind? Next time you don't know whether it's "if she was" or "if she were," you'll be able to blame your DNA. from Sharon Begley, "Uncovering secrets, big and small". In Beyond 2000: America in the 21st Century. Newsweek, January 17, 1997, pp. 63-64. From fjn at U.WASHINGTON.EDU Thu Jan 23 15:56:54 1997 From: fjn at U.WASHINGTON.EDU (Frederick Newmeyer) Date: Thu, 23 Jan 1997 07:56:54 -0800 Subject: form In-Reply-To: <199701230608.WAA28417@crl.UCSD.EDU> Message-ID: I hope it is clear that the quote stops where the quote stops. Not only did I NOT speculate that there might be a 'gene for the subjunctive', I patiently explained to the reporter that such would be utterly implausible and was careful (I thought) to dissociate any claims about a genetic basis for grammar from discussions of prescriptivism, Ebonics, and whatever else the public might associate with the notion 'grammar'. But, alas,...
--fritz On Wed, 22 Jan 1997, Elizabeth Bates wrote: > I just thought I would include, for the edification of all on this > list, a quote that I just read from this week's Newsweek Magazine, > from our very own Fritz Newmeyer. Anyone care to add this tidbit > to the running discussion? -liz bates > > > The new millennium will also bring the discovery of genes for > specialized bits of language. Already, researchers have found > a genetic mutation that shows up in an inability to put suffixes > onto words: people who carry the gene cannot add "-s" or "-er" > or _ed" to words, explains [linguistic Fritz] Newmeyer [of the > University of Washington]. "In the next century we will locate > other aspects of language in the genes," he believes. Could a gene > for the subjunctive be far behind? Next time you don't know whether > it's "if she was" or "if she were," you'll be able to blame your DNA. > > from Sharon Begley, "Uncovering secrets, big and small". In Beyond 2000: > America in the 21st Century. Newsweek, January 17, 1997, pp 63-64. > From kemmer at RUF.RICE.EDU Thu Jan 23 17:24:45 1997 From: kemmer at RUF.RICE.EDU (Suzanne E Kemmer) Date: Thu, 23 Jan 1997 11:24:45 -0600 Subject: Graduate fellowships at Rice Message-ID: The Linguistics Department at Rice University (home of Funknet!) encourages applications from well-qualified students for admission to our Ph.D. program in Linguistics for 1997-98. The Ph.D. program at Rice emphasizes the study of language use, the relation of language and mind, and functional approaches to linguistic theory and description. Areas of intensive research activity in the department include cognitive/functional linguistics; in-depth study of the languages of North and South America, the Pacific, and Africa; language universals and typology; language change and grammaticalization studies; lexical semantics; corpus linguistics; computational modeling; neurolinguistics; discourse studies; and second language acquisition. 
The department offers support in the form of tuition waivers and fellowships to qualified doctoral students. Both U.S. and international applicants are admitted on the same basis, and financial aid is not restricted to U.S. citizens. Current doctoral candidates include not only U.S. students but also students from Australia, Brazil, China, Germany, and Korea. Prospective students of diverse linguistic backgrounds are encouraged to apply. Admission and fellowships are awarded on a competitive basis. Students enjoy access to departmental computer facilities; the department's and the university's excellent linguistics collections (including a huge library of descriptive grammars); funds for conference travel; and photocopying accounts. Graduate housing next to campus is available; students can also take advantage of the affordable rental market in Houston, the nation's fourth-largest city. With its many immigrant communities, the city provides wonderful opportunities not only for fieldwork but also for (affordably) sampling a vast array of international cuisines. Applications are available from the following addresses: EMAIL: ukeie at ruf.rice.edu REGULAR MAIL: Ursula Keierleber, Coordinator Department of Linguistics Rice University 6100 Main St. Houston TX 77005-1892 TELEPHONE: (713) 527-6010 Graduate Record Examination scores must be received by the department as soon as possible. Two letters of recommendation from relevant faculty are also required. Applications should be received by February 1, 1997. For more information about the program, see the department's WEB PAGE at: http://www.ruf.rice.edu/~ling Subpages include: Department--the basic info about the orientation of the department People--Faculty, students, staff, visitors Activities--Research Projects, Funknet, Distinguished Speakers/Colloquium series, Biennial Symposia etc.
Programs--Graduate and Undergraduate Degree Programs Courses--This year's course schedules From bates at CRL.UCSD.EDU Thu Jan 23 18:19:56 1997 From: bates at CRL.UCSD.EDU (Elizabeth Bates) Date: Thu, 23 Jan 1997 10:19:56 -0800 Subject: form Message-ID: But shall we assume (and I promise NOT to say "Dear Fritz...") that you DID endorse the claim that the genetic basis of grammatical morphology has been discovered? That claim has made the rounds for several years; it is all based on a premature report about the K Family in London, and the report is stunningly wrong. Faraneh Vargha-Khadem and her colleagues, who have studied this family for years and were in no way responsible for the original report (a letter to Nature by Myrna Gopnik), published a thorough study of the family in 1995 in the Proceedings of the National Academy of Sciences, showing that there is absolutely no dissociation between regular and irregular morphology, or (for that matter) between grammar and other aspects of language, because the affected members of the family are significantly worse than the unaffected members on a host of different language tests and on a number of non-linguistic measures as well. They also have a serious form of buccal-facial apraxia, i.e. they have a hard time with complex movements of the mouth, so severe that some members of the family supplement their speech at home with a home signing system. A separate paper (not in the Proceedings) shows that they also have a hard time (relative to familial controls) with a finger-tapping task! Imagine the following tabloid headline: "Elton John and Lady Diana spent hot night together in Paris." You buy the paper, open up to page 17, and discover that they spent the night together with 400 other people at a party. Kind of changes the interpretation, no?
In the same vein, the grammatical deficits displayed by the affected members of this family are part of a huge complex of deficits, in no way specific to grammar, much less to specific aspects of morphology. But the rumor continues to be passed around....-liz From fjn at U.WASHINGTON.EDU Thu Jan 23 19:53:30 1997 From: fjn at U.WASHINGTON.EDU (Frederick Newmeyer) Date: Thu, 23 Jan 1997 11:53:30 -0800 Subject: form In-Reply-To: <199701231819.KAA03416@crl.UCSD.EDU> Message-ID: The Vargha-Khadem, et al. paper that Liz refers to is nothing less than scandalous. This 3 1/2 page (!) paper, which Liz calls 'thorough', was published in 1995, yet refers to no work by Myrna Gopnik and her group on the K family that was published after 1991. In that period, they carried out dozens of tests on the family that directly contradict the claims of Vargha-Khadem, et al. The slipperiest thing that V-K do is to imply that nongrammatical problems manifested by one (or some) of the affected family members are manifested by *all* of the affected family members, giving the illusion that there is a broad syndrome of problems associated with the inability to handle inflectional morphology. In fact, there is none. The low IQ scores for the affected members reported by V-K not only contradict the scores reported by Gopnik and her associates, but also contradict the scores published by *Vargha-Khadem's own research group*. There is no explanation for this discrepancy; in fact, there is no evidence that the affected members of the family have statistically significantly lower IQs than the nonaffected members. The 'intelligibility' problems reported by V-K and repeated by Liz appear to be almost entirely a function of the testing situation. The V-K group brought the (uneducated working-class) family members into a laboratory and pointed bright lights and video cameras at them.
Relaxed settings (party-like atmosphere in the subjects' homes) revealed vastly improved articulatory abilities and few of the other problems reported by V-K. Bill Labov taught us linguists decades ago about the importance of a nonthreatening environment if one wants to assess natural speech. Few psychologists, it would seem, have learned the lesson. Suggestions that an auditory processing deficit is responsible for the dysphasia cannot be correct. Affected members of the K family perform excellently on phoneme-recognition tasks, and, moreover, have no difficulty perceiving unstressed word-final segments that mimic the form of inflectional suffixes (e.g. the final alveolar in words like 'wand'). Furthermore, whatever deficit the affected family members might have in articulation could hardly explain why they make errors with suppletive past tenses ('was', 'went') and with irregular pasts, regardless of the sound that happens to occur in final position ('took', 'drove', 'got', 'swam', etc.). And, as Goad and Gopnik have pointed out, 'it is very hard to see how articulatory problems could prevent them from making correct grammaticality judgments or ratings which require them to just nod yes or no or to circle a number'. I could write much, much more, but instead will refer you to an upcoming special issue of Journal of Neurolinguistics, in which this issue will be discussed in detail. --fritz On Thu, 23 Jan 1997, Elizabeth Bates wrote: > But shall we assume (and I promise NOT to say "Dear Fritz...") that you > DID endorse the claim that the genetic basis of grammatical morphology > has been discovered? That claim has made the rounds for several years, > it is all based on a premature report about the K Family in London, and > the report is stunningly wrong.
Faraneh Vargha-Khadem and her colleagues, > who have studied this family for years and were in no way responsible > for the original report (a letter to Nature by Myrna Gopnik) published > a thorough study of the family in 1995 in the Proceedings of the National > Academy of Sciences, showing that there is absolutely no dissociation > between regular and irregular morphology, or (for that matter) between > grammar and other aspects of language, because the affected members of > the family are significantly worse than the unaffected members on a host > of different languages tests and on a number of non-linguistic measures > as well. They also have a serious form of buccal-facial apraxia, i.e. > they have a hard time with complex movements of the mouth, so severe > that some members of the family supplement their speech at home with > a home signing system. A separate paper (not in the Proceedings) shows > that they also have a hard time (relative to familial controls) with > a finger-tapping task! > > Imagine the following tabloid headline: "Elton John and Lady Diana > spent hot night together in Paris." You buy the paper, open up to > page 17, and discover that they spent the night together with 400 > other people at a party. Kind of changes the interpretation, no? > In the same vein, the grammatical deficits displayed by the affected > members of this family are part of a huge complex of deficits, in no > way specific to grammar much less to specific aspects of morphology. > But the rumor continues to be passed around....-liz > From john at RESEARCH.HAIFA.AC.IL Fri Jan 24 06:40:29 1997 From: john at RESEARCH.HAIFA.AC.IL (John Myhill) Date: Fri, 24 Jan 1997 08:40:29 +0200 Subject: Newmeyer's quote Message-ID: I think it would be a good idea for linguists to just keep our mouths shut if the popular press comes looking for quotes about language and genetics. 
It's obvious, to me at least, that, unless we know for sure in advance that we will have 100% control over exactly what is printed, we are going to come across as having a racist agenda. We know that this isn't true, but after Newmeyer's quote in Newsweek, I think that many non-linguists are going to think either that linguists are racists or that linguistic research shows that black-white differences in speech are genetically-based. It isn't enough to say `Alas.' Let's think first and talk on the record later or not at all. We don't need to be so desperate to see ourselves in the news. John Myhill From ocls at IPA.NET Fri Jan 24 15:28:24 1997 From: ocls at IPA.NET (George Elgin, Suzette Haden Elgin) Date: Fri, 24 Jan 1997 09:28:24 -0600 Subject: newmeyer's quote Message-ID: On January 24th Dr. Myhill wrote: "I think it would be a good idea for linguists to just keep our mouths shut if the popular press comes looking for quotes about language and genetics. It's obvious, to me at least, that, unless we know for sure in advance that we will have 100% control over exactly what is printed, we are going to come across as having a racist agenda. We know that this isn't true, but after Newmeyer's quote in Newsweek, I think that many non-linguists are going to think either that linguists are racists or that linguistic research shows that black-white differences in speech are genetically-based. It isn't enough to say `Alas.' Let's think first and talk on the record later or not at all. We don't need to be so desperate to see ourselves in the news. John Myhill" I agree with most of what Dr. Myhill says here, and understand the parts with which I do not agree. However, I am much afraid that it's just not this simple. True, the media will grab whatever part of an interview seems to have the most "legs" and will use that, no matter how many warnings are given; true, much of the time there's no way to control what is printed.
Even when the reporter has agreed to the interviewee's constraints, the editors/publishers often overrule that agreement and do whatever they think will move copies or raise ratings. And it's not that they're indifferent to the fact that what they're doing is dangerous, it's that they haven't the least *idea* that it is. That's all true. But the charge that linguists have a racist agenda is not the only image problem we have. The average level of accurate information about language and linguistics in the general public is at Flat Earth level, and I am not just talking about "the masses." In the current "ebonics" mess, for example, the ghetto children have plenty of excuses for *their* ignorance; the allegedly educated adults from every walk of life who are pontificating in the public press on the subject have none. I respect each and every linguist's individual right to respond to this problem of public ignorance with "So what? It's not my problem and it doesn't interest me." Or with "If I tried to do something about it, I'd be misquoted -- what's the use?" Or both. But I don't, personally, feel that way about it. That ignorance has serious real-world consequences; we're all paying for those consequences. Language is our science; it seems to me that linguists have some responsibility in this matter. Because I am of the opinion that it *is* my problem, I do many interviews every year. (Much of the time I am misquoted, to some degree; quite right.) At least half the interviews begin with someone (or several someones) saying to me, "I hate interviewing linguists. They're elitists, they look down on everybody who isn't a linguist, they can't even get along with each other, and they can't be bothered to speak English." Followed, often, by the ultimate insult: "You people are worse than *doctors*!" 
I will never forget a conference on bilingual education in the seventies where the then secretary of education -- allegedly an educated adult -- got up for the keynote address and announced that "the reason bilingual education has failed in the United States is because the linguists have refused to help." It is still the case, after all these years, that when I go into a school and people are told that I'm a linguist, they say one of two things: "I'm afraid to talk to you, because all linguists do is watch for people to make mistakes" or "I don't want anything to do with linguists -- they're responsible for the mess we're in." I got an email message last year in which an academic who'd been flamed on Linguist List for asking a question informed me that that was the last time *he* intended to open his mouth in front of "Your Linguistnesses." With all due respect, it seems to me that perhaps being desperate to see ourselves in the news -- after taking time to think carefully, as Dr. Myhill stipulates -- is not necessarily such a bad idea. Suzette Haden Elgin From lgarneau at HOTMAIL.COM Fri Jan 24 17:34:58 1997 From: lgarneau at HOTMAIL.COM (Luc Garneau) Date: Fri, 24 Jan 1997 17:34:58 -0000 Subject: agree with suzette haden elgin Message-ID: While I prefer not to comment to almost ANYONE on the whole ebonics issue (because most non-linguists don't really understand what ebonics entails anyway), I have had similar experiences with people regarding the popular concept of what a "linguist" is. When I mention to people that I have an MA in linguistics and teach English/Linguistics part-time at National-Louis University in Evanston, they assume I am a strict prescriptivist, and are very concerned about talking to me about things. Suzette's "I am afraid to talk to you -linguists just look for mistakes" (pardon my approximate quote) comment is one I have encountered more than once!
When I explain that I am more of a descriptivist, then go on to explain the descriptive/prescriptive distinction, they go to the opposite extreme and assume that I am a proponent of "bad grammar". It's tough sometimes, studying such a constantly changing subject. As to the responsibility of linguists...I will wait to see what others say before commenting! Luc Garneau Adjunct Instructor of English National-Louis University Evanston, Illinois e-mail: lgarneau at hotmail.com --------------------------------------------------------- Get Your *Web-Based* Free Email at http://www.hotmail.com --------------------------------------------------------- From john at RESEARCH.HAIFA.AC.IL Sun Jan 26 08:39:53 1997 From: john at RESEARCH.HAIFA.AC.IL (John Myhill) Date: Sun, 26 Jan 1997 10:39:53 +0200 Subject: predictions Message-ID: Brian MacWhinney wrote: > >It seems to me that it is a free country and anyone can say anything that >they want, as long as it is not libelous. I'm sure that when Fritz was >being interviewed he told the reporter that linguists and psycholinguists >disagreed sharply on the interpretation of the genetic data. And probably >the reporter just decided to ignore his remarks on that issue. And >undoubtedly Fritz, like many of us who have been in a similar position, was >shocked to see how his story was reported. > First of all, I am not questioning FN's legal right to say what he said; I hope that I was not interpreted as suggesting that, and I assume that BW does not believe there are no non-legal bases for criticizing actions. Secondly, if your feeling is that it is so likely that you will be shocked at how your story is reported, maybe you better not say anything; if your purpose for cooperating with the interview is to give information to the wider public, and there is an expectation that things will get screwed up, why say anything?
Leave this to people who are SERIOUS about interacting with non-linguists and will respond to misquotes NOT by throwing up their hands and saying 'I was misrepresented' but by devoting a major effort to correcting the situation, if necessary sacrificing their research agendas and position within the field for this purpose. John Myhill From TGIVON at OREGON.UOREGON.EDU Sun Jan 26 20:47:53 1997 From: TGIVON at OREGON.UOREGON.EDU (Tom Givon) Date: Sun, 26 Jan 1997 12:47:53 -0800 Subject: Chinese Linguistics Job Message-ID: From: tomlin at OREGON.UOREGON.EDU (Russell S. Tomlin) Date: Sat, 25 Jan 1997 12:52:22 -0800 (PST) To: tgivon at OREGON.UOREGON.EDU Subject: ad copy for AAS Newsletter >Date: Thu, 23 Jan 1997 11:21:36 -0800 (PST) >From: Risa Haberman Tom-- Here, finally, is the announcement for EALL. Can you get it posted on both FunkNet and LINGUIST? Thanks. --Russ ________________________ Position Announcement: University of Oregon, Department of East Asian Languages & Literatures > >The Department of East Asian Languages and Literatures seeks a dynamic >scholar and teacher who is a specialist in the area of Chinese >sociolinguistics or cultural linguistics. We are hoping to build upon our >strengths with a candidate whose research agenda contributes to a program in >literary, theoretical, and cultural studies.
>
>Responsibilities:
>
>The candidate will teach undergraduate and graduate courses in Chinese linguistics, cultural linguistics, and language pedagogy, as well as manage and direct the Chinese language program. The position also includes coordinating and directing the fall term orientation for graduate teaching fellows in Chinese, organizing the summer Chinese language program, and active research and publication in the field of Chinese sociolinguistics or cultural linguistics.
>
>Qualifications:
>
>Ph.D. or ABD in Chinese linguistics. Native or near-native fluency in Mandarin Chinese and English, attested ability and recent experience in directing and participating in a large undergraduate Chinese language program, the ability to teach undergraduate and graduate courses in Chinese linguistics, cultural linguistics, and language pedagogy, demonstrated expertise and graduate work in foreign language education, and formal graduate training in Chinese cultural studies. We will give priority to candidates with demonstrated expertise and graduate work in foreign language education, and formal graduate training in Chinese culture. Candidates with a well-developed research portfolio and direction are encouraged to apply.
>
>Applications due by: February 1, 1997
>Send letter of application, vita, and three letters of reference to: Chinese Search Committee, Dept. of East Asian Languages and Literatures, University of Oregon, Eugene, OR 97403.
> > From fjn at U.WASHINGTON.EDU Mon Jan 27 17:03:09 1997 From: fjn at U.WASHINGTON.EDU (Frederick Newmeyer) Date: Mon, 27 Jan 1997 09:03:09 -0800 Subject: Message from Myrna Gopnik Message-ID: Dear Funknet subscribers, Myrna Gopnik has read the exchange in Funknet regarding the 'K family' and the general question of genetic dysphasia, and was kind enough to ask me to forward to you the following message: --fritz ---------- Forwarded message ---------- Date: Mon, 27 Jan 1997 12:35:42 EST5EDT From: GOPNIK at LANGS.Lan.McGill.CA To: fjn at u.washington.edu Subject: reply Over the last few years SLI has become a hot topic because it may have the potential to tell us something about the biological basis of language. In a recent exchange Liz Bates has raised some questions about this research. I am sorry to say that comments about this research have often generated more heat than light. If we are really interested in the science of it all then it is important to get the issues out on the table and see which ones we can agree about, which ones are still outstanding and how we could resolve them. So this note is intended not so much to cite data as it is to at least make a stab at clarifying some of the issues. (I will not be able to resist citing a little data and I would be glad to respond to any request for more details.) Our research program over the years has been clear: start with broad ranging, linguistically significant tests; examine the results; construct linguistically sound hypotheses; design new, hard tests, which sometimes require looking at new languages; look at the results; refine the hypotheses and start all over again. And it has worked. Bates appears to fault us for not always using standardized tests, but those tests are useless for addressing new hypotheses. For example, the original data from English, Japanese and Greek told us that language impaired subjects had particular trouble with inflections like tense. 
The linguistic question was whether it was inflectional rules or morphological complexity that was the problem. Would they have as much difficulty in finding the root in a complex word as they clearly had in adding an inflection? There was no way to test this hypothesis in English, but Jenny Dalalakis pointed out that there was a way of testing it in Greek; you have to be able to extract roots out of inflected forms in order to construct new compounds or diminutives. No one had ever looked at this before so she had to construct totally new tests, find out if and when young children could do these tasks, and then try them out on impaired unilingual native speakers of Greek. Jenny Dalalakis's innovative work on compounds and diminutives in Greek has made it clear that the language impaired subjects have just as much difficulty with finding the root of a complex word as they have with adding an inflection. And they have these problems in nouns and adjectives and not just on verbs so the problem cannot be accounted for by agreement or optional infinitives. Had we taken the easy route and just stuck with standardized tests we would not have been able to address these linguistically significant questions. Testing new hypotheses, with new tests on new languages is a risky business, but it is the necessary path if you want to find out new truths and not merely confirm old models. The aim, after all, is not merely to tally up the numbers from tests, but to use these results to construct hypotheses about the internal grammar of the individual that is producing these results. From this point of view the contrast of "deviant" vs. "delay" is not easily interpretable. We know that at an early stage of language development children treat inflected words as if they were unanalyzed chunks and it looks like language impaired individuals do the same. But there are other huge differences in their grammars with respect to the lexicon, syntax and compensatory strategies. 
On what grounds can this one particular similarity lead us to say that their grammar is "delayed" and not "deviant" (especially since we know that this "delay" lasts at least until age 76)? After eight years, hundreds of tests, thousands of data points, and almost a hundred impaired subjects representing four different native languages (English, French, Greek and Japanese), we are convinced that the data converges to tell us that, among other things, the language impaired subjects cannot construct normal representations for grammatically complex words and they therefore cannot use rules which depend on the content of these representations. Let me give you just one example of the kind of cross-linguistic evidence that we have gathered. If a subject produces a form like "walked" that appears to be inflected, we cannot tell whether this form is grammatically complex in their mental lexicon or whether it is merely a single unanalyzed chunk. One of the ways we have studied this is to see if, given a novel word, the subject could produce complex novel forms.

Ability to mark novel words grammatically

In each of these tests the subjects were given a context, usually in pictures, which required that a grammatical rule be applied to a novel word: "This pencil is weff. This pencil is even _____."

                                   % correct
                             CONTROLS   IMPAIRED
  PAST TENSE
    English (in England)       95.4       38.0
    English (in Canada)        93.5       52.3
    Greek                      87.1       20.0
    French                     92.6       33.3
    Japanese                   89.1       37.0
  PLURALS
    English (in England)       95.7       57.0
    English (in Canada)        99.2       58.3
    Greek                      79.8       42.1
  COMPARATIVES
    English (in England)       74         21
  COMPOUNDS
    Japanese                   80.5       20.2
    Greek                      93.6       12.8
  DIMINUTIVES
    Greek                      83.9       40.2

These data clearly and convincingly show that the language impaired subjects are significantly worse at producing complex forms from novel words than are unimpaired subjects for every grammatical task and for every language population that we have tested.
(We have closets of data and drawers of papers about a wide range of tests that we would be glad to send to anyone who is interested. N.B. we have already sent out over 200 copies of the McGill working papers that Bates refers to, so it is not exactly hard to get). Are we finished with our work? Not by a long shot. We have lots more questions still to be answered just about morphology and though we know that they have problems with prosody and syntax we do not understand the complete picture yet. There are more things in language impairment than are accounted for in our hypotheses. But we think that our program of research has made real progress and that eventually the truth will out. Bates once more raises two specific issues with respect to the K family: oral apraxia and low IQ. I have responded to them often before, but they are harder to put to rest than old Hamlet's ghost. I will try once more. There seem to be three different ways in which these issues have been used: 1. to imply that I have misreported the facts about the K. family and therefore my work should not be trusted, 2. to claim that the apraxia (Fletcher, 1990) or the low IQ (Vargha-Khadem) are the direct cause of the language problems, 3. to argue that this is not primarily a disorder of language because the members of the K family have other problems. The first issue is easy to clarify. I am a linguist, therefore I depended on reports from other experts with respect to apraxia and IQ. A neurologist, Dr. T. Ashizawa, examined the oral movements of several of the language impaired members of the K. family in the standard way and reported that they had normal oral motor function. Vargha-Khadem herself reports that the signs of apraxia show up only when the subjects have to perform several oral activities at the same time. The original IQ tests which I reported were done by the school psychologists who, over several years, found that all of the children were in the normal range.
Vargha-Khadem later retested them with a much wider range of newly normed tests and the numbers changed. Do they have apraxia? Do they have low performance IQ? It depends on what tests you use and what your criteria are. All we can conclude is that if you use different tests at different times with different criteria you can get different results -- there needs no Bates, my friends, come from California to tell us this. But what does it all mean? First, apraxia. Two phonologists, G. Piggott and H. Goad, spent many hours with the K family and even more hours listening to and transcribing tapes. They conclude that the errors that they make when they talk cannot be accounted for by oral motor problems (this is all reported on in detail in the McGill Working Papers). In fact, these subjects regularly produce forms that are harder to articulate than the normal form. One example: in producing the plural forms for novel words they regularly do not use voicing assimilation, i.e. they say /wug-s/ not /wug-z/. Fletcher early on suggested that their problems with tense might be accounted for by the articulatory vulnerability of regular tense marking in English. We tested this hypothesis and it is simply false. We have reported on hundreds of pieces of data about their use of tense in a wide range of different tests (spontaneous speech, reading, writing), and they show the same pattern of problems with tense across all of these tests. Bates, herself, now seems to grant that the linguistic data cannot be accounted for by any purported oral motor problems. So even if one grants that some members of the K. family may have oral apraxia it tells us nothing about their language problems. Let's get beyond the K family for a minute. This problem with tense is not unique to them. It shows up everywhere that we have looked:

Ability to produce tense marking (% correct)

              English (Eng.)  English (Can.)  French  Japanese  Greek
  impaired         38.3            52.3        46.7     48.1     20.0
  controls         91.7            93.5        96.4     97.9     87.1

Subjects were given items like: Everyday I walk to school. Just like everyday, yesterday I ______ . This task requires the subject to recognize that the temporal context specified in the second sentence requires a particular verb form. Language impaired subjects outside of the K family have similar problems with tense no matter how it is encoded in their native language: in a final stressed syllable as in French or in a three-syllable element like "mashita" in Japanese. The IQ scores tell us nothing about their problems with language either. Even if we credit the new K. numbers over the original numbers all they show is that the means for the two groups are different. But still a language impaired member of the K. family scores 112, while his unimpaired relative scores only 84. It is a truism in speech pathology that some individuals with language disorder have lowish performance IQs and others have very high scores, as high as 140. (I have even recently been told about a language impaired subject who is a rocket scientist.) And there are lots of cases of individuals with low performance IQ who do not have these kinds of problems with language. Performance IQ scores simply do not correlate with these patterns of language impairment. It is absolutely clear, from our own research as well as from other research, that individuals with language impairment sometimes have other problems ranging from dyslexia to spatial rotation to depression to schizophrenia to apraxias and agnosias, but none of these other specific deficits reliably occurs with the language disorder and there are many individuals that have one of these other problems without having any language disorder. Language impaired let us grant them then, and now remains that we find out the cause of this effect, or rather say the cause of this defect.
The question is whether the language impairment that we see in these individuals comes from a separate and special "language faculty" that is out of order or whether some more general cognitive or perceptual processing system is not functioning and the purported "language" problems are merely a result of a breakdown in a much more general system. Do these other problems simply co-occur with this disorder or are they the proximate cause of the language problems? What mechanisms could make disorders co-occur? One possibility is the contiguity of the genes that code for the co-occurring disorders. For example, the most striking feature of individuals with Williams syndrome is that their spatial reasoning is seriously impaired. It is also the case that many of these individuals have heart and joint problems (as do many individuals who do not have spatial problems). But no one seriously supposes that their heart and joint problems cause their spatial problems. What appears to be going on is that the gene that is implicated in Williams syndrome is very close to the genes that code for elastin, which builds normal hearts and joints. If the mutation is large it hits several genes at the same time. Though there is now a flood of epidemiological studies and twin studies that all indicate that at least some cases of SLI are connected to some genetic factors, it is still a puzzle what the exact pattern of inheritance is and what genes are involved--lots of groups are working on these problems, including our team led by Roberta Palmour. There will be a loud hurrah when this is figured out. But since we do not yet know what the genes are we cannot tell if the co-occurring problems are accounted for by genetic contiguity. The natural next question is what direct effect do these genes have on the organism? The obvious answer seems to be that something goes wrong in the development of the brain.
Elena Plante and her colleagues have found significant differences in brain structure associated with this disorder. Alan Evans, Terry Peters and Noor Kabani, part of our team at the Montreal Neurological Institute, are studying MRI data from our Canadian subjects and, though the analysis is still ongoing, preliminary data shows that our impaired subjects also have significant neuroanatomical anomalies (in press, Journal of Neurolinguistics). Tanya Gallagher and Ken Watkin, also of McGill, have just completed an ultra-sound study comparing brain development in a fetus with a positive family history of SLI to three fetuses with no family history of language impairment. These studies show that there are significant differences between the two groups in the pattern of brain development in the last trimester of pregnancy and that these differences involve those centers of the brain that have been implicated for language (in press, Journal of Neurolinguistics). So at least for now it looks like there are reasons to believe that at least some cases of developmental language impairment are inherited and involve neurological anomalies. It seems likely that a possible source of the wide variety of other problems that we see in some particular individuals with language impairment is that the neurological damage caused by this mutation may affect different parts of the brain in different individuals. But this is just a hunch. Lots more about the nature of the genetic and neurological patterns that are associated with this language impairment still has to be understood. We are working on it. So are lots of others. The question that concerns me as a linguist is what precisely and exactly is wrong with their language and, more importantly, is something wrong with their ability to acquire language itself or are their language problems just an epiphenomenon of some other, non-linguistic problem. 
The only way to answer this question is to look at what they do and how they do it and why they do it and then come up with an explanatory theory that accounts for these facts. That is what we have been doing and the data converge to tell us that the language faculty is directly affected in these subjects.

From bates at CRL.UCSD.EDU Tue Jan 28 02:24:15 1997 From: bates at CRL.UCSD.EDU (Liz Bates) Date: Mon, 27 Jan 1997 18:24:15 -0800 Subject: Message from Myrna Gopnik Message-ID:

At some point last week, I composed and tried to send a response to Fritz Newmeyer's critique of Vargha-Khadem, but I never saw it on the net, and several friends have commented that they didn't see it either. On top of that, I have had private inquiries from a number of Funknetters asking if I intended to respond, so I am feeling compelled to give it one more try. Please note that this is a response to Fritz's message. I want to spend a little more time studying the longer account that he forwarded today from Myrna Gopnik before deciding whether I would have anything useful to contribute there, beyond what you will find below. Let me say first that I have been hesitant about this response, because I found Fritz's message to be nothing short of astonishing, a cry of outrage that borders on an ad hominem attack. I don't want to add to the level of vitriol from which we had begun to retreat. Let me see if I can add some substance to this exchange, without insulting anyone. The general thrust of Fritz's objections seems to revolve around the way that Vargha-Khadem et al. (henceforth V-K) conducted their research, which (he believes) led to the difference between their findings and those of Gopnik et al. Among other things, he insists that the differences between the affected and unaffected members of the K family that V-K observed were artifacts of the decision to test these people in a laboratory setting (with "bright lights and cameras").
He is also particularly incensed about the use of IQ tests, implying that the very act of testing destroys the object of inquiry (a kind of linguistic Heisenberg principle). Starting with the IQ issue, I agree that the theoretical importance of IQ testing is a legitimate subject of debate. My personal view is that IQ tests put seventeen disparate skills together in a Waring Blender to yield a single number, so that most of the information that might be relevant is lost. However, because they have been in use for so many decades, IQ tests have become benchmark variables in neuropsychological research, i.e. background information that makes it easier to compare results across different laboratories and different populations. Indeed, it is difficult to get studies of impaired populations published in most of the major journals if you cannot provide this kind of background information. Whether or not we find them useful on scientific grounds, there is nothing particularly frightening about the content of IQ tests for anyone who has gone through a Western school system, or at least, no more frightening than any other situation in which a scientist or clinician is asking questions (which would presumably include any of Gopnik's tests as well). IQ tests were developed originally for use with army recruits, i.e. with working class individuals in a literate society. In this regard, let us remember that the subjects of this debate are members of a working class family in London. They are not members of an illiterate Cargo Cult, never before exposed to paper or pencil (much less lights and cameras). They have all spent years in the public school system, and have all been examined by doctors and nurses in the public health system (the British system still makes this possible, even for working class families....). Vargha-Khadem's laboratories are part of a research center allied with Great Ormond Street Hospital for Children.
Having visited the laboratory myself, I can assure you that it is no more frightening than the pediatrician's offices and public schools that these people have seen for years. Indeed, it is likely that the affected members of the family have, because of their affliction, spent considerably more time seeing doctors and other professionals than their unaffected relatives. If anything, this fact should have reduced the difference between the affected and unaffected family members in the V-K study. And as for the video cameras: I am told that Gopnik first became aware of the K family when she saw them in a short documentary on the BBC while visiting friends in London. Hence we may presume that the K family was accustomed to cameras before Gopnik ever had a chance to test them. Assuming for the moment that the very fact of IQ testing has not served as a poison pill, destroying all the information that accompanies it, these findings were only one part of a much broader battery of tests reported by V-K, including 13 different tests of language. Most of these are indeed standardized instruments. Contrary to rumor, I do *NOT* believe that standardized tests are the only route to truth, but they do provide a broad overview of individual abilities, in a context that permits comparison with a lot of baseline data. They therefore serve as a good beginning for a more detailed inquiry, designed around specific linguistic issues, such as the assessment of the ability to comprehend and produce regular and irregular verbs. In this regard, let us ask again why it is the case that Gopnik (1990) reports a dissociation between regulars and irregulars while V-K et al. find absolutely no evidence for such a dissociation. Instead, the affected members performed reliably worse than the unaffected members on both regulars and irregulars, with no trend, not even a hint of a double dissociation. Why indeed!
As it turns out, Gopnik's original report in Nature was based on a very small number of items. The V-K report in the Proceedings of the National Academy used not a standardized test, but a large and representative battery of items designed specifically by Karolyn Patterson to assess the kind of linguistic and psycholinguistic issues that are central to the controversy about regulars and irregulars. It is likely, I think, that this difference in materials is responsible for the difference between the Gopnik and Vargha-Khadem findings. In fact, Ullman and Gopnik have a report in the Montreal Working Papers with results very similar to V-K's on regulars vs. irregulars when a broader battery of items is used (i.e. the dissociation seems to have gone away....). (Let me note here that Myrna refers in various communications to the Montreal Working Papers as a "publication"; however, it is not peer-reviewed in the usual sense, not available in libraries as an archival work, and wouldn't pass as a publication by the standards that NIH applies to people who want grants from them; too bad, because we have a series called the Center for Research in Language Technical Reports that would greatly enhance our publication list if we were allowed to call it a "publication"!). My point is that the V-K findings are based on many different instruments, tapping many different aspects of speech and language, in addition to the hated IQ tests. Those findings clearly indicate that this is NOT a disorder specific to any single aspect of language, and may not be specific to language at all, although language is certainly involved. In this regard, Fritz also complains about V-K's finding that the afflicted members of the K family have an articulation disorder, suggesting that this too may somehow be an artifact of the testing situation. Here I have to say that I have seen videotapes of the family (recorded in an informal context, by the way), and I have heard their speech with my own ears.
There is absolutely no question that these people have a severe speech production disorder, the kind that you would expect if they were (as "standardized" tests show) suffering from buccal-facial apraxia, i.e. difficulty with complex movements of the tongue and mouth. Several years ago Gopnik gave a presentation at the Wisconsin Child Language Disorders Conference where she played audiotapes of the family (presumably these are audiotapes that she recorded herself, in the context that Fritz recommends for linguistically meaningful research). I was not present, but heard from more than a dozen speech-language pathologists who were present in that audience that the evidence for a serious speech disorder was undeniable. And yet this was not mentioned in the 1990 Nature letter, in the 1991 Cognition paper, or in various summaries of the Gopnik results by Pinker, Gazzaniga and others. No one is claiming here that the articulation disorder *CAUSED* the grammatical deficits observed in the K family. After all, these people do poorly on a host of receptive tests, grammatical and lexical, and on a number of non-verbal tasks as well, so we cannot attribute all their symptoms to this single cause. Nor is anyone claiming that the IQ difference is the *CAUSE* of the grammatical problem. The point is, simply, that this is not a specific disorder. It is not specific to regular morphemes, it is not specific to grammatical morphology in general, it is not specific to grammar, it may not even be specific to language. Fritz complains still further that the V-K paper is very short, only 3.5 pages long. The Proceedings of the National Academy of Sciences, like Science and Nature, requires brief reports, without a great deal of methodological detail (recall that Gopnik's original letter to Nature, which started this controversy, was less than one page long).
But the V-K results are clearly indicated in a summary table, with detailed statistics on each and every language and non-language measure. To be sure, it will be useful to learn more about their findings in subsequent papers (and such papers exist). However, brevity can be a virtue. The original report by Watson and Crick was not much longer than this, following normal practice in the journal Nature. The real questions are: Is it true, and if so, what does it mean? I am persuaded that the findings are true. This is a distinguished and respected research team, at a major research center, representing the fields of neuropsychology, neurology, and developmental psycholinguistics (e.g. Paul Fletcher, an eminent researcher in this field, certainly not naive about language and language development). They have used standardized tests that are recognized in this field, together with a new (non-standardized) measure specifically tailored (by anyone's standards) to reflect the relevant facts of regular vs. irregular morphology in English. What do the findings mean? Well, I agree that many, many questions remain to be answered, but at the very least these findings mean that this is not a specific deficit. The search for the grammar gene must continue.... There are other approaches to the same problem, relevant to the issues of genetic substrates and language specificity. A number of different laboratories are investigating a syndrome called Specific Language Impairment, defined to include children who are 1.5 to 2 standard deviations or more below the mean for their age on expressive (and sometimes receptive) language abilities despite non-verbal IQs in the normal range (i.e. no mental retardation), in the absence of frank neurological abnormality (e.g. cerebral palsy, hemiplegia), severe social-emotional disorders (e.g. no autism), uncorrectable visual or auditory deficits (i.e. they are not blind or deaf).
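The exclusionary definition of SLI just given can be restated as a simple checklist. The sketch below is illustrative only: the function name is hypothetical, and "normal range" for non-verbal IQ is assumed here to mean 85 or above, a cutoff the text does not specify.

```python
def meets_sli_criteria(language_z: float,
                       nonverbal_iq: float,
                       neurological_abnormality: bool,
                       severe_social_emotional_disorder: bool,
                       uncorrected_sensory_deficit: bool) -> bool:
    """Illustrative restatement of the SLI inclusion criteria described
    above: language scores at least 1.5 SD below the age mean, non-verbal
    IQ in the normal range (assumed >= 85), and none of the exclusionary
    conditions (frank neurological abnormality, severe social-emotional
    disorder, uncorrectable visual/auditory deficit)."""
    return (language_z <= -1.5
            and nonverbal_iq >= 85
            and not neurological_abnormality
            and not severe_social_emotional_disorder
            and not uncorrected_sensory_deficit)
```

The checklist makes the rhetorical point concrete: SLI is defined by what it excludes, which is why residual non-specific processing deficits in this population are so striking.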
It has been known for more than two decades that this disorder "runs in families", and Dorothy Bishop's recent work comparing monozygotic and dizygotic twins with SLI suggests that it has a strong heritable component. Does this syndrome provide evidence for the grammar gene? Despite all these exclusionary criteria, many different laboratories have demonstrated that children with SLI suffer from subtle deficits in processing that are not specific to language (e.g. aspects of attention and perception), although a few laboratories still insist that they have found a relatively pure form of the disorder (e.g. recent claims by Van der Lely). With regard to the "intra-linguistic" nature of SLI, dozens of linguistic and psycholinguistic studies of these children lead to the conclusion that the deficit is best characterized as one of delay (i.e. they look like younger normal children) rather than deviance (i.e. no evidence for qualitatively different error types or sequences of development from those observed in normal children). A large number of studies also show that the deficit affects many different aspects of language. However, it has been known for a long time (since Judith Johnston's work 15 years ago with Schery and Kamhi) that grammatical morphology is the most vulnerable domain in SLI. Does the fact that morphology is MORE delayed than other aspects of language constitute evidence for a specific and genetically based grammatical disorder? Some investigators believe that is the case (e.g. Van der Lely; Gopnik; Rice; Clahsen). Others have argued instead that the grammatical deficits are secondary to the processing problems that these children display (e.g. Leonard; Bishop; Tallal). Data from our research center here in San Diego provide support for the latter view. First, my colleagues here (e.g.
Wulfeck, Thal, Townsend, Trauner) are among those who have repeatedly found evidence for subtle non-verbal processing deficits and/or neuromotor defects in children who meet the above definition of SLI. Second, studies from my own laboratory have shown that grammatical morphology is the "weak link in the processing chain" even for normal, neurologically intact college students. When these students are tested under adverse processing conditions (e.g. perceptually degraded stimuli, or compressed speech, or language processing with a cognitive overload), grammatical morphemes suffer disproportionately compared to every other aspect of the signal (e.g. content words, word order relationships). Taken together, these lines of evidence provide a reasonable case for the claim that the grammatical morphology deficits in SLI are secondary to (in this case, "caused by") a processing deficit that is not unique to language, although it has serious consequences for the ability to learn and process language on a normal timetable. This would help to explain why grammatical morphology is also an area of selective vulnerability in Down Syndrome, in the oral language of the congenitally deaf, in many different forms of aphasia (not just Broca's aphasia), and in a range of other populations as well. (If anyone is interested, I have a review paper on this topic). One might suggest that SOME forms of SLI have this non-specific base, but others really are due to an innate, language-specific malfunction -- perhaps the genetic mutation discussed in last week's Newsweek. One cannot rule this out without further evidence. It is interesting to note, however, a paper a few years ago by Tallal, Curtiss and colleagues separating their large sample of children with SLI into those with and without a family history of language disorders.
There were no differences between the two subgroups in the nature of the language symptoms observed across a host of measures; however, the children with a family history were actually MORE likely (not less likely) to suffer from deficits in non-linguistic processing. Finally, I would refer you to a recent study by Wulfeck, Weissmer and Marchman, in a large SLI sample combining data from San Diego and Wisconsin, assessing the ability to generate regular and irregular past tense morphemes on a large and representative sample of items. Results clearly indicate that children with SLI are delayed in both aspects of grammatical morphology (i.e. there is no dissociation between regulars and irregulars), producing errors that are quite similar to those observed in younger normal children at various points in development (i.e. the usual story in research on SLI). I apologize for the length of this exercise, but I think it is important to get as many of the facts out as possible. Using terms like "scandalous", Newmeyer has implied that there is something deeply wrong, perhaps fraudulent, in the Vargha-Khadem et al. results. He has no basis whatsoever for a claim of that kind. Their research project is state-of-the-art in the field of neuropsychology. Newmeyer may want to respond by arguing that all of neuropsychology is irrelevant to this debate, that linguists are the only ones who know how to assess language properly, that standardized tests are useless (as opposed to merely insufficient), and that research conducted in a laboratory setting is invalid precisely because it was conducted in a laboratory setting. If he is right, we are in dire straits, because 98% of our knowledge about language disorders and the brain bases of language in normal and abnormal populations falls into this category. The recent exchange on Funknet leads me to the gloomy conclusion that some of our colleagues in the field of linguistics espouse exactly this belief.
I would prefer to re-issue the argument that got me into this exchange: we need all the methods, all the constraints, all the information that we can possibly find to build a reasonable theory of the human language faculty. -liz bates From colinh at OWLNET.RICE.EDU Tue Jan 28 21:24:04 1997 From: colinh at OWLNET.RICE.EDU (Colin Harrison) Date: Tue, 28 Jan 1997 14:24:04 -0700 Subject: Brain Imaging & Linguistics Message-ID: I wonder how many of you have read Jaeger et al's piece in Language (72.451-97) about areas of the brain that are activated during past tense formation in English. I understand that it's been quite a hit in some parts of the linguistic world, and it's certainly to be praised for its methodological rigor and the honesty of the authors. I am convinced that this sort of experimentation ought to represent a significant direction for future research (some of which I hope to be doing soon myself). The thing is, the experimenters interpret their results as showing that regular inflections are processed differently than irregular inflections, but I don't see that their theoretical conclusions follow from their data due to at least two major confounds. I wanted to put these ideas out to the rest of you Funknetters and see what y'all think. First up, semantic discrepancies between Jaeger et al.'s word lists represent a significant confound. The two lists of interest are the cue sets from which subjects had to form regular and irregular past tenses (sets 3 and 4). Jaeger et al. note that overall, the irregular past forms require more cortical activation than the regulars, and conclude that this is because they are not associated with an on-line rule system, and hence require more attention and greater resource devotion (p.487). But if you look at the meanings involved, a rather different explanation seems at least plausible. Each list comprises 46 tokens. 
Of these, the regular past list has just nineteen (41%) that are unambiguously human physical activities, involving limb movement. The irregular list shows a much higher proportion of human physical activities, 33 of the 46 tokens (73%). This looks like a significant difference to me! Might not the greater cortical activation noted in the irregular condition be a result of more widespread somatic activation as an intrinsic part of the meaning of the verbs, rather than anything to do with their morphosyntactic regularity? There is ample evidence emerging from imaging studies (follow up, for instance, the work of Hanna and Antonio Damasio) that the comprehension of words that are connected with any kind of somatic experience involves activation in some of the same areas as the instantiation of the experience itself. So, the meaning of a verb such as "walk" will involve indirect activation of the somato-sensory circuits necessary to walk, plus all those more peripherally involved in the experience of the activity etc. Jaeger et al's results look as if they represent disconnected activation patterns, but their results were not so neat and clean at first: they needed to "wash" a fair amount of "random" noise from their charts until they arrived at something resembling the neat, discrete pictures they presented. They are completely open about the normalising procedures they follow, and it's all there in black and white for anyone who wants to examine it more closely than I have. My concern is, it's not unlikely that they could have "washed" out the evidence of similar somatic activation from the regular list, but the somatic activation in the irregular list would have been too large to remove in this way, leaving behind different activation patterns based not on algorithmic versus non-algorithmic processing (Jaeger et al's conclusion), but rather based on the semantic category of the verbs in each list.
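The "looks like a significant difference" intuition about 19/46 vs. 33/46 can be checked with a quick two-proportion z-test. This is illustrative arithmetic on the counts quoted above, not an analysis from the paper itself.

```python
from math import erf, sqrt

def two_proportion_z(x1: int, n1: int, x2: int, n2: int):
    """Two-sided z-test for the difference between two independent proportions."""
    p1, p2 = x1 / n1, x2 / n2
    p = (x1 + x2) / (n1 + n2)                      # pooled proportion
    se = sqrt(p * (1 - p) * (1 / n1 + 1 / n2))     # pooled standard error
    z = (p1 - p2) / se
    # two-sided p-value from the standard normal CDF
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# 19 of 46 regular-past cues vs. 33 of 46 irregular cues are physical-activity verbs
z, p = two_proportion_z(19, 46, 33, 46)
print(z, p)  # z is about -2.9, p well below .05: the contrast is indeed reliable
```

So the author's eyeballed judgment holds up: the two lists differ reliably in the proportion of human physical-activity verbs, which is what makes the semantic confound worth taking seriously.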
Secondly, even if we dismiss the first objection, the experimental design itself assumes the conclusion. That is, subjects in the test conditions were performing an algorithmic task at the behest of the examiner: "given x (a verb stem form), produce y (the past tense of the same verb)." It is not clear to me that information about brain activation during a predictable (and probably pretty boring) two-minute algorithmic task has any relevance to brain activation during production of similar forms when one is engaged in meaningful speech. In order to equate these two types of processing, you have to begin with the assumption that speakers inflect verbs according to an algorithmic procedure during on-line discourse production - exactly the kind of process whose centrality to natural language production is disputed! What do you think? Colin Harrison Dept. of Linguistics Rice University Houston TX 77030 USA

From lmenn at CLIPR.COLORADO.EDU Wed Jan 29 16:09:18 1997 From: lmenn at CLIPR.COLORADO.EDU (Lise Menn, Linguistics, CU Boulder) Date: Wed, 29 Jan 1997 09:09:18 -0700 Subject: Brain Imaging & Linguistics In-Reply-To: <199701282026.OAA05117@owlnet.rice.edu> Message-ID:

It's a good point about the semantics. There's also the task variables I noted on the Info-childes network; if you want to see what I said there, let me know. Lise Menn

From colinh at OWLNET.RICE.EDU Wed Jan 29 18:37:39 1997 From: colinh at OWLNET.RICE.EDU (Colin Harrison) Date: Wed, 29 Jan 1997 11:37:39 -0700 Subject: Brain Imaging & Linguistics - addendum... Message-ID:

Hi Funknetters! It's been pointed out to me that my use of the term "somatic/somato-" in my recent posting may be a little odd. For "somat-" read "motor". That should align me more cogently with accepted terminology! Thanks! Colin Harrison Dept. of Linguistics Rice University Houston TX USA

From lamb at OWLNET.RICE.EDU Fri Jan 31 21:30:46 1997 From: lamb at OWLNET.RICE.EDU (Sydney M Lamb) Date: Fri, 31 Jan 1997 15:30:46 -0600 Subject: Neurologists and connectionism Message-ID:

Tom --- In belated reaction to your entertaining "Get real, George" message of the 6th, i want to touch on an incidental point at least before we leave January forever. You wrote: > > in search of the latest fad. When I ask the real neurologists > I know what they think of connectionism, I get an incomprehension response. > Never heard of it. Con what? > Your statement has two implications: (1) that neurologists haven't heard of connectionism, (2) that neurologists have an expert knowledge of how the brain works that would enable them to pass judgement on the merits of connectionism. From my viewing platform it appears clear that both of these implications are false. 1. At least some neurologists, in fact some very highly regarded ones, have indeed heard of connectionism (1987 Rumelhart and McClelland variety). Two examples: It is described in Kandel, Schwartz, and Jessell, Principles of Neural Science (1991:836-7), which neurologists in medical schools recommend to medical students as "the bible" on neural structures and their operation; also in their "Essentials of Neural Science and Behavior" (1995). Likewise, Dr.
Harold Goodglass, a very highly regarded neurologist and aphasiologist, mentions this model in his "Understanding Aphasia" (1993:37-8). 2. The above is not surprising. What is surprising is that this model, despite its being egregiously out of accord with known facts of neural structures and their operation, is mentioned favorably, as something to be taken seriously, by the above writers and others (esp. Kandel et al. --- Goodglass gives it fainter praise; possibly, one hopes, he is just being diplomatic). Such favorable description by Kandel et al. and others is what demonstrates the falsity of your second implication. If the neurologists can so easily give credence to such an unrealistic model, they are not experts in this area after all. Yes, they are experts in other areas --- they know a lot about anatomy, synapses, neurotransmitters, ALS, how to diagnose neurological problems, etc. etc., but they have demonstrated that they don't have much of a clue about how information is learned, remembered, and used by real neural networks. Warmest regards, Syd