From STRECHTER at CSUCHICO.EDU Fri Oct 2 22:41:49 1998 From: STRECHTER at CSUCHICO.EDU (Trechter, Sara) Date: Fri, 2 Oct 1998 15:41:49 -0700 Subject: job announcements Message-ID: The English Department at California State University, Chico announces two tenure-track (Assistant Professor) positions. The position in linguistics requires a Ph.D. in linguistics with research/training in discourse and cognitive/functional approaches to syntax and semantics, and experience in teaching core-area linguistics courses as well as introduction to Second Language Acquisition. A research interest in non-Indo-European language(s) is desirable. Tenure-track faculty are required to pursue research and publication and provide service to the university community. The teaching load is 4 courses per semester, and teaching responsibilities will include introduction to linguistics, introduction to syntax, introduction to second language acquisition: theory and methods, and graduate seminars in linguistics (as needed). The applied linguistics/TESOL position requires a Ph.D. in applied linguistics or TESOL (with a strong linguistics background); teaching experience in English for Academic Purposes programs in the US and ESL in a non-US setting, or in ESL/bilingual programs in K-12 schools in the US. The position also involves advising ESL students, pursuing research and publication, and service to the university community. The teaching load is 4 courses per semester, including ESL, introduction to second language acquisition theories and methods, and a graduate seminar in second language acquisition. As a university that educates students of various ethnic and cultural backgrounds, we value a diverse faculty and staff and seek to create as diverse a pool of candidates as possible. Starting date for both positions is August 1999. Salary ranges from $37,956 to $40,692. Deadline for applications is December 3, 1998 (review will continue as necessary). 
Please mail letter of application, current CV, and recommendations to: Karen C. Hatch Department of English California State University, Chico Chico, CA 95929-0830 ****************** Sara Trechter strechter at csuchico.edu From erhard.voeltz at UNI-KOELN.DE Mon Oct 5 10:25:43 1998 From: erhard.voeltz at UNI-KOELN.DE (Erhard Voeltz) Date: Mon, 5 Oct 1998 12:25:43 +0200 Subject: Ideophone Symposium Message-ID: The Institut für Afrikanistik, Universität zu Köln D-50923 Cologne, Germany/Allemagne wishes to announce the: International Symposium on Ideophones. January 25-27, 1999 F. K. Erhard Voeltz & Christa Kilian-Hatz, Conveners The symposium will convene at the: Arnold-Janssen-Haus Tagungsheim, Heimvolkshochschule Arnold-Janssen-Straße 24 D-53754 Sankt Augustin Preliminary Program Barry Alpher. Intonation, given and new, and ideophones in some indigenous languages of Australia. Yiwola Awoyale. Form-meaning interface in Yoruba ideophones. Amha Azeb. Ideophones in Wolaitta. Robert Blust. Is there a universal level of structure between the phoneme and the morpheme? Farida Cassimjee & Charles W. Kisseberth. The tonology of the ideophone in Emakhuwa. Tucker G. Childs. Ideophones, language variation, and language contact: Changes in Zulu. Denis Creissels. Ideophones as uninflected predicative lexemes in Setswana. Didier Demolin. Ideophones and sound symbolism in Hendo. Gérard Dumestre. L'intégration des éléments idéophoniques dans la langue : le cas du bambara. Francis O. Egbokhare. Phono-semantic correspondence in Emai attributive ideophones. Stefan Elders. Defining ideophones in Mundang. Vesa Jarva. Expressive use of words of foreign origin. Christa Kilian-Hatz. Ideophone or not? Sataro Kita. Two-dimensional semantic analysis of Japanese ideophones. Merja Koistinen. Syntactic structure of expressive words: Colorative construction. Daniel P. Kunene. Speaking the act: The ideophone as a linguistic rebel. Omen N. Maduka-Durunze. Phonosemantic hierarchies. 
Ngandu-Myango Malasi. Comportement des idéophones en lega (D. 25). William McGregor. Ideophones as the source of verbs in northern Australian languages. Eve Mikone. Are there ideophones in the Balto-Finnic languages? Paul Newman. Are ideophones really as weird and extra-systematic as linguists make them out to be? Philip A. Noss. Ideas, phones and Gbaya verbal art. Janis Nuckolls. Sound symbolic grammar of Pastaza Quechua. George Poulos & C. T. Msimang. The ideophone in Zulu--a reexamination of descriptive and conceptual notions. Paulette Roulon-Doko. Statut des idéophones en gbaya, langue oubanguienne de Centrafrique. William J. Samarin. One tale, one narrator, three languages: Ngbandi, Sango, French. S. C. Satyo. Ideophones and ideograms in Xhosa. Ronald P. Schaefer. Ideophonic adverbs and manner gaps in Emai. Eva Schultze-Berndt. Traces of ideophones in complex predicates of Northern Australia. Tasso Okombe. La formation des radicaux verbaux déidéophoniques en tetela (dialecte ewango). Jess Tauber. Complementary distribution of distinct “ideophone” classes in left- vs. right-headed languages. F. K. Erhard Voeltz. The sound of silence: Contributions of African languages to our understanding of the ideophone. Richard L. Watson. A comparison of ideophones in Southeast Asia and Africa. For further information: Tel: 49.221.470.4741 Fax: 49.221.470.5158 e-mail: erhard.voeltz at uni-koeln.de e-mail: christa.kilian at uni-koeln.de From W.Croft at MAN.AC.UK Mon Oct 5 14:12:45 1998 From: W.Croft at MAN.AC.UK (Bill Croft) Date: Mon, 5 Oct 1998 15:12:45 +0100 Subject: Storage parsimony vs. computing parsimony Message-ID: Dear Funknetters, There is a strong methodological imperative in almost all formalist grammatical analyses, and in a fair number of functionalist analyses, towards what I call "storage parsimony". 
What I mean is that, in aiming towards "simplicity", "elegance", "[unmodified] parsimony", analyses are proposed with minimum redundancy, the fewest distinct underlying lexical forms or syntactic construction types---that is, the fewest items that have to be stored, in a lexicon, morphological paradigm, set of syntactic rules, phoneme segment inventory, etc. The result of this is that the analyses require a lot of computation, using derivational rules, linking rules, inheritance, filling in of unspecified information, etc.; plus the constraints that turn out to be required in order to make the computations happen in the right places but not in the wrong places. In other words, storage parsimony leads to computing complexity. A number of functionalist models eschew storage parsimony, opting instead for computing parsimony. These are the "usage-based" models, polysemy/radial category models, Bybee-style morphological models, construction grammar models which allow redundant specification of constructional information in the network, etc. The rationale behind such models is that human beings are a lot better at storage and retrieval of information than at computational operations involving complex symbol-manipulation. Of course, computing parsimony, used as a way to judge analyses, is a methodological imperative as much as storage parsimony is; but at least it supposedly has the merit of being closer to what psychologists believe about how human beings' minds work. My question is: what IS the psychological or psycholinguistic evidence that people are better at storage/retrieval than at computation (of the symbol-manipulating sort that I alluded to)? I had this question posed to me when I was once defending the computing parsimony approach, and I didn't know of any references to make this point. Please give me specific references to the literature, if possible, since I would like to refer to these methodological approaches in a book I'm writing. 
And I would also like to know approximately how widely it is believed among psychologists that people are better at storage/retrieval than at symbol computation. Thanks very much, Bill Croft Dept of Linguistics Univ of Manchester w.croft at man.ac.uk From macw at CMU.EDU Mon Oct 5 17:20:55 1998 From: macw at CMU.EDU (Brian MacWhinney) Date: Mon, 5 Oct 1998 11:20:55 -0600 Subject: storage Message-ID: Bill, Yes, psychology is rich with evidence regarding the storage-retrieval issue. I suppose that one could easily locate 500 recent articles on this topic. Going to this literature for specific references to cite is fine, but perhaps you also want to get a lay of the land. Consider the issue of reading a new unknown word vs reading a known word. For the known word, we may do some phoneme-grapheme analysis, but we have a well-greased lexically specific route to the word's sound. The more frequent the word, the faster we read it, etc. My guess is that half of the papers in the area of word recognition report significant frequency effects. Consider a simple result from Stemberger and MacWhinney (1986, Memory and Cognition). We found that high frequency regular past tense forms (wanted) are more resistant to irregularizing speech errors than are low frequency regular past tense forms (panted). We believe that this is due to the fact that high frequency forms are more thoroughly stored for unitized rote retrieval. More broadly, psychologists tend to view this in terms of the power law of learning. Any process that is done repeatedly gets stronger according to a power function. Typically, this is understood as occurring through chunking. When lots of pieces tend to occur together frequently, the whole starts to function as a single chunk. There is a lot of research and modeling literature on this. People at CMU like Anderson, Simon, Newell, Just, etc. have done lots on this. One twist on the power law emphasizes how learning of a given form leads to entrenchment. 
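The power function MacWhinney describes can be illustrated with a toy sketch: response time shrinks as a power of the number of practice trials, so every doubling of practice buys the same proportional speedup. The parameter values and "response times" below are invented for illustration, not taken from any study.

```python
# Toy illustration of the power law of practice: RT = a * n**(-b).
# a = hypothetical first-trial response time (ms), b = hypothetical
# learning-rate exponent; both are made-up numbers, not fitted data.

def predicted_rt(n, a=1000.0, b=0.4):
    """Predicted response time in ms after n practice trials."""
    return a * n ** (-b)

print(round(predicted_rt(1)))    # first trial: 1000 ms
print(round(predicted_rt(100)))  # after 100 trials: ~158 ms

# The signature of a power law: doubling practice always multiplies
# RT by the same constant factor (2**-b), no matter where you start.
print(predicted_rt(20) / predicted_rt(10))
print(predicted_rt(200) / predicted_rt(100))
```

Frequency effects of the kind reported for word recognition fall out of curves like this: the more often a form has been processed, the further along the flat tail of the curve it sits.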
Joan Bybee's work certainly builds on such findings and elaborates them. Another way of viewing this is in terms of the strategy-selection model as developed by Reder, Siegler, and others, mostly for math, reasoning, etc. When confronted with a given problem, we have to choose whether to retrieve or analyze. We usually apply a quick filter on the problem that decides which of these two approaches will be most useful and then go from there. For language, this framework might be useful for high level strategy choices in complex syntax and pragmatics. This is an enormous area. Reading the recent work in this area is somewhat complicated by the fact that current models emphasize connectionist modeling, which tends to distract from the issue you are asking about. For this reason, you might find textbooks from the early 80s or even 70s clearer on these issues than some recent textbooks. However, if you stick with models like John Anderson's ACT-R model as reported in his recent books and textbooks, the role of frequency is clearly highlighted. By the way, none of these remarks has anything to do with the really, really complex symbol manipulation models linguists often propose. Instead, for psycholinguists, the dichotomy is usually between rote retrieval and extremely trivial rules like "add -ed" or "ph sounds like f". We pretty much discarded any belief in the psychological reality of really complex formal linguistic rules in the 1970s. This is not to say that psycholinguists are not interested in abstract categories. Some are still playing around with traces, universal parts of speech, abstract syntactic structure, and the like, but this work seldom gets grounded in really complex and abstract rule systems. --Brian MacWhinney From bralich at HAWAII.EDU Wed Oct 7 02:32:30 1998 From: bralich at HAWAII.EDU (Philip A. Bralich, Ph.D.) Date: Tue, 6 Oct 1998 16:32:30 -1000 Subject: Patent Translator/Syntacticians needed at Ergo Message-ID: Our U.S. 
patent on our NLP technology was allowed just several months ago and we now need to find translations of the patent document for international patents. The patent is 114 pages in length (including drawings and abstract), and it must be translated into the major languages of the world for International Patents in the major countries of the world. Please send bids along with a resume which includes experience with theoretical syntax and patents. Be sure to include an estimate of the time required to complete the translation and a track record of completed translations. Necessary languages will include French, Spanish, Russian, Arabic, Chinese (of the three districts), Japanese, Korean, German, Thai, etc. Please send bids and resumes to the address below. Bids from organizations which can handle more than one of the required languages would be welcomed. You may preview our software on our web site at http://www.ergo-ling.com. You can also download some of our products there. A copy of the patent will be provided to those who submit acceptable bids and resumes. The job will begin in late 1998 or early 1999. Phil Bralich Philip A. Bralich, President Ergo Linguistic Technologies 2800 Woodlawn Drive, Suite 175 Honolulu, HI 96822 tel:(808)539-3920 fax:(880)539-3924 From W.Schulze at LRZ.UNI-MUENCHEN.DE Wed Oct 7 08:50:15 1998 From: W.Schulze at LRZ.UNI-MUENCHEN.DE (Wolfgang Schulze) Date: Wed, 7 Oct 1998 10:50:15 +0200 Subject: Storage parsimony vs. computing parsimony Message-ID: Dear Bill, I cannot go beyond what Brian MacWhinney told you (esp. because I'm a linguist and not a (trained) psychologist). But I think recent work on linguistic constructivism (or vice versa) will provide you with some relevant arguments. I myself have developed some theoretical proposals concerning the whole question. They are part of my language/grammar theory approach "Grammar of Scenes and Scenarios" (GSS) that is (fragmentarily) documented in my book "Person-Klasse-Kongruenz. 
Fragmente einer Kategorialtypologie des einfachen Satzes in den ostkaukasischen Sprachen" (vol. 1, in two parts, "Die Grundlagen", to appear next week at LINCOM, Munich). Don't worry about "East Caucasian": That's "only" the field of evaluation for GSS that I have chosen. Vol. 1 is basically theoretical. Also see what the NTL people do in Berkeley. Prof. Dr. Wolfgang SCHULZE Institut für Allgemeine und Indogermanische Sprachwissenschaft * Universität München Geschwister-Scholl-Platz 1 * D-80539 München Tel.: +89-2180 2486 http://www.lrz-muenchen.de/~wschulze/ From cumming at HUMANITAS.UCSB.EDU Wed Oct 7 21:24:42 1998 From: cumming at HUMANITAS.UCSB.EDU (Susanna Cumming) Date: Wed, 7 Oct 1998 14:24:42 -0700 Subject: Phonetics/Phonology job -- UCSB Message-ID: UNIVERSITY OF CALIFORNIA SANTA BARBARA DEPARTMENT OF LINGUISTICS FACULTY POSITION IN PHONETICS OR PHONOLOGY The Linguistics Department at the University of California, Santa Barbara, seeks a linguist for an Assistant Professor, tenure-track position in phonetics or phonology, to take effect July 1, 1999. Ability to teach courses in both fields is preferred. Active research on a variety of languages, and in one or more of the following areas, will be considered a plus: competing theories in phonology or phonetics; phonetics-phonology interactions; natural discourse prosody; instrumental phonetic analysis; cross-linguistic, typological, developmental, or historical-comparative studies of phonological-phonetic systems. Ph.D. required at time of appointment. Applications must include a cover letter, curriculum vitae, and representative publications. Candidates should arrange to have three letters of recommendation sent by application date. Preliminary interviews will be held in January, 1999 at the LSA meeting in Los Angeles. For full consideration, applications must be received by December 11. Address inquiries and applications to: Search Committee Dept. 
of Linguistics UC Santa Barbara Santa Barbara, CA 93106 e-mail: Lingsearch at humanitas.ucsb.edu Fax (805) 893-7769 UCSB is an equal opportunity/affirmative action employer. Women and minorities are encouraged to apply. From spikeg at OWLNET.RICE.EDU Thu Oct 8 18:30:24 1998 From: spikeg at OWLNET.RICE.EDU (Spike Gildea) Date: Thu, 8 Oct 1998 13:30:24 -0500 Subject: Job Announcement (fwd) Message-ID: HEBREW TEACHING POSITION AT THE UNIVERSITY OF OREGON The University of Oregon seeks a full-time Instructor in modern Hebrew, beginning Fall, 1999. Applicants should have native or near-native competence, MA or better in Hebrew or field related to language learning and teaching, experience and skill in college-level language teaching. Applications should include cv, at least 3 letters of reference, evidence of excellence in teaching including student evaluations, syllabi, related teaching materials, and description of instructional methods, goals, and background for teaching Hebrew. 
Duties will include first- and second-year modern Hebrew, and other courses to be determined on the basis of the appointee's background and the program's needs. Send applications to Professor Richard L. Stein Hebrew Language Search Committee English Department University of Oregon Eugene, OR 97403-1294 Phone: (541) 346-3971 FAX to (541) 346-2220. e-mail: rstein at oregon.uoregon.edu Please refer in your inquiry to HEBREW SEARCH. Review of applications will begin January 1, 1999. The University of Oregon is an equal-opportunity, affirmative action institution committed to cultural diversity and compliance with the Americans with Disabilities Act. From chafe at HUMANITAS.UCSB.EDU Sat Oct 10 16:35:07 1998 From: chafe at HUMANITAS.UCSB.EDU (Wallace Chafe) Date: Sat, 10 Oct 1998 09:35:07 -0700 Subject: Storage parsimony vs. computing parsimony In-Reply-To: Message-ID: Dear Bill, I don't have any very complete answer to your question, but, for one thing, I think it is well established that children learn huge amounts of vocabulary over a relatively short period of time. Even Pinker mentioned this in The Language Instinct, and I think he provided a reference or two. Furthermore, in working with a couple of polysynthetic languages over many years, it has become quite clear to me that people learn huge numbers of those long words by rote, often relating them to the particular situations where they first heard them. To a large extent they do not CONSTRUCT them according to some system a linguist might suppose they use. They ARE able to come up with neologisms from time to time, but more by analogy, and by applying some simplified patterns quite different from what linguists come up with. This is a question I'm much interested in too, and I'll also be happy to hear of research in this direction. 
Wally Chafe From tomas at EVA.MPG.DE Sun Oct 11 15:54:44 1998 From: tomas at EVA.MPG.DE (tomas) Date: Sun, 11 Oct 1998 10:54:44 -0500 Subject: Croft's Question Message-ID: I posed Bill Croft's question to my colleague Larry Barsalou and below is the very informative answer I got. Mike Tomasello =============== Mike (and Bill), This issue has been at the heart of the exemplar-prototype debate in the categorization literature. Whereas prototypes require a lot of computation during learning (to abstract prototypes), they require little computation during transfer (matching to a single prototype for a category). The result is parsimonious storage (a single prototype). In contrast, exemplar models have very simple computation during learning (the simple recording of an exemplar), but this results in complex storage (many exemplars for a category) and complex transfer computations (matching the entity to be categorized to a subset or all exemplars). Surprisingly, perhaps, the evidence overwhelmingly supports exemplar models. This suggests that the human cognitive system is not very good at abstraction, so it opts for simple learning computations (much work on concept learning further indicates how bad people are at extracting rules). This further suggests that human storage and retrieval are powerful, given that the human cognitive system seems capable of storing much detailed information and retrieving it during categorization (as well as matching it to the item to be categorized). I'm embarrassed to say that the paper that probably does the best job of discussing these tradeoffs is a paper of my own: Barsalou, L.W., & Hale, C.R. (1993). Components of conceptual representation: From feature lists to recursive frames. In I. Van Mechelen, J. Hampton, R. Michalski, & P. Theuns (Eds.), Categories and concepts: Theoretical views and inductive data analysis (pp. 97-144). San Diego, CA: Academic Press. If you or Bill would like a copy, I'd be glad to send it to you. 
It goes into considerable detail about exemplar, prototype, and connectionist models on these issues, specifically discussing the costs of storage vs. computation for each type of model. A related debate exists in problem solving. Originally, everyone believed that the human cognitive system pieces together rules or productions to produce solutions to problems (inexpensive storage of a few widely applicable rules, and expensive computations of chaining them together while searching a search space). Now, few people believe that this characterizes the bulk of human problem solving. Instead, people appear to store cases (i.e., exemplars) and do case-based reasoning (much like exemplar-based categorization). The people who have done the most work on this are Brian Ross (Psych), Keith Holyoak (Psych), Janet Kolodner (AI), and Kris Hammond (AI). I'd be glad to send references if you like, but their papers are widely available. Finally, this tradeoff between computation and storage is manifest in many current theories of skill. Essentially, novices are viewed as having stored few cases, and so have to compute, whereas experts are viewed as having stored many cases, and so don't have to compute (they just retrieve). This goes back to Chase and Simon's work on chunking and chess, and it can be found in the modern theories of John Anderson (ACT*), Gordon Logan (exemplar-based skill model), and Alan Newell (SOAR). Each includes two ways of producing a behavior--computation vs. retrieval--and assumes that novices mostly compute but increasingly retrieve as they become expert. Again, I'd be happy to send references if you have trouble locating them. As you can see, the storage/computation distinction has been central in psychology for a long time, and it consistently tends to suggest that the human cognitive system capitalizes tremendously on complex storage. I suspect that the distinction is relevant elsewhere as well. I hope that this is helpful. 
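The prototype/exemplar tradeoff described above can be rendered as a toy sketch: the prototype learner pays its computation up front by abstracting one average vector per category, while the exemplar learner stores everything and pays at transfer by comparing against every stored item. The two-feature "bird"/"fish" vectors and the Euclidean distance metric are invented for illustration, not drawn from any of the cited models.

```python
# Prototype model: compute at learning (average the category), store one
# vector per category, compare against one prototype at transfer.
# Exemplar model: trivially record at learning, store every exemplar,
# compare against all of them at transfer. Data below are invented.

def dist(p, q):
    """Euclidean distance between two feature vectors."""
    return sum((a - b) ** 2 for a, b in zip(p, q)) ** 0.5

def learn_prototype(examples):
    """Abstract a single average feature vector per category."""
    by_cat = {}
    for cat, feats in examples:
        by_cat.setdefault(cat, []).append(feats)
    return {cat: tuple(sum(col) / len(col) for col in zip(*fs))
            for cat, fs in by_cat.items()}

def classify_prototype(protos, item):
    """Cheap transfer: one comparison per category."""
    return min(protos, key=lambda c: dist(protos[c], item))

def classify_exemplar(examples, item):
    """Expensive transfer: compare against every stored exemplar."""
    return min(examples, key=lambda ex: dist(ex[1], item))[0]

training = [("bird", (0.9, 0.8)), ("bird", (0.8, 0.9)),
            ("fish", (0.1, 0.2)), ("fish", (0.2, 0.1))]
protos = learn_prototype(training)
print(classify_prototype(protos, (0.85, 0.75)))  # bird
print(classify_exemplar(training, (0.15, 0.2)))  # fish
```

The asymmetry is visible in the data structures: `protos` holds one tuple per category regardless of how much training there was, while the exemplar route keeps the whole training list and its transfer cost grows with it.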
If you have any questions or want to discuss anything further, please let me know. L P.S. Mike, I don't have Bill's email address, so if you want him to see this message, please forward it to him. Thanks. From kemmer at RUF.RICE.EDU Tue Oct 13 04:14:41 1998 From: kemmer at RUF.RICE.EDU (Suzanne E Kemmer) Date: Mon, 12 Oct 1998 23:14:41 -0500 Subject: storage and computation Message-ID: Wally's mention of the acquisition perspective, and the fact that I just heard a lot about acquisition at the CSDL conference, made me think about the relevance to Bill's question of Mike Tomasello's 'verb island' and related work. To elaborate on Wally's point: Kids not only learn huge numbers of words very quickly, but they're very slow to learn general constructional patterns. The clause-level syntactic patterns they use are for a long time tied very tightly to individual verbs; constructions don't just come into their grammar across the board (Tomasello's result). If humans were made to prefer computational processing of constructions over storage, you'd think that general constructional patterns would be easier and earlier acquired, and storage of large numbers of lexical and syntactic units would come later. Of course, child language is just the extreme case of the connection between lexical items and syntactic constructions; constructions stay lexically tied to some degree all the speaker's life. If syntactic patterns were really the result of on-line composition of syntactic categories via rules, independently of lexical items, then we would expect no particular connection of specific syntactic constructions to stored lexical units. Nor would we expect the huge number of collocations you get in language. (Why should lexical bits in one part of the 'tree' affect other bits? Which they do, well beyond subcategorization.) 
Given that we do have all these fixed and semi-fixed phrases, people should presumably prefer to just compute these as needed, but instead these elements show evidence of being stored (e.g. phonological reduction). If composition were easier than storage, then the most FREQUENT stuff should be processed via composition rather than just stored whole. I assume that psychological results show us the opposite. (Too bad the Utrecht workshop on Storage and Computation is not likely to hear much from psychologists, or cognitive linguists...) --Suzanne Kemmer From john at RESEARCH.HAIFA.AC.IL Tue Oct 13 13:31:49 1998 From: john at RESEARCH.HAIFA.AC.IL (John Myhill) Date: Tue, 13 Oct 1998 15:31:49 +0200 Subject: storage and computation Message-ID: I've been quite interested to read here about specific findings regarding how people learn language. As I have been observing the linguistic development of my now 6-year-old daughter, I have been coming to the conclusion that anyone who thinks that children develop their language by relying more on rule generalization than on retrieval either doesn't have children or doesn't pay any attention to what they say. I have repeatedly attempted to use Hebrew morphology playfully to make jokes with Shayna and I have repeatedly been disappointed that she just doesn't get it, even though she can use the morphological forms she's already encountered perfectly. I have struggled mightily to explain to her the subregularities in the Hebrew morphological system, but this has no effect whatsoever. She transparently has no clue of the morphological structure of the agglutinative Japanese forms she uses regularly. Unfortunately for linguistics, the formalists whose worldview would most benefit from the findings reported in some of the recent postings here aren't interested in experiments which take them out of their armchairs. 
John Myhill From druuskan at CC.HELSINKI.FI Tue Oct 13 15:07:06 1998 From: druuskan at CC.HELSINKI.FI (Deborah D K Ruuskanen) Date: Tue, 13 Oct 1998 18:07:06 +0300 Subject: Storage and computation Message-ID: While wishing to second all that has been said regarding cross-disciplinary work, I think it should be pointed out that the field of translation may have been overlooked as a means of checking theories of retrieval. Machine translation has tried to use ever larger memories to store and retrieve translations once made and match them against translations to be made: this method simply does not work if the translation to be done is not an *exact* match. So much for retrieval. However, if it *is* an exact match, then retrieval saves mucho time for translators, who can then concentrate on *new* translation. I think children retrieve things in a similar way: if I used this before and it worked in this situation, I'll try it again. If it doesn't work, the child has to come up with something *new* and try to figure out *why* it didn't work. My theory is that translation works by elimination (through the application of context) rather than scanning of everything in the memory. This may be another way of saying that we first find *where* we stored something, and then *retrieve* it. And how many NLP and MT (machine translation) people are reading this list? DKR -- Deborah D. Kela Ruuskanen \ You cannot teach a Man anything, Leankuja 1, FIN-01420 Vantaa \ you can only help him find it druuskan at cc.helsinki.fi \ within himself. Galileo From TWRIGHT at ACCDVM.ACCD.EDU Tue Oct 13 16:34:10 1998 From: TWRIGHT at ACCDVM.ACCD.EDU (Tony A. Wright) Date: Tue, 13 Oct 1998 11:34:10 CDT Subject: storage and computation Message-ID: John Myhill wrote: > I've been quite interested to read here about specific findings regarding how > people learn language. 
As I have been observing the linguistic development > of my now 6-year-old daughter, I have been coming to the conclusion that > anyone who thinks that children develop their language by relying more on > rule generalization than on retrieval either doesn't have children or > doesn't pay any attention to what they say. I have spent plenty of time with children who convince me that they rely a great deal on both rule generalization AND retrieval. Children notoriously regularize high-occurrence irregular verbs, i.e., "goed" for "went". Why would they do this if they were primarily retrieving? Why do children exhibit developmental patterns in syntax that are nothing like adult speech? Are their incipient retrieval capacities too limited and increase with age? --Tony Wright From Joe.Fullmer at 1790.COM Tue Oct 13 18:24:58 1998 From: Joe.Fullmer at 1790.COM (Fullmer, Joseph) Date: Tue, 13 Oct 1998 12:24:58 -0600 Subject: storage and computation In-Reply-To: <19981013.113410.TWRIGHT@ACCD.EDU> Message-ID: Along the lines of "goed" for "went", the other day my little niece was playing with a top, and her little brother wanted a turn. So her mother (my sister) told her she had three more tries. After she 'did it' once, I asked her how many turns she had left. She replied, "This is my two-th turn and after that I have one more." She used rule generalization to generate "two-th" as the ordinal for "two". I wasn't sure how to elicit whether she would say "One-th" for "first" or if she would 'retrieve' "first". After her second turn, I asked her what turn she was on, trying to elicit whether she would get "three-th" for "third", but she answered "it's my last turn". Because the higher ordinal forms are more iconic (similarity in form), and the iconic relationship repeats several times, a rule is easily generated. On the other hand, the relationship of "one" to "first" and "two" to "second" is not apparent from the forms, and must be 'stored and retrieved' to get it right.
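Fullmer's anecdote has the shape of a lookup with a rule fallback. A minimal sketch of that idea, with invented stored forms and a crude "-th" rule (an illustration only, not a claim about the child's actual mechanism):

```python
# Toy sketch: ordinals via retrieval first, rule generalization as fallback.
# ADULT_STORED and the "-th" rule are illustrative assumptions, not a real model.
ADULT_STORED = {1: "first", 2: "second", 3: "third"}  # opaque forms: retrieval only

def ordinal(n, stored):
    """Return an ordinal: retrieve a stored form if one exists, else apply the rule."""
    if n in stored:
        return stored[n]        # storage-and-retrieval
    return "%d-th" % n          # rule generalization ("4-th", "2-th", ...)

# A child who has stored "first" but not yet "second" overgeneralizes:
child_stored = {1: "first"}
print(ordinal(2, child_stored))   # -> 2-th   (cf. the niece's "two-th")
print(ordinal(2, ADULT_STORED))   # -> second
```

The point of the sketch is only that the same procedure yields both behaviors, depending on what happens to be stored.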
Looks like rule-generalization is winning out here. (although in order to even generalize a rule in the first place, retrieval must be used.) Also of interest is that most languages follow this pattern of one and two having constructive relationships with their ordinals, while the higher ones have a conformative relationship. For a great explanation of this, see John S. Robertson: The History of Tense / Aspect / Mood / Voice in the Mayan Verbal Complex. He analogously relates C.S. Peirce's Icon, Index, and Symbol to Conformative, Reciprocal, and Constructive. Less marked categories of 2^m grammatical paradigms tend to have more constructive relationships (must use storage and retrieval), whereas more marked categories tend to have conformative relationships (able to use rule-generalizations). So, if we place "go" / "gyrate" in a 2^m paradigm with the present and simple past, we get:

go / went (*constructive relationship)
gyrate / gyrated (*conformative relationship)

Because gyrate / gyrated is much more highly marked (much more narrow meaning and application) we would expect it to have more conformative relationships than the less-marked go / went, and this is just what we see. According to the law of inverse proportionality (more breadth = less depth, less breadth = more depth) less-marked forms will have a much greater external manifestation (e.g. frequency of occurrence, range of reference, and even length of words in many cases). There is also a law of direct proportionality, which bears on the topic, but I think I am growing long-winded, so will stop. But I would highly recommend the first two chapters (only 45 pages) of Robertson's book for all linguists (no knowledge of Mayan languages required) and the entire book for Mayanists. In any case, I believe the assessment that children rely a great deal on both rule-generalization and storage-and-retrieval is correct.
With less-marked forms, there will be a need for more storage and retrieval, since relationships among forms will be more constructive, whereas with more-marked forms, rule-generalization will be more heavily relied on, since form relationships will be more conformative. In the case of "goed" instead of "went" and "two-th" instead of "second" it would seem to me that a child is actually in the experimentation stage, such that he/she is trying to determine whether the form is conformative or constructive. So, the child applies the expected conformative rule, and depending on the response (correction or acceptance) the form is reinforced as to which type it is, and how to appropriately deal with it (should I store this form for remembering later, or will the rule do?) > -----Original Message----- > From: FUNKNET -- Discussion of issues in Functional Linguistics > [mailto:FUNKNET at LISTSERV.RICE.EDU]On Behalf Of Tony A. Wright > Sent: Tuesday, October 13, 1998 10:34 AM > To: FUNKNET at LISTSERV.RICE.EDU > Subject: Re: storage and computation > > > John Myhill wrote: > > > I've been quite interested to read here about specific findings > regarding how > > people learn language. As I have been observing the linguistic > development > > of my now 6-year-old daughter, I have been coming to the conclusion that > > anyone who thinks that children develop their language by > relying more on > > rule generalization than on retrival either doesn't have children or > > doesn't pay any attention to what they say. > > I have spent plenty of time with children who convince me that they rely > a great deal on both rule generalization AND retrieval. > > Children notoriously regularize high-occurrence irregular verbs, > i.e., "goed" > for "went". Why would they do this if they were primarily retrieving? > Why do children exhibit developmental patterns in syntax that are > nothing like > adult speech? Are their incipient retrieval capacities too limited and > increase with age? 
> > --Tony Wright > From ward at PG-13.LING.NWU.EDU Tue Oct 13 20:44:32 1998 From: ward at PG-13.LING.NWU.EDU (Gregory Ward) Date: Tue, 13 Oct 1998 15:44:32 CDT Subject: storage and computation In-Reply-To: <000001bdf6d6$c8cf8780$5edcbb80@chssaddct.rn.byu.edu>; from "Fullmer, Joseph" at Oct 13, 98 12:24 pm Message-ID: > In any case, I believe the assesment that children rely a great deal on both > rule-generalization and storage-and-retrieval is correct. With less-marked > forms, there will be a need for more storage and retrieval, since > relationships among forms will be more constructive, whereas with > more-marked forms, rule-generalization will be more heavily relied on, since > form relationships will be more conformative. In the case of "goed" instead > of "went" and "two-th" instead of "second" it would seem to me that a child > is actually in the experimentation stage, such that he/she is trying to > determine whether the form is conformative or constructive. So, the child > applies the expected conformative rule, and depending on the response > (correction or acceptance) the form is reinforced as to which type it is, > and how to appropriately deal with it (should I store this form for > remembering later, or will the rule do?) Speaking of ordinal/cardinal forms, Richard Sproat and I noticed (Lg 67) that in cases like "This is the second time in as many weeks", it is the semantically transparent (and well-instantiated) relationship between ordinals and cardinals (irrespective of their surface realizations) that allows the apparent mismatch. Gregory -- Gregory Ward Department of Linguistics Northwestern University 2016 Sheridan Road Evanston IL 60208-4090 e-mail: gw at nwu.edu tel: 847-491-8055 fax: 847-491-3770 www: http://www.ling.nwu.edu/~ward From jkaplan at MAIL.SDSU.EDU Tue Oct 13 15:39:21 1998 From: jkaplan at MAIL.SDSU.EDU (Jeffrey P. Kaplan) Date: Tue, 13 Oct 1998 16:39:21 +0100 Subject: storage vs. 
computation Message-ID: A few years ago my then-3- or 4-year-old spontaneously remarked that a three-tined fork was not a fork but actually a "threek." Under elicitation, he proceeded to predict a [tuk] and a [w^nk] ("onek") (with velar nasal). Jeff Kaplan Jeffrey P. Kaplan Linguistics San Diego State University San Diego, CA 92182-7727 619-594-5879 fax 619-594-4877 http://www-rohan.sdsu.edu/faculty/jeff315/index.html From jrubba at POLYMAIL.CPUNIX.CALPOLY.EDU Tue Oct 13 23:33:04 1998 From: jrubba at POLYMAIL.CPUNIX.CALPOLY.EDU (Johanna Rubba) Date: Tue, 13 Oct 1998 16:33:04 -0700 Subject: sound symbolism and features Message-ID: Hi, everybody. I've been following the computation/storage discussion with interest. I have a question in a different area. I'm teaching a grad intro ling course for people interested mainly in literature, and I do a lot of ling. analysis of lit. in this class (which is loads of fun, by the way!) We're just finishing our unit on phonology and I've been cruising around the web for stuff on sound symbolism. Maybe some of you know of some sources on a very specific area I am interested in: the correlation of particular _distinctive features_ with properties in other sensory domains (e.g. of +continuant with 'smooth' or 'velarized' with 'dark'). I know that _segments_ have received lots of attention, and I've seen some initial signs of work with features on commercial websites (creators of corporate names and brand names). Does anyone know of work that seeks empirical confirmation of cross-modal associations for particular _features_ rather than segments (by, for example, manipulating feature makeup of sounds/words and surveying scientifically-sound subject pools for consistency of association)? Work being done across cultures would, of course, be really interesting. Thanks for any leads you can offer!
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Johanna Rubba Assistant Professor, Linguistics ~ English Department, California Polytechnic State University ~ San Luis Obispo, CA 93407 ~ Tel. (805)-756-2184 Fax: (805)-756-6374 ~ E-mail: jrubba at polymail.calpoly.edu ~ Home page: http://www.calpoly.edu/~jrubba ~ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ From klebaum at UCLA.EDU Wed Oct 14 01:20:59 1998 From: klebaum at UCLA.EDU (PAMELA PRICE KLEBAUM) Date: Tue, 13 Oct 1998 18:20:59 -0700 Subject: storage and computation In-Reply-To: Message-ID: I think you are wrong. The investigation of child language acquisition is looking at empirical data as a means to further the inquiry into what the nature of the formal grammar is, or how it is acquired. Pamela Price Klebaum On Tue, 13 Oct 1998, John Myhill wrote: > I've been quite interested to read here about specific findings regarding > how people learn language. As I have been observing the linguistic development > of my now 6-year-old daughter, I have been coming to the conclusion that > anyone who thinks that children develop their language by relying more on > rule generalization than on retrival either doesn't have children or > doesn't pay any attention to what they say. I have repeatedly attempted to > use Hebrew morphology playfully to make jokes with Shayna and I have > repeatedly been disappointed that she just doesn't get it, even though she > can use the morphological forms she's already encountered perfectly. I have > struggled mightily to explain to her the subregularities in the Hebrew > morphological system, but this has no effect whatsoever. She transparently > has no clue of the morphological structure of the agglutinative Japanese > forms she uses regularly. 
> Unfortunately for linguistics, the formalists whose worldview would most > benefit from the findings reported in some of the recent postings here > aren't interested > in experiments which take them out of their armchairs. > John Myhill > From dick at LINGUISTICS.UCL.AC.UK Wed Oct 14 08:42:55 1998 From: dick at LINGUISTICS.UCL.AC.UK (Dick Hudson) Date: Wed, 14 Oct 1998 09:42:55 +0100 Subject: storage vs computation Message-ID: The debate so far assumes that all children follow the same strategic balance between storage and computation, but it's at least possible that different children have different strategies. E.g. in initial literacy, some thrive on phonics (computation) while others do better on whole-word learning (storage). I suspect this doesn't resolve the debate - I have the impression from my amateurish reading that *all* children hit the overgeneralisation stage (where "goed" replaces "went"), and maybe hit it at the same age (relative to other things such as vocab size). I'd be interested to hear from someone who knows about individual variation. ============================================================================== Richard (=Dick) Hudson Department of Phonetics and Linguistics, University College London, Gower Street, London WC1E 6BT work phone: +171 419 3152; work fax: +171 383 4108 email: dick at ling.ucl.ac.uk web-sites: home page = http://www.phon.ucl.ac.uk/home/dick/home.htm unpublished papers available by ftp = ....uk/home/dick/papers.htm From gthomson at GPU.SRV.UALBERTA.CA Wed Oct 14 07:07:35 1998 From: gthomson at GPU.SRV.UALBERTA.CA (Greg Thomson) Date: Wed, 14 Oct 1998 10:07:35 +0300 Subject: storage and computation In-Reply-To: <19981013.113410.TWRIGHT@ACCD.EDU> Message-ID: At 11:34 -0500 10-13-1998, Tony A. Wright wrote: >John Myhill wrote: >Children notoriously regularize high-occurrence irregular verbs, i.e., "goed" >for "went". 
On the other hand, the simple, old-fashioned idea of "regularization", replacing an exception with a form based on the application of a regular rule, doesn't predict attested forms of the sort: _It got brokeded_ (meaning "It got broken"). You probably wouldn't want to say that the child uttering _brokeded_ has added an uninflected root _broked_ to her lexicon to which she is now adding the suffix -ed. (I won't vouch for the exact example, but I've seen things like that--maybe it was "tookeded".) This really looks more like the result of the convergence on a particular form in response to various pressures in the system. This could be called "computation" but may also hint that the dichotomy of storage vs. computation is already a bit off track. Greg Thomson From dick at LINGUISTICS.UCL.AC.UK Wed Oct 14 10:07:09 1998 From: dick at LINGUISTICS.UCL.AC.UK (Dick Hudson) Date: Wed, 14 Oct 1998 11:07:09 +0100 Subject: storage versus computation Message-ID: A message from Liz Bates, for which I'm just acting as postman: I won't be able to send this reply to the funknet, because I'm in Rome for the month and they won't let me "in" from this foreign site, believe it or not! But I have two quick reactions to your note: 1) There is no such thing as an overgeneralization stage, where "goed" replaces "went". This is one point on which Pinker and I agree, because the data for English are crystal clear no matter what horse you are betting on: children typically go from error-free production of a handful of (probably rote?) high frequency irregular forms, to a long long phase of OCCASIONAL overgeneralizations. But "goed" always coexists with "went" in every child who has ever been studied. The highest rate of over-generalization on public record was by Stan Kuczaj's son Abe (I'm probably spelling Stan's name wrong, again, by the way), who reached a high of 50% overgeneralization. Most kids are in the 10-17% range.
By the way, my daughter was rather peculiar: she started producing an interchangeable array of overgeneralizations vs. correct regular past tense forms without ever passing through a rote irregular phase. That is, there was no "went" before "goed". Just an unmarked "go" until such a time as the past tense started to be marked, whereupon the vacillation began. 2) In all of this discussion of storage vs. rules (a.k.a. rote vs. rules) people seem to be unaware of the third possibility: analogy. A single device based on analogy (generalization from partial similarity) can give you both the rote-looking behaviors and the rule-like overgeneralizations that are assumed by two-mechanism theories. This is, of course, the basis of connectionist claims, since networks are essentially analogy-making devices that operate across distributed representations. Whether or not children are neural nets is another question, but it is important to at least be open to the LOGICAL possibility of a third solution that is neither rote nor rules. Pinker has indirectly acknowledged this in the most recent version of his theory: he still insists on two mechanisms, and one of them makes rules without regard for internal similarity or frequency, but the other one is an analogy-making device. That's required in order to account for 'irregularizations' (e.g. children who make "pat" the past tense of "put" and so on, generalizing from an irregular form). In this regard, it is important to note that any of three different sources of similarity are sufficient to support novel generalizations in an analogy-making device: (1) similarity in physical form (e.g. wug --> rug --> rugs --> wugs), (2) similarity of meaning (e.g. wug --> little animal --> little animals --> wugs), or (3) common fate or similarity in contexts of occurrence (e.g. "wug" appears in a discourse slot that seems to be occupied by a class of items that a LINGUIST would call "nouns", so do the nouniest thing with them....).
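The first of these three sources, similarity in physical form, can be illustrated with a toy nearest-neighbor analogizer; the verb inventory and the prefix-alignment trick below are assumptions of the sketch, not anyone's proposed model:

```python
# A minimal form-similarity analogizer (source (1) only): pick the most
# formally similar known verb and transfer its past-tense pattern.
# The KNOWN list and the alignment heuristic are illustrative assumptions.
import difflib

KNOWN = {"ring": "rang", "sing": "sang", "jump": "jumped", "walk": "walked"}

def pattern(stem, past):
    """Split off the part of the stem that changes, e.g. ("ring", "rang") -> ("ing", "ang")."""
    i = 0
    while i < min(len(stem), len(past)) and stem[i] == past[i]:
        i += 1
    return stem[i:], past[i:]

def past_by_analogy(verb):
    # nearest neighbor by surface-form similarity
    neighbor = max(KNOWN, key=lambda k: difflib.SequenceMatcher(None, verb, k).ratio())
    old, new = pattern(neighbor, KNOWN[neighbor])
    if verb.endswith(old):                       # transfer the neighbor's change
        return verb[:len(verb) - len(old)] + new
    return verb + "ed"                           # fall back to the regular pattern

past_by_analogy("talk")   # -> "talked" (analogy to "walk")
past_by_analogy("bring")  # -> "brang"  (irregularization by analogy to "ring")
```

A single mechanism like this produces both rule-like regularizations and "pat for put"-style irregularizations, which is the point of the analogy proposal.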
There are existing simulations showing that any of these three sources of similarity can give rise to novel overgeneralizations in a neural network. If you think this is helpful, feel free to pass it on to Funknet, but I'm happy to stick with a private interchange too. -liz ============================================================================== Richard (=Dick) Hudson Department of Phonetics and Linguistics, University College London, Gower Street, London WC1E 6BT work phone: +171 419 3152; work fax: +171 383 4108 email: dick at ling.ucl.ac.uk web-sites: home page = http://www.phon.ucl.ac.uk/home/dick/home.htm unpublished papers available by ftp = ....uk/home/dick/papers.htm From lmenn at CLIPR.COLORADO.EDU Wed Oct 14 15:27:12 1998 From: lmenn at CLIPR.COLORADO.EDU (Lise Menn) Date: Wed, 14 Oct 1998 09:27:12 -0600 Subject: storage/computation Message-ID: Menn & MacWhinney's 1984 article on the 'repeated morph constraint' (Language 60:3, 519-541) argues for both storage and on-line computation of bound grammatical morphemes; the article 'Structure and use in the acquisition of word formation' by Eve Clark and Ruth Berman, in the same issue, is relevant to interpreting Myhill's observations on Hebrew. Lise Menn Lise Menn Professor and Chair Linguistics Department - Box 295 University of Colorado Boulder CO 80309-0295 303-492-8042; fax 303-492-4416 BEWARE PROCRUSTES BEARING OCCAM'S RAZOR From jtang at COGSCI.BERKELEY.EDU Wed Oct 14 19:47:06 1998 From: jtang at COGSCI.BERKELEY.EDU (Joyce Tang Boyland) Date: Wed, 14 Oct 1998 12:47:06 -0700 Subject: lg as lists vs lg as skill Message-ID: The current discussion, despite the theoretical leanings of the participants, seems to retain the presupposition that language consists of lists of words, lists of constructions, lists of sentences. 
This is a useful presupposition for the purpose of discussing language as a formal system, which has its place; but I would like to put up for consideration the idea that language be thought of not as a set of lists, but as a skill. Empirical language acquisition researchers may not be thrilled by the simple theory from skill acquisition that all skills start out as computed and then are stored, but ACT-R (the current version of John Anderson's theory of skill acquisition) is set up to handle more complex transitions between stored and computed representations. For example, one can learn a rule from stored examples, but one also then turns instantiations of rules into stored examples. A form like `brokeded' would be straightforwardly explained in ACT-R, as a case of a rule being applied to a form that got stored after being used previously. A lesson that ACT-R teaches us, I think, is that, well, sometimes you need to compute a form and sometimes you need to retrieve it from storage, but where these forms come from is not what we really care about; what we are really trying to explain is the skill (language using) that these forms serve. And as you apply the skill you just happen to do many things by rote and some things by computation. Something this view buys us is that language competence (as in accepting vs. rejecting a string) isn't the main thing, with performance being the by-product; rather, the performance is the main thing and the competence (the lists of accepted vs. rejected strings) is the by-product. I'm just beginning a project (with Eric Scott and perhaps John Anderson) to model the creation of collocations as a product of skill learning in ACT-R, to complement my recent work on long-term syntactic priming (evidence that stored forms are used in production).
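The compute-then-store transition described here can be sketched as a toy cache; a plain dictionary stands in for ACT-R's activation-based retrieval, and the verbs are invented examples:

```python
# Toy sketch of compute-then-store: a form is computed by the (overgeneral)
# "add -ed" rule the first time, stored as an instance, and retrieved
# thereafter. A deliberate simplification of ACT-R-style instance learning,
# not Anderson's actual model.
class ToySkill:
    def __init__(self):
        self.instances = {}    # stored examples (retrieval)
        self.computations = 0  # how often the rule actually ran

    def past_tense(self, verb):
        if verb in self.instances:       # retrieval wins when an instance exists
            return self.instances[verb]
        self.computations += 1           # otherwise compute by rule...
        form = verb + "ed"
        self.instances[verb] = form      # ...and the rule's output gets stored
        return form

s = ToySkill()
s.past_tense("jump")    # computed: "jumped"
s.past_tense("jump")    # retrieved: no new computation
s.past_tense("broked")  # rule applied to an already-stored form: "brokeded"
```

On this picture a form like "brokeded" is unsurprising: rule output becomes a stored instance, and the rule can apply to stored instances in turn.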
Joyce Tang Boyland Joyce.Tang.Boyland at alverno.edu Alverno College Milwaukee, WI 53234-3922 >> >> On the other hand, the simple, old-fashioned idea of "regularization", >> replacing an exception with a form based on the application of a regular >> rule, doesn't predict attested forms of the sort: _It got brokeded_ >> (meaning "It got broken"). You probably wouldn't want to say that the child >> uttering _brokeded_ has added an uninflected root _broked_ to her lexicon >> to which she is now adding the suffix -ed. (I won't vouch for the exact >> example, but I've seen things like that--maybe it was "tookeded".) This >> really looks more like the result of the convergence on a particular form >> in response to various pressures in the system. This could be called >> "computation" but may also hint that the dichotomy of storage vs. >> computation is already a bit off track. >> >> Greg Thomson >> >> Barsalou via Tomasello: >> Finally, this tradeoff between computation and storage is manifest in many >> current theories of skill. Essentially, novices are viewed as having >> stored few cases, and so have to compute, whereas experts are viewed as >> having stored many cases, and so don't have to compute (they just >> retrieve). This goes back to Chase and Simon's work on chunking and chess, >> and it can be found in the modern theories of John Anderson (ACT*), Gordon >> Logan (exemplar-based skill model), and Alan Newell (SOAR). Each includes >> two ways of producing a behavior--computation vs. retrieval--and assumes >> that novices mostly compute but increasinly retrieve as they become >> expert. >> From macw at CMU.EDU Wed Oct 14 23:34:12 1998 From: macw at CMU.EDU (Brian MacWhinney) Date: Wed, 14 Oct 1998 17:34:12 -0600 Subject: rote vs rules In-Reply-To: <199810141947.MAA00233@cogsci.Berkeley.EDU> Message-ID: A few further comments on the current discussion of automaticity: 1. Suzanne Kemmer's question was never answered. 
She asked why rules don't get applied to frequent forms, if they are so computationally efficient. The answer, I would suggest, is that computational efficiency is defined over the whole system, not just the individual form. You don't save in terms of time to produce "jumped". However, you don't have to store all those pesky regular forms and, since the rules are running all the time anyway, you "get jumped for free". Of course, the real problem here is that evidence for a cycle of rules of the SPE type is nonexistent. So language-as-rules people like Pinker decided to give up the battle for generating forms from minor rules and staked their claim on a defense of what I call "kinder gentler rules" such as "add -ed". 2. The analysis that Liz and others have proposed is basically what MacWhinney 1978 and then Menn and MacWhinney 1983 offered as the three-factor account based on rote, analogy, and combination. Connectionism came along in the 1980s and showed how analogy works. Rote is obviously alive and well. Combination has taken a few hits, but is probably not down for the count. It will get resurrected when connectionist models become more neuronally realistic. I don't think that we will ever really need rules. In fact, I doubt that Larry Barsalou thinks we need rules of the SPE/cycle variety. 3. I agree with Joyce that language is a skill. However, the devil is in the details. If we fail to recognize the fundamental difference between word learning and syntactic automatization, I am worried that we could go down some false paths. The routinization of the word is supported by a tightly predictive association between audition and articulation. When we hear a new auditory form, it appears that we use the phonological loop on some level to store it. As we then attempt to match to this form using our own articulations, we convert a resonant auditory form to an entrenched articulatory form. 
Work by Baddeley, Gupta, Cowan and others has taught us a great deal about the details of this process. Yes, you can use ACT-R to model this, but you will be using a restricted subset of ACT-R and the process of deciding what restrictive subset is applicable is the whole of the scientific process of understanding the neuronal mechanics of word learning. Trying to use a model of word learning as the basis for understanding the automatization of syntactic patterns strikes me as quite problematic. The central problem is that predicates have open slots for arguments. Words, as Wally notes, are largely slot-free (of course there are exceptions, such as infixes etc.). I tend to think of this level of skill automaticity in terms of Michael Jordan faking out Karl Malone in the last points of the final game of the NBA finals. Jordan clearly has a flexible set of plans for dunking the ball into the basket against the opposition of a defender. What is automatic in his actions is the move from one state to the next. The skill is in the transitions. It strikes me that sentence production is like this and that word level articulation is basically not. Saying that we have stored syntactic frames tends to obscure this contrast. The claim is typically grounded on results from a nice set of studies from Bock and her colleagues. But I would suggest that these studies do not demonstrate syntactic persistence, but rather that lexical persistence produces priming of closely competing syntactic options. Barbara Luka presented a nice paper on syntactic persistence at CSLD-4 and mentioned work by Joyce demonstrating similar effects. However, I don't think this work has yet yielded a clear view of what syntactic persistence really might be. Is it a genre effect? Does it involve a passive tape recorder that influences acceptability, but has no direct effect on production? Is it really lexically driven? Many questions remain.
I would say that the delineation of the contrast between lexical and syntactic automaticity and productivity should be a top-level research agenda item for functionalists and psycholinguists alike. The great thing about all of this is that the issues are easily open to experimentation and modeling. And, as Joan Bybee, Tom Givon, and others have been showing, they make clear predictions regarding typology and change. --Brian MacWhinney From David_Tuggy at SIL.ORG Wed Oct 14 00:05:00 1998 From: David_Tuggy at SIL.ORG (David_Tuggy at SIL.ORG) Date: Tue, 13 Oct 1998 19:05:00 -0500 Subject: Storage and computation Message-ID: Deborah Ruuskanen wrote: "Machine translation has tried to use ever[y] larger memories to store and retrieve translations once made and match them against translations to be made: this method simply does not work if the translation to be done is not an *exact* match. So much for retrieval. However, if it *is* an exact match, then retrieval saves mucho time ..." So much for machine retrieval, perhaps. But do we need to posit that humans doing retrieval are as literal-minded as computers are? It would seem self-evident that we excel at non-exact matching. Maybe another way to say it is that we apparently prefer to add a little computation to our retrieval system to make it much more flexible and efficient, rather than to invest a great deal more computation starting over from scratch. Once again, if we set up computation and retrieval as either/or alternatives, we're setting ourselves on the wrong track. People do both, and typically at the same time. Certainly things are weighted, as Bill Croft and Wally Chafe and others have been saying, much more heavily towards retrieval than the "generative" metaphor would lead you to expect. A closely related issue: what are we matching, anyway? Probably something vastly different from the patterns of 0's and 1's or higher-level letters that are all the computer knows. 
What is not an exact phonological (much less phonetic) or even lexico-syntactic match may be much more nearly an exact match of the somewhat sloppy semantic stuff we're usually primarily comparing in translation. Even the phonological and lexico-semantic stuff is almost certainly not stored in as rigid a form as a computer would do it. --David Tuggy ______________________________ Reply Separator _________________________________ Subject: Storage and computation Author: druuskan at CC.HELSINKI.FI at internet Date: 10/13/98 10:07 AM From eleanorb at HUMAN.TSUKUBA.AC.JP Thu Oct 15 00:22:24 1998 From: eleanorb at HUMAN.TSUKUBA.AC.JP (Eleanor Olds Batchelder) Date: Thu, 15 Oct 1998 09:22:24 +0900 Subject: Storage vs. computation Message-ID: Is it just a coincidence that this (fascinating) discussion is taking place shortly before the UTRECHT CONGRESS ON STORAGE & COMPUTATION IN LINGUISTICS next week? Certainly the terminology is the same - as opposed to, say, rote vs. rule. Since this topic is the one most central to my work just now, I am very sorry I cannot attend that conference. Would anyone who is going be willing to send back reports to the rest of us? Eleanor From jkyle at EAGLE.CC.UKANS.EDU Thu Oct 15 18:42:41 1998 From: jkyle at EAGLE.CC.UKANS.EDU (John Kyle) Date: Thu, 15 Oct 1998 13:42:41 -0500 Subject: sound symbolism and features In-Reply-To: Message-ID: An interesting example of sound symbolism occurs in many of the Siouan languages. Boas and Deloria's Dakota Grammar (1941) has several pages of examples (pp. 16-18) where the 'degrees' of a verb are differentiated with the use of different fricatives. They note that this is not an active process and the meanings are not always predictable. In my notation below, [s^] is a vless palatal fric, [z^] is the voiced pal fric, [x] is vless velar fric, [g^] is voiced velar fric; also, nasal vowels are shown with an [n] following ([in] = nasal [i]). I've only listed a few here; they list many more.
sapa      black
s^apa     soiled
xapa      grey

winz^a    bent w/out breaking (i.e. twig)
wing^a    bent at a sharp angle

ptuza     it is bent forward
ptuz^a    small pieces cracked off w/out falling off
ptug^a    small pieces cracked off, but fall off
woptux'a  crumbs

izuza     whetstone
ig^ug^a   rough sandstone

nuza      soft and movable (a swollen gland)
nuz^a     same but harder (cartilage)
nug^a     hard like a callus on bone, gnarl on a tree

I hope these help. At least they're interesting. Bob Rankin also informs me that many of the Muskogee languages do this also. John Kyle On Tue, 13 Oct 1998, Johanna Rubba wrote: > Hi, everybody. I've been following the computation/storage discussion with > interest. I have a question in a different area. > > I'm teaching a grad intro ling course for people interested mainly in > literature, and I do a lot of ling. analysis of lit. in this class (which > is loads of fun, by the way!) We're just finishing our unit on phonology > and I've been cruising around the web for stuff on sound symbolism. Maybe > some of you know of some sources on a very specific area I am interested > in: the correlation of particular _distinctive features_ with properties > in other sensory domains (e.g. of +continuant with 'smooth' or 'velarized' > with 'dark'). I know that _segments_ have received lots of attention, and > I've seen some initial signs of work with features on commercial websites > (creators of corporate names and brand names). Does anyone know of work > that seeks empirical confirmation of cross-modal associations for > particular _features_ rather than segments (by, for example, manipulating > feature makeup of sounds/words and surveying scientifically-sound subject > pools for consistency of association)? Work being done across cultures > would, of course, be really interesting. > > Thanks for any leads you can offer!
> ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
> Johanna Rubba        Assistant Professor, Linguistics         ~
> English Department, California Polytechnic State University   ~
> San Luis Obispo, CA 93407                                     ~
> Tel. (805)-756-2184  Fax: (805)-756-6374                      ~
> E-mail: jrubba at polymail.calpoly.edu                           ~
> Home page: http://www.calpoly.edu/~jrubba                     ~
> ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
From dquesada at CHASS.UTORONTO.CA Fri Oct 16 03:28:11 1998 From: dquesada at CHASS.UTORONTO.CA (Diego Quesada) Date: Thu, 15 Oct 1998 23:28:11 -0400 Subject: Job: University of Toronto (fwd) Message-ID: ---------- Forwarded message ---------- Date: Thu, 15 Oct 1998 22:46:58 -0400 (EDT) From: Hispanic Linguistics UofT To: linguist at listserv.linguistlist.org Subject: Job: University of Toronto PLEASE POST Tenure-track position in Hispanic linguistics and language at the Assistant Professor level, effective 1 July 1999. Required: Ph.D. in Spanish or Linguistics; commitment to research and scholarly publication in theoretical linguistics, with a strong secondary interest in applied linguistics and second-language pedagogy; willingness to develop and supervise introductory courses in the Spanish language sequence; native or near-native proficiency in Spanish. Experience in multimedia computer technology is an asset. Applications must be received by December 4, 1998. Please send a letter of application and curriculum vitae and arrange to have three letters of reference sent to Professor Stephen Rupp, Chair, Department of Spanish and Portuguese, University of Toronto, Toronto, Ontario, Canada M5S 1A1. In accordance with Canadian immigration requirements, this advertisement is directed to Canadian citizens and permanent residents of Canada. In accordance with its employment equity policy, the University of Toronto encourages applications from qualified women and men, members of visible minorities, aboriginal peoples and persons with disabilities.
Visit the Hispling Toronto site at: http://www.chass.utoronto.ca/spanish_portuguese/hispling.html Hispanic Linguistics University of Toronto From kfeld at CITRUS.UCR.EDU Fri Oct 16 05:23:52 1998 From: kfeld at CITRUS.UCR.EDU (David B. Kronenfeld) Date: Fri, 16 Oct 1998 00:23:52 -0500 Subject: Larry Barsalou's note via Mike Tomasello Message-ID: In reference to Larry Barsalou's very interesting information regarding exemplars vs. prototypes, it is worth noting that the exemplar/prototype distinction has not been made within much semantic work in anthropology, where the contrast has been "prototype" ("kernel" or "core")-based definitions vs. distinctive feature definitions of whole categories. Discussions of "prototypes" in anthropology may really, to a greater or lesser degree, pertain to Barsalou's exemplars; it will be necessary to consider the ways that the "prototypes" in question are actually defined and used in any given case to determine how they relate to the exemplar/prototype distinction. I offer this observation because there seems some possibility of useful insights coming from both directions, and it would be a shame if such exchange were short-circuited by a labeling glitch. From haspelmath at EVA.MPG.DE Fri Oct 16 13:23:23 1998 From: haspelmath at EVA.MPG.DE (Martin Haspelmath) Date: Fri, 16 Oct 1998 13:23:23 +0000 Subject: storage/retrieval/computation Message-ID: I'd like to get back to Bill Croft's original question regarding parsimony of storage/retrieval/computation. I have two comments: First, it should be noted that the claim that "storage is easier than computation" presupposes some implicit comparison. If we say that "kids learn huge numbers of words very quickly, but they're very slow to learn general constructional patterns" (Suzanne Kemmer), how do we judge that the first is "quick", the second "slow" -- by what standards? It seems to me that the implicit comparison is to a large extent the conventional serial computer. 
Compared to computers, humans are bad at processing and good at storage. Even ten years ago (let alone today), computers were able to carry out a much larger number of successive operations per second than humans. Since the "cognitivist revolution" in the 1950s, the serial computer has served as an important point of reference for understanding human cognition. Second, Bill Croft opposed storage/retrieval to computation, but it isn't clear to me that this is the right contrast. Maybe it's mainly storage that is easy, whereas processing in general, both retrieval and computation, is more difficult. That would allow us, for instance, to preserve Haiman's argument (in "Natural syntax", CUP 1985) that the massive polysemy we find in language is economically motivated. After all, if we're so good at storage, why should we need to economize in the lexicon? Humans can easily learn 5-10 languages (if given the opportunity), so why not have a lexicon that is 5-10 times as large? Maybe the motivation is processing parsimony after all. In a huge lexicon, retrieval can be quite difficult, even though there is enough storage space. All this is pure speculation, of course, and it would be interesting to see whether the psychologists have anything to say about it. --Martin Haspelmath -- Martin Haspelmath (haspelmath at eva.mpg.de) Max-Planck-Institut fuer evolutionaere Anthropologie, Inselstr. 22, D-04103 Leipzig (Tel. +49-341-9952 307) From macw at CMU.EDU Fri Oct 16 17:39:14 1998 From: macw at CMU.EDU (Brian MacWhinney) Date: Fri, 16 Oct 1998 11:39:14 -0600 Subject: exemplars and prototypes Message-ID: Regarding David Kronenfeld's note on exemplars and prototypes and the possibility of terminological slippage, let me say that the distinction is fairly clear in the psychological literature.
An exemplar is a specific real-world instance, i.e. a particular dog or a particular candle. A prototype is a merger of the best or common features of the many exemplars. David is referring to the contrast in cognitive anthropology between featural theory and prototype theory. This contrast also exists in psychology and many papers have been written arguing for one or the other, but no one really challenges the potential relevance of exemplars during the initial phases of induction. The issue is whether the role of exemplars in the final system is secondary and peripheral or major and central. In any case, I don't sense any terminological slippage. Instead, I think there is a basic disagreement in both fields regarding (1) the relative importance of exemplars and (2) the decision to opt for feature theory vs. prototype theory. The range of my reading in cognitive anthropology is fairly restricted, so I am happy to stand corrected on this. --Brian MacWhinney From noonan at CSD.UWM.EDU Fri Oct 16 16:01:26 1998 From: noonan at CSD.UWM.EDU (Michael Noonan) Date: Fri, 16 Oct 1998 11:01:26 -0500 Subject: Job at University of Wisconsin-Milwaukee In-Reply-To: Message-ID: Assistant Professor in TESOL pedagogy and contrastive rhetoric. Tenure track, to begin Fall 1999. We seek an outstanding scholar/teacher who will actively contribute to our graduate programs in Linguistics/TESOL and Composition/Rhetoric, as well as a professional writing program and an undergraduate English composition program which responds to the needs of an ethnically and linguistically diverse population. Experience in developing and/or administering a university-level TESOL program preferred. We plan to interview at MLA. Send letter of application and CV only to Michael Noonan, Chair, Dept. of English, University of Wisconsin-Milwaukee, Milwaukee, WI 53201, postmarked no later than January 10. Questions about the position, the department, and UWM may be addressed to Michael Noonan . 
AA/EO From lakoff at COGSCI.BERKELEY.EDU Fri Oct 16 16:50:09 1998 From: lakoff at COGSCI.BERKELEY.EDU (George Lakoff) Date: Fri, 16 Oct 1998 09:50:09 -0700 Subject: Exemplars and prototypes Message-ID: Re: David Kronenfeld's and Brian MacWhinney's remarks on exemplars and prototypes: The place to find out about exemplars versus prototypes is in my Women, Fire, and Dangerous Things, Chapters 2 through 7. There are many different types of prototypes, each with different inference patterns (e.g., typical cases, ideal cases, social stereotypes, centers of radial categories, etc.), and various types of exemplars, again with different inference patterns (e.g., paragons, salient exemplars, antiparagons, etc.). Psychologists have been fairly sloppy in not distinguishing among the different logical types -- largely, I think, because they tend not to study inferences. I find this fairly bizarre, because inferences are what reasoning is about. George Lakoff From kfeld at CITRUS.UCR.EDU Fri Oct 16 16:48:42 1998 From: kfeld at CITRUS.UCR.EDU (David B. Kronenfeld) Date: Fri, 16 Oct 1998 11:48:42 -0500 Subject: exemplars and prototypes Message-ID: In case it's useful, let me briefly respond to Brian MacWhinney's helpful comments. At 11:39 AM 10/16/98 -0600, Brian MacWhinney wrote: > Regarding David Kronenfeld's note on exemplars and prototypes and the >possibility of terminological slippage, let me say that the distinction is >fairly clear in the psychological literature. An exemplar is a specific >real-world instance, i.e. a particular dog or a particular candle. A >prototype is a merger of the best or common features of the many exemplars. This is helpful to me, at least. >David is referring to the contrast in cognitive anthropology between >featural theory and prototype theory. It is within cognitive anthropology that the prototype/exemplar distinction seems not much to be made.
Some versions of prototype theory here, coming off of Rosch's work in psychology, do make use of "prototypes" in Brian's sense; but others (including my own "extensionist semantics" approach) speak of "prototypes" in a sense that is much closer to Brian's sense of "exemplars". I have sometimes used "prototypic referent" (as opposed to a "typical" one) as a way of describing a referent or exemplar that is key (as opposed to some kind of average or most frequent referent)--where I do discuss the basis behind this key role for the given referent. "Core" and "kernel" are also used to characterize such key referents. > This contrast also exists in >psychology and many papers have been written arguing for one or the other, >but no one really challenges the potential relevance of exemplars during >the initial phases of induction. The issue is whether the role of >exemplars in the final system is secondary and peripheral or major and >central. Yes. > In any case, I don't sense any terminological slippage. Instead, I think >there is a basic disagreement in both fields regarding (1) the relative >importance of exemplars and (2) the decision to opt for feature theory vs. >prototype theory. Yes. All I meant was that the major focus within anthropological discussions has been on the opposition between feature models and focal referent models; the prototype vs. exemplar distinction within focal models has not much been raised. > The range of my reading in cognitive anthropology is fairly restricted, >so I am happy to stand corrected on this. > >--Brian MacWhinney > Thanks for the information. The exemplar/prototype distinction seems quite helpful. David Kronenfeld From macw at CMU.EDU Fri Oct 16 19:32:04 1998 From: macw at CMU.EDU (Brian MacWhinney) Date: Fri, 16 Oct 1998 13:32:04 -0600 Subject: exemplars and prototypes Message-ID: I agree with David and George's comments.
I particularly agree with George's observation that psychologists have failed to introduce the needed additional terminology to deal with the different logical possibilities. Of course categorization people like Hintzman, Kruschke, Nosofsky, and maybe even Barsalou might argue that one doesn't know that an exemplar is a paragon during early acquisition, and that it only becomes a paragon after the pool of exemplars is given prototype structure. But all these further distinctions nicely facilitate thinking and theory, as George is saying. One distinction that may help David a bit is the contrast between the abstract prototype (the statistical mean of the features) and an instantiated prototype (perhaps something like George's paragon -- i.e. the robin as an "ideal" bird). --Brian MacWhinney From Ziv at HUM.HUJI.AC.IL Sun Oct 18 03:35:00 1998 From: Ziv at HUM.HUJI.AC.IL (Ziv Yael) Date: Sat, 17 Oct 1998 20:35:00 PDT Subject: Amnesty International Message-ID: ------------------------------------------------------------------------------ FORWARDED FROM: Ziv Yael Date: Thu, 15 Oct 1998 09:35:10 +0000 (GMT) From: Steve Nicolle Subject: Amnesty International To: relevance at linguistics.ucl.ac.uk Reply-To: S.Nicolle at mdx.ac.uk Organization: Middlesex University Dear Friends, To celebrate the 50th Anniversary of the Universal Declaration of Human Rights, Amnesty International is collecting signatures for a pledge to support this very important United Nations declaration. Amnesty already has 3 million signatures (real and virtual) world wide, and wants 8 million (which would be a significant proportion of the world's population of around 6 billion).
The UN Secretary General has already agreed to be present either in person or live by satellite to receive the pledge as a tangible statement of the people of the world's commitment to an international agenda of human rights. The simplest way to add your name to the pledge is to:
- Send an e-mail to udhr50th at amnesty.org.au
- Put YOUR NAME in the SUBJECT
- Put the following text in the message: 'I support the rights and freedoms in the Universal Declaration of Human Rights for all people, everywhere'.
PLEASE FORWARD THIS MESSAGE TO AS MANY PEOPLE AS YOU CAN. From kfeld at CITRUS.UCR.EDU Sat Oct 17 20:39:57 1998 From: kfeld at CITRUS.UCR.EDU (David B. Kronenfeld) Date: Sat, 17 Oct 1998 15:39:57 -0500 Subject: George Lakoff on Exemplars and prototypes Message-ID: At 09:50 AM 10/16/98 -0700, George Lakoff wrote: ... > >There are many different types of prototypes, each with different inference >patterns >(e.g., typical cases, ideal cases, social stereotypes, centers of radial >categories, etc.) and >various types of exemplars, again with different inference patterns (e.g., >paragons, salient exemplars, antiparagons, etc.). In the sense that this list represents ways in which some item can be focal within the set of referents of some expression, and thus represents kinds of focality that we should be aware of in our research, I agree. At the same time, though, I want to warn against the possibility of taking this list (or others like it) as too directly representing the distinctions that inhere in the phenomena themselves. Until we have a better understanding of the reasoning processes involved, including the underlying abilities and perceptions these reasoning processes build on, such a taxonomy seems premature. Relevant here is the question concerning the degree to which we sort potential referents into some pre-existing kinds of relational categories vs. construct our representations of relationships among referents in some more constructivist or even ad hoc manner.
> >Psychologists have been fairly sloppy in not distinguishing among the >different logical types -- largely, I think, because they tend not to study >inferences. I find this fairly bizarre, because inferences are what >reasoning is about. > Yes ! David Kronenfeld From lili at IVY.NENU.EDU.CN Sun Oct 18 01:52:44 1998 From: lili at IVY.NENU.EDU.CN (lili) Date: Sun, 18 Oct 1998 09:52:44 +0800 Subject: RELATIONAL PROCESS Message-ID: Hi! Can anybody tell me how to prove that there is a strong link between the quantity of information (how much meaning the speaker intended to convey, as represented through the text) and the real intention of the speaker (the intended bias of the speaker, who may be trying to deceive)? Namely, does the amount of information have something to do with the reporter's likes or dislikes? This question seems to have a lot to do with critical linguistics, which draws many of its resources from functional grammar. Thanks!! Li Li Foreign Languages Institute of Northeast Normal University Changchun, Jilin P.R. China Code:130024 Tel: 86-0431-5649962 Leonard_li at bigfoot.com From David_Tuggy at SIL.ORG Sat Oct 17 18:13:00 1998 From: David_Tuggy at SIL.ORG (David_Tuggy at SIL.ORG) Date: Sat, 17 Oct 1998 13:13:00 -0500 Subject: Larry Barsalou's note via Mike Tomasello Message-ID: The same is true in quite a bit of "Cognitive linguistics" work (including some I have written)--what is billed as "prototype" categorization is reacting to distinctive-feature or strict-boundary categorization, and does not have the prototype/exemplar distinction in mind. --David Tuggy ______________________________ Reply Separator _________________________________ Subject: Larry Barsalou's note via Mike Tomasello Author: kfeld at CITRUS.UCR.EDU at internet Date: 10/16/98 12:23 AM In reference to Larry Barsalou's very interesting information regarding exemplars vs.
prototypes, it is worth noting that the exemplar/prototype distinction has not been made within much semantic work in anthropology, where the contrast has been "prototype" ("kernel" or "core")-based definitions vs. distinctive feature definitions of whole categories. Discussions of "prototypes" in anthropology may really, to a greater or lesser degree, pertain to Barsalou's exemplars; it will be necessary to consider the ways that the "prototypes" in question are actually defined and used in any given case to determine how they relate to the exemplar/prototype distinction. I offer this observation because there seems some possibility of useful insights coming from both directions, and it would be a shame if such exchange were short-circuited by a labeling glitch. From David_Tuggy at SIL.ORG Sat Oct 17 20:43:00 1998 From: David_Tuggy at SIL.ORG (David_Tuggy at SIL.ORG) Date: Sat, 17 Oct 1998 15:43:00 -0500 Subject: exemplars and prototypes Message-ID: Must exemplars be "specific real-world instance[s], i.e. a particular dog or a particular candle."? Couldn't the non-particular, less-specific concept GERMAN SHEPHERD be one of the exemplars for a category such as DOG, or VOTIVE CANDLE IN A SMALL GLASS be an exemplar for CANDLE? I can't (at least consciously) recall any particular such candle, yet I would have said that kind of candle was, for me, an exemplar of the category. How would you tell for sure if a person in a psycholinguistic experiment was responding to the concept MY (GERMAN SHEPHERD) DOG DUCHESS or to DUCHESS AND DOGS LIKE HER? Couldn't a non-real-world, and generic, concept such as WOOKIE be an exemplar for ALIEN RACE? In a sense, even something as specific as MY DOG DUCHESS isn't fully specific, but is "a merger of the best or common features of the many exemplar[y]" experiences I had of Duchess. (Sorry--I'm trying to learn how these words are being used. But I suspect some others might have the same questions.) 
--David Tuggy ______________________________ Reply Separator _________________________________ Subject: exemplars and prototypes Author: macw at CMU.EDU at internet Date: 10/16/98 12:39 PM Regarding David Kronenfeld's note on exemplars and prototypes and the possibility of terminological slippage, let me say that the distinction is fairly clear in the psychological literature. An exemplar is a specific real-world instance, i.e. a particular dog or a particular candle. A prototype is a merger of the best or common features of the many exemplars. David is referring to the contrast in cognitive anthropology between featural theory and prototype theory. This contrast also exists in psychology and many papers have been written arguing for one or the other, but no one really challenges the potential relevance of exemplars during the initial phases of induction. The issue is whether the role of exemplars in the final system is secondary and peripheral or major and central. In any case, I don't sense any terminological slippage. Instead, I think there is a basic disagreement in both fields regarding (1) the relative importance of exemplars and (2) the decision to opt for feature theory vs. prototype theory. The range of my reading in cognitive anthropology is fairly restricted, so I am happy to stand corrected on this. --Brian MacWhinney From macw at CMU.EDU Sun Oct 18 07:17:07 1998 From: macw at CMU.EDU (Brian MacWhinney) Date: Sun, 18 Oct 1998 01:17:07 -0600 Subject: exemplars and prototypes In-Reply-To: <199810180430.XAA17801@listserv.rice.edu> Message-ID: --On Sat, Oct 17, 1998 3:43 PM -0500 David_Tuggy at SIL.ORG wrote: > Must exemplars be "specific real-world instance[s], i.e. a particular > dog or a particular candle."? Couldn't the non-particular, > less-specific concept GERMAN SHEPHERD be one of the exemplars for a > category such as DOG, or VOTIVE CANDLE IN A SMALL GLASS be an > exemplar for CANDLE? Sure, that's fine. 
That would be a subordinate category serving as a part of the database for a superordinate. But it is not what exemplar theories in psychology are assuming. They are assuming some real German Shepherd, not the union of the features of all German Shepherds you have met. Exemplars are real things. Like the third votive candle from the left in my cupboard -- the one with the green tinge and heavy base. By the way, don't all votive candles end up being in small glasses? --Brian MacWhinney From chafe at HUMANITAS.UCSB.EDU Sun Oct 18 23:29:19 1998 From: chafe at HUMANITAS.UCSB.EDU (Wallace Chafe) Date: Sun, 18 Oct 1998 16:29:19 -0700 Subject: exemplars and prototypes In-Reply-To: <479289.3117662227@jubilation.psy.cmu.edu> Message-ID: On Sun, 18 Oct 1998, Brian MacWhinney wrote: > Exemplars are real things. Like the third votive candle from the left in > my cupboard -- the one with the green tinge and heavy base. It seems to me that the distinction here should not be based on "reality" (whatever that is) vs. unreality, but on particular instances vs. categories. Imagined entities can be particulars just as well as real ones. Moby Dick is as particular a whale as Willy. Wally Chafe From susan at LING.UTA.EDU Wed Oct 21 21:26:26 1998 From: susan at LING.UTA.EDU (Susan Herring) Date: Wed, 21 Oct 1998 16:26:26 -0500 Subject: New Ph.D. program in Linguistics Message-ID: * NEW DOCTORAL PROGRAM IN LINGUISTICS * * AND OPPORTUNITIES FOR STUDENT SUPPORT * * OFFERED AT THE UNIVERSITY OF TEXAS AT ARLINGTON * The University of Texas at Arlington (UTA) announces the availability of four supported doctoral positions for new students entering the UTA Ph.D. program in Linguistics in Spring 1999. The Ph.D. in Linguistics at UTA, among the newest doctoral programs in linguistics available in the United States, provides students with education and training in a range of specializations, including discourse analysis and text linguistics, sociolinguistics, semantics and translation, and literacy. Special attention is given to the role of field work in linguistic studies, including the study and documentation of lesser-studied languages.
Training is also provided in the application of computing methods to linguistic analysis. Supported doctoral positions will be awarded on a competitive basis to new students accepted into the program on or before January 12, 1999. Support will likely take the form of research assistantships (no teaching required) in which students contribute to the research activities of the Program in Linguistics according to the program's needs and students' backgrounds, interests, and skills. Successful candidates will be guaranteed support for the spring semester, and be eligible for continuing support in subsequent academic years. In addition to the new Ph.D., the Linguistics Program at UTA continues to offer an M.A. in Linguistics as well as a 19-hour Graduate Certificate in TESOL. For further information about graduate study in linguistics at UTA or to request an application for admission and support to the doctoral program, contact the Linguistics Graduate Advisor, Dr. Irwin Feigenbaum, at irwin at ling.uta.edu or (817) 272-3133. Information on degree requirements, faculty, and course offerings is available on the UTA Linguistics web site at http://ling.uta.edu. The University of Texas at Arlington, the second-largest campus in the University of Texas System, is located in the center of the Dallas-Fort Worth Metroplex, a major urban and cultural area in the United States. Information about the University of Texas at Arlington is available at http://www.uta.edu. From eitkonen at UTU.FI Thu Oct 22 10:51:48 1998 From: eitkonen at UTU.FI (Esa Itkonen) Date: Thu, 22 Oct 1998 13:51:48 +0300 Subject: psychological reality Message-ID: The dichotomy 'concrete forms in storage & short computations vs. abstract forms in storage & long computations' was THE issue when the question of psychological reality was discussed in the 70's (and the question is still with us today). Per Linell's 300-page book 'Psychological reality in phonology' (CUP 1979) is devoted to this issue (see e.g.
the section on 'Demand for excessive computing'). I don't think it would hurt anybody to have a look at this book, which shows, once again, that very often (although not always) what we would like to see as progress is nothing but ignorance of the past. The dichotomy 'memorization vs. rule-generalization (i.e. analogy)' is a somewhat separate issue because it concerns learning; and something that has first been learned by analogy can later become memorized as such. Esa Itkonen From charon at UCLINK4.BERKELEY.EDU Fri Oct 23 06:57:19 1998 From: charon at UCLINK4.BERKELEY.EDU (charon at UCLINK4.BERKELEY.EDU) Date: Thu, 22 Oct 1998 23:57:19 -0700 Subject: 2nd BLS Call for Papers Message-ID: Please distribute the following announcement to all interested parties. THE BERKELEY LINGUISTICS SOCIETY BLS 25 CALL FOR PAPERS The Berkeley Linguistics Society is pleased to announce its Twenty-Fifth Annual Meeting, to be held February 13-15, 1999. The conference will consist of a General Session and a Parasession on Saturday and Sunday, followed by a Special Session on Monday. **************************************************************************** *** General Session: The General Session will cover all areas of general linguistic interest. Invited Speakers CAROL FOWLER, Haskins Laboratories, Univ. of Connecticut, Yale Univ. STEPHEN LEVINSON, Max Planck Institut für Psycholinguistik, Nijmegen BJØRN LINDBLOM, Univ. of Stockholm and Univ. of Texas, Austin ALEC MARANTZ, Massachusetts Institute of Technology **************************************************************************** *** Parasession: Loan Word Phenomena The Parasession invites papers on loan word phenomena from various theoretical, historical, sociolinguistic, and typological perspectives, as well as descriptive works and field reports. 
Areas of interest include stratification of the lexicon and loan word 'subgrammars', re-lexification, the role of orthography, markedness effects, second-language acquisition, child language, bilingualism and code-switching, etc. Invited Speakers ELLEN BROSELOW, State University of New York, Stony Brook GARLAND CANNON, Texas A&M University JUNKO ITO & ARMIN MESTER, University of California, Santa Cruz **************************************************************************** *** Special Session: Issues in Caucasian, Dravidian and Turkic Linguistics The Special Session will feature research on Caucasian, Dravidian and Turkic languages. Papers addressing both diachronic and synchronic issues are welcome. Potential topics include theoretical and descriptive accounts of structural features, writing systems and transcription problems, language reform, and the reconstruction of the respective Proto-languages, including the question of Altaic linguistic unity. Invited Speakers LARS JOHANSON, Universität Mainz K.P. MOHANAN, National University of Singapore JOHANNA NICHOLS, University of California, Berkeley **************************************************************************** *** We encourage proposals from diverse theoretical frameworks and welcome papers from related disciplines, such as Anthropology, Cognitive Science, Computer Science, Literature, Philosophy, and Psychology. Papers presented at the conference will be published in the Society's Proceedings, and authors who present papers agree to provide camera-ready copy (not to exceed 12 pages) by May 15, 1999. Presentations will be allotted 20 minutes with 10 minutes for questions. We ask that you make your abstract as specific as possible, including a statement of your topic or problem, your approach, and your conclusions. Please send 10 copies of an anonymous one-page (8 1/2" x 11", unreduced) abstract. A second page, or reverse side of the single page, may be used for data and references only. 
Along with the abstract send a 3"x5" card listing: (1) paper title, (2) session (general, Parasession, or special), (3) for general session abstracts only, subfield, viz., Discourse Analysis, Historical Linguistics, Morphology, Philosophy and Methodology of Linguistics, Phonetics, Phonology, Pragmatics, Psycholinguistics, Semantics, Sociolinguistics, or Syntax, (4) name(s) of author(s), (5) affiliation(s) of author(s), (6) address to which notification of acceptance or rejection should be mailed (in November 1998), (7) author's office and home phone numbers, (8) author's e-mail address, if available. An author may submit at most one single and one joint abstract. In case of joint authorship, one address should be designated for communication with BLS. Send abstracts to: BLS 25 Abstracts Committee, 1203 Dwinelle Hall, University of California, Berkeley, CA 94720. Abstracts must be received by 4:00 p.m., November 2, 1998. We may be contacted by e-mail at bls at socrates.berkeley.edu. Information on e-mail submission and additional guidelines for abstracts can be found at our web site (http://www.linguistics.berkeley.edu/BLS). We will not accept faxed abstracts. Registration Fees: Before February 5, 1999: $15 for students, $30 for non-students; after February 5, 1999: $20 for students, $35 for non-students. From dick at LINGUISTICS.UCL.AC.UK Fri Oct 23 15:52:33 1998 From: dick at LINGUISTICS.UCL.AC.UK (Dick Hudson) Date: Fri, 23 Oct 1998 16:52:33 +0100 Subject: Deacon Message-ID: I've just read Terrence Deacon's lovely book "The Symbolic Species", and wondered if anyone could point me to a review either by a linguist or by a supporter of Chomsky's or Pinker's view of innateness. I'd also be interested in the views of anyone on this list.
============================================================================== Richard (=Dick) Hudson Department of Phonetics and Linguistics, University College London, Gower Street, London WC1E 6BT work phone: +171 419 3152; work fax: +171 383 4108 email: dick at ling.ucl.ac.uk web-sites: home page = http://www.phon.ucl.ac.uk/home/dick/home.htm unpublished papers available by ftp = ....uk/home/dick/papers.htm From mackenzi at LET.VU.NL Mon Oct 26 14:32:47 1998 From: mackenzi at LET.VU.NL (J.L. Mackenzie) Date: Mon, 26 Oct 1998 14:32:47 MET Subject: Deacon Message-ID: Like Dick Hudson, I too have just read and enormously enjoyed Terrence Deacon's *The Symbolic Species* (Penguin, 1997). Not only is the book very well written, but it is also the most eloquent, erudite and effective debunking of the nativist position on the language abilities of homo sapiens that I have seen. He takes Peirce's distinction between icon, index and symbol and argues that, against the evolutionary odds, the ancestors of the human being 2 million years ago acquired the power of symbolic thought, and claims that language developed as an evolutionary consequence. Against the nativist position that "the language faculty must be innate, otherwise no child could learn it", Deacon argues -- to me convincingly -- that language structure has a "kid-friendly logic", i.e. has evolved to be such that it can be readily acquired by children with their particular thought processes at their stage of mental development. Indeed, the human being has evolved to become a "savant of language and symbolic learning"; the genetic basis for symbol-learning abilities has become a fixation, i.e. a universal trait of the species. This book, with its claim that language and the brain have co-evolved, seems to me to offer an alternative view on the cognitive status of language, one that will appeal to functionalists and dovetail with our research findings. 
Not unusually, Deacon tends to equate linguists with nativists, but he is aware of the functionalist psycholinguistic tradition from Vygotsky to Bates. Dick Hudson asked for reviews. I've seen the following: http://www.nytimes.com/books/97/08/10/reviews/970810.10calvint.html (by William Calvin) and http://www.wam.umd.edu/~mturn/WWW/deacon.html (by Mark Turner) Lachlan Mackenzie Department of English Faculty of Letters Vrije Universiteit De Boelelaan 1105 1081 HV Amsterdam Netherlands tel: +31-20-444 6492 fax: +31-20-444 6500 home phone: +31-20-671 1491 e-mail: mackenzi at let.vu.nl From coulson at COGSCI.UCSD.EDU Tue Oct 27 01:22:05 1998 From: coulson at COGSCI.UCSD.EDU (Seana Coulson) Date: Mon, 26 Oct 1998 17:22:05 -0800 Subject: Deacon Message-ID: While cleaning my desk I turned up another review of Deacon's fantastic book (_The Symbolic Species_) besides the ones mentioned already on the list: Poeppel, David. 1997. Mind over chatter. Nature 388: 734. Poeppel received his Ph.D. from MIT's Department of Brain and Cognitive Science, and in 1997 anyway, was doing a neuroimaging post-doc at UCSF's Biomagnetic Imaging Laboratory. -Seana Coulson From jkyle at EAGLE.CC.UKANS.EDU Thu Oct 29 03:20:53 1998 From: jkyle at EAGLE.CC.UKANS.EDU (John Kyle) Date: Wed, 28 Oct 1998 21:20:53 -0600 Subject: Call for Papers (KWPL) Message-ID: Please Post ***********************Call for Papers************************ *****KANSAS WORKING PAPERS IN LINGUISTICS***** Number 1: General Linguistics Number 2: Studies in Native American Languages Deadline: January 31, 1999 The editors of Kansas Working Papers in Linguistics will produce two numbers of Volume 24, for 1999. We welcome submissions of papers on all topics in the field of linguistics and closely-related disciplines for Number 1. Papers dealing with native languages of the Americas will be selected for Number 2. 
Since KWPL is a working papers series, publication in KWPL does not preclude later publication elsewhere of revised versions of papers. Submissions should be in good readable form (double or 1.5 spaced), not necessarily final copies. Student papers are encouraged. Please include your name, address, and email address (if possible) when sending correspondence. Please send papers or inquiries to this address: Editors, KWPL Linguistics Department 427 Blake Hall University of Kansas Lawrence, Kansas 66045 e-mail: LGSA at kuhub.cc.ukans.edu ******************************* John Kyle, editor KWPL jkyle at ukans.edu From bralich at HAWAII.EDU Fri Oct 30 22:30:01 1998 From: bralich at HAWAII.EDU (Philip A. Bralich, Ph.D.) Date: Fri, 30 Oct 1998 12:30:01 -1000 Subject: Ergo Talks with Microsoft Agents (Free Software) Message-ID: Microsoft has recently made some of its agent technology available on the web at http://www.microsoft.com/agents. The most well-known is a 3-D parrot called "Peedy." Ergo Linguistics has just modified its patented "ChatterBox" technology to make it possible to speak with "Peedy" and the other agents. For those who are interested in viewing this talking desktop agent, we can provide the necessary files, which will set everything up and put the "Peedy" icon on the desktop. The "ChatterBox.exe" file will set up ChatterBox, which will automatically allow you to speak to Peedy. Once you have set up ChatterBox and "Peedy" from this setup file, just type in sentences like the following; you can then ask the corresponding questions. 
John gave mary a book because it was her birthday
did John give mary a book
what did john give mary
who gave mary a book
who did john give a book
why did john give mary a book
the tall dark stranger is carrying a bloody knife
what is the stranger doing
what is the stranger carrying
was the stranger carrying a knife
you saw the tall dark stranger in the park
where did you see the stranger
what did you see
what did you see in the park
thomas jefferson is the third president of the United States
who is the third president of the United States
The Yankees won the 1998 World Series
WHAT won the 1998 World Series
*currently the program does not know that the "Yankees" are people, so it is necessary to use "What" for this question.
and so on. Of course you could build a variety of story or educational files to talk to Peedy about, but for this early version it is just fun to put in a few sentences and chat with him. This is also available with the Virtual Friend technology at http://www.haptek.com. Our web site is http://www.ergo-ling.com if you have any further interest in our NLP technology. Or... if you have a WIN95 animation of your own we would be happy to show you how to connect ChatterBox to it. I will be showing this in Boston at the SBIR National Conference November 3-5. I will also be giving a lecture and demonstration of this technology at Northeastern University (Thursday at noon, room 415 in the Classroom Building) while I am there. If you have anyone in town at that time or at that conference, ask them to stop by and I will give them a more thorough introduction to the ChatterBox technology and our other NLP tools. Because my company is an SBIR grantee we will have display space in the SBIR section near the main entrance. Phil Bralich Philip A. Bralich, Ph.D. President and CEO Ergo Linguistic Technologies 2800 Woodlawn Drive, Suite 175 Honolulu, HI 96822 Tel: (808)539-3920 Fax: (808)539-3924 bralich at hawaii.edu http://www.ergo-ling.com From jtang at COGSCI.BERKELEY.EDU Sat Oct 31 19:24:47 1998 From: jtang at COGSCI.BERKELEY.EDU (Joyce Tang Boyland) Date: Sat, 31 Oct 1998 11:24:47 -0800 Subject: storage / computation again Message-ID: This is a bit of a delayed response to Brian (I'm just coming off of grading mid-terms): I hope I did not imply that we should use a model of word learning as the basis for understanding the automatization of syntactic patterns. If anything, it should be the opposite, that we might use the automatization of syntactic patterns as the basis for understanding the formation of new words (like _brokeded_ which does have "slots" so to speak and transitions). Also, building off Martin Haspelmath's posting, which muses about criteria to use in deciding whether storage or processing is superior, I think it's worth trying out a stronger claim, namely that neither storage+retrieval nor computation is intrinsically superior. In skill learning, you can start either with storage+retrieval of an exemplar (or prototype), or with computation of a sequence. It depends on the details of the situation which route is used. But whichever way you learn to use a new construction, the outcome is not that you have added a new construction (or even a new word) to your piggy bank, but that you have smoothed a set of transitions so that such a sequence is more readily assembled and produced. I agree with Brian that it would be great to have more empirical psycholinguistic research on these topics, and that it is relevant both to acquisition and to historical change. Joyce Tang Boyland >> Date: Wed, 14 Oct 1998 17:34:12 -0600 >> Sender: FUNKNET -- Discussion of issues in Functional Linguistics >> From: Brian MacWhinney >> Subject: rote vs rules >> >> 3. I agree with Joyce that language is a skill. However, the devil is in >> the details. 
If we fail to recognize the fundamental difference between >> word learning and syntactic automatization, I am worried that we could go >> down some false paths. The routinization of the word is supported by a >> tightly predictive association between audition and articulation. When we >> hear a new auditory form, it appears that we use the phonological loop on >> some level to store it. As we then attempt to match to this form using our >> own articulations, we convert a resonant auditory form to an entrenched >> articulatory form. Work by Baddeley, Gupta, Cowan and others has taught us >> a great deal about the details of this process. Yes, you can use ACT-R to >> model this, but you will be using a restricted subset of ACT-R and the >> process of deciding which restricted subset is applicable is the whole of >> the scientific process of understanding the neuronal mechanics of word >> learning. >> >> Trying to use a model of word learning as the basis for understanding the >> automatization of syntactic patterns strikes me as quite problematic. The >> central problem is that predicates have open slots for arguments. Words, >> as Wally notes, are largely slot-free (of course there are exceptions, such >> as infixes etc.). I tend to think of this level of skill automaticity in >> terms of Michael Jordan faking out Karl Malone in the last points of the >> final game of the NBA finals. Jordan clearly has a flexible set of plans >> for dunking the ball into the basket against the opposition of a defender. >> What is automatic in his actions is the move from one state to the next. >> The skill is in the transitions. It strikes me that sentence production is >> like this and that word level articulation is basically not. 
>> >> --Brian MacWhinney From erhard.voeltz at UNI-KOELN.DE Mon Oct 5 10:25:43 1998 From: erhard.voeltz at UNI-KOELN.DE (Erhard Voeltz) Date: Mon, 5 Oct 1998 12:25:43 +0200 Subject: Ideophone Symposium Message-ID: The Institut für Afrikanistik, Universität zu Köln D-50923 Cologne, Germany/Allemagne wishes to announce the: International Symposium on Ideophones. January 25-27, 1999 F. K. Erhard Voeltz & Christa Kilian-Hatz, Conveners The symposium will convene at the: Arnold-Janssen-Haus Tagungsheim, Heimvolkshochschule Arnold-Janssen-Straße 24 D-53754 Sankt Augustin Preliminary Program Barry Alpher. Intonation, given and new, and ideophones in some indigenous languages of Australia. Yiwola Awoyale. Form-meaning interface in Yoruba ideophones. Amha Azeb. Ideophones in Wolaitta. Robert Blust. Is there a universal level of structure between the phoneme and the morpheme? Farida Cassimjee & Charles W. Kisseberth. The tonology of the ideophone in Emakhuwa. Tucker G. Childs. Ideophones, language variation, and language contact: Changes in Zulu. Denis Creissels. Ideophones as uninflected predicative lexemes in Setswana. Didier Demolin. Ideophones and sound symbolism in Hendo. Gérard Dumestre. L'intégration des éléments idéophoniques dans la langue : le cas du bambara. Francis O. Egbokhare. Phono-semantic correspondence in Emai attributive ideophones. Stefan Elders. Defining ideophones in Mundang. Vesa Jarva. Expressive use of words of foreign origin. Christa Kilian-Hatz. Ideophone or not? Sataro Kita. Two-dimensional semantic analysis of Japanese ideophones. Merja Koistinen. Syntactic structure of expressive words: Colorative construction. Daniel P. Kunene. Speaking the act: The ideophone as a linguistic rebel. Omen N. Maduka-Durunze. Phonosemantic hierarchies. 
Ngandu-Myango Malasi. Comportement des idéophones en lega (D. 25). William McGregor. Ideophones as the source of verbs in northern Australian languages. Eve Mikone. Are there ideophones in the Balto-Finnic languages? Paul Newman. Are ideophones really as weird and extra-systematic as linguists make them out to be? Philip A. Noss. Ideas, phones and Gbaya verbal art. Janis Nuckolls. Sound symbolic grammar of Pataza Quechua. George Poulos & C. T. Msimang. The ideophone in Zulu--a reexamination of descriptive and conceptual notions. Paulette Roulon-Doko. Statut des idéophones en gbaya, langue oubanguienne de Centrafrique. William J. Samarin. One tale, one narrator, three languages: Ngbandi, Sango, French. S. C. Satyo. Ideophones and ideograms in Xhosa. Ronald P. Schaefer. Ideophonic adverbs and manner gaps in Emai. Eva Schultze-Berndt. Traces of ideophones in complex predicates of Northern Australia. Tasso Okombe. La formation des radicaux verbaux déidéophoniques en tetela (dialecte ewango). Jess Tauber. Complementary distribution of distinct "ideophone" classes in left- vs. right-headed languages. F. K. Erhard Voeltz. The sound of silence: Contributions of African languages to our understanding of the ideophone. Richard L. Watson. A comparison of ideophones in Southeast Asia and Africa. For further information: Tel: 49.221.470.4741 Fax: 49.221.470.5158 e-mail: erhard.voeltz at uni-koeln.de e-mail: christa.kilian at uni-koeln.de From W.Croft at MAN.AC.UK Mon Oct 5 14:12:45 1998 From: W.Croft at MAN.AC.UK (Bill Croft) Date: Mon, 5 Oct 1998 15:12:45 +0100 Subject: Storage parsimony vs. computing parsimony Message-ID: Dear Funknetters, There is a strong methodological imperative in almost all formalist grammatical analyses, and in a fair number of functionalist analyses, towards what I call "storage parsimony". 
What I mean is that, in aiming towards "simplicity", "elegance", "[unmodified] parsimony", analyses are proposed with minimum redundancy, the smallest number of distinct underlying lexical forms or syntactic construction types---that is, the fewest items that have to be stored, in a lexicon, morphological paradigm, set of syntactic rules, phoneme segment inventory, etc. The result of this is that the analyses require a lot of computation, using derivational rules, linking rules, inheritance, filling in of unspecified information, etc.; plus the constraints that turn out to be required in order to make the computations happen in the right places but not in the wrong places. In other words, storage parsimony leads to computing complexity. A number of functionalist models eschew storage parsimony, opting instead for computing parsimony. These are the "usage-based" models, polysemy/radial category models, Bybee-style morphological models, construction grammar models which allow redundant specification of constructional information in the network, etc. The rationale behind such models is that human beings are a lot better at storage and retrieval of information than at computational operations involving complex symbol-manipulation. Of course, computing parsimony, used as a way to judge analyses, is a methodological imperative as much as storage parsimony is; but at least it supposedly has the merit of being closer to what psychologists believe about how human beings' minds work. My question is: what IS the psychological or psycholinguistic evidence that people are better at storage/retrieval than at computation (of the symbol-manipulating sort that I alluded to)? I had this question posed to me when I was once defending the computing parsimony approach, and I didn't know of any references to make this point. Please give me specific references to the literature, if possible, since I would like to refer to these methodological approaches in a book I'm writing. 
And I would also like to know approximately how widely it is believed among psychologists that people are better at storage/retrieval than at symbol computation. Thanks very much, Bill Croft Dept of Linguistics Univ of Manchester w.croft at man.ac.uk From macw at CMU.EDU Mon Oct 5 17:20:55 1998 From: macw at CMU.EDU (Brian MacWhinney) Date: Mon, 5 Oct 1998 11:20:55 -0600 Subject: storage Message-ID: Bill, Yes, psychology is rich with evidence regarding the storage-retrieval issue. I suppose that one could easily locate 500 recent articles on this topic. Going to this literature for specific references to cite is fine, but perhaps you also want to get the lay of the land. Consider the issue of reading a new unknown word vs reading a known word. For the known word, we may do some phoneme-grapheme analysis, but we have a well-greased lexically specific route to the word's sound. The more frequent the word, the faster we read it, etc. My guess is that half of the papers in the area of word recognition report significant frequency effects. Consider a simple result from Stemberger and MacWhinney (1986, Memory and Cognition). We found that high frequency regular past tense forms (wanted) are more resistant to irregularizing speech errors than are low frequency regular past tense forms (panted). We believe that this is due to the fact that high frequency forms are more thoroughly stored for unitized rote retrieval. More broadly, psychologists tend to view this in terms of the power law of learning. Any process that is done repeatedly gets stronger according to a power function. Typically, this is understood as occurring through chunking. When lots of pieces tend to occur together frequently, the whole starts to function as a single chunk. There is a lot of research and modeling literature on this. People at CMU like Anderson, Simon, Newell, Just, etc. have done lots on this. One twist on the power law emphasizes how learning of a given form leads to entrenchment. 
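The power-law claim above is easy to state concretely. A minimal sketch, with constants invented purely for illustration (the function and its parameters are not from any of the studies cited):

```python
# Illustrative sketch of the "power law of learning": the strength of a
# repeated process (modeled here as predicted retrieval time) improves as
# a power function of the amount of practice, RT(n) = a * n**(-b).
# The constants a and b are made up for illustration only.

def practice_rt(n, a=2.0, b=0.4):
    """Predicted retrieval time (arbitrary units) after n practice trials."""
    return a * n ** (-b)

# High-frequency forms (heavily practiced) are retrieved faster than
# low-frequency ones -- the frequency effects mentioned above.
rt_low = practice_rt(10)     # a form encountered 10 times
rt_high = practice_rt(1000)  # a form encountered 1000 times
assert rt_high < rt_low
```

Note that the curve flattens with practice: further repetitions of an already entrenched form yield smaller and smaller gains.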
Joan Bybee's work certainly builds on such findings and elaborates them. Another way of viewing this is in terms of the strategy-selection model as developed by Reder, Siegler, and others, mostly for math, reasoning, etc. When confronted with a given problem, we have to choose whether to retrieve or analyze. We usually apply a quick filter on the problem that decides which of these two approaches will be most useful and then go from there. For language, this framework might be useful for high level strategy choices in complex syntax and pragmatics. This is an enormous area. Reading the recent work in this area is somewhat complicated by the fact that current models emphasize connectionist modeling, which tends to distract from the issue you are asking. For this reason, you might find textbooks from the early 80s or even 70s clearer on these issues than some recent textbooks. However, if you stick with models like John Anderson's ACT-R model as reported in his recent books and textbooks, the role of frequency is clearly highlighted. By the way, none of these remarks have anything to do with the really really complex symbol manipulation models linguists often propose. Instead, for psycholinguists, the dichotomy usually involves extremely trivial rules like "add -ed" or "ph sounds like f". We pretty much discarded any belief in the psychological reality of really complex formal linguistic rules in the 1970s. This is not to say that psycholinguists are not interested in abstract categories. Some are still playing around with traces, universal parts of speech, abstract syntactic structure, and the like, but this work seldom gets grounded in really complex and abstract rule systems. --Brian MacWhinney From bralich at HAWAII.EDU Wed Oct 7 02:32:30 1998 From: bralich at HAWAII.EDU (Philip A. Bralich, Ph.D.) Date: Tue, 6 Oct 1998 16:32:30 -1000 Subject: Patent Translator/Syntacticians needed at Ergo Message-ID: Our U.S. 
patent on our NLP technology was allowed just several months ago and we now need to find translations of the patent document for international patents. The patent is 114 pages in length (including drawings and abstract), and it must be translated into the major languages of the world for International Patents in the major countries of the world. Please send bids along with a resume which includes experience with theoretical syntax and patents. Be sure to include an estimate of the time required to complete the translation and a track record of completed translations. Necessary languages will include French, Spanish, Russian, Arabic, Chinese (of the three districts), Japanese, Korean, German, Thai, etc. Please send bids and resumes to the address below. Bids from organizations which can handle more than one of the required languages would be welcomed. You may preview our software on our web site at http://www.ergo-ling.com. You can also download some of our products there. A copy of the patent will be provided to those who submit acceptable bids and resumes. The job will begin in late 1998 or early 1999. Phil Bralich Philip A. Bralich, President Ergo Linguistic Technologies 2800 Woodlawn Drive, Suite 175 Honolulu, HI 96822 tel:(808)539-3920 fax:(808)539-3924 From W.Schulze at LRZ.UNI-MUENCHEN.DE Wed Oct 7 08:50:15 1998 From: W.Schulze at LRZ.UNI-MUENCHEN.DE (Wolfgang Schulze) Date: Wed, 7 Oct 1998 10:50:15 +0200 Subject: Storage parsimony vs. computing parsimony Message-ID: Dear Bill, I cannot go beyond what Brian MacWhinney told you (esp. because I'm a linguist and not a (trained) psychologist). But I think recent work on linguistic constructivism (or vice versa) will provide you with some relevant arguments. I myself have developed some theoretical proposals concerning the whole question. They are part of my language/grammar theory approach "Grammar of Scenes and Scenarios" (GSS) that is (fragmentarily) documented in my book "Person-Klasse-Kongruenz. 
Fragmente einer Kategorialtypologie des einfachen Satzes in den ostkaukasischen Sprachen" (vol. 1, in two parts, "Die Grundlagen", to appear next week at LINCOM, Munich). Don't worry about "East Caucasian": That's "only" the field of evaluation for GSS that I have chosen. Vol. 1 is basically theoretical. Also see what the NTL people do in Berkeley. Prof. Dr. Wolfgang SCHULZE Institut für Allgemeine und Indogermanische Sprachwissenschaft * Universität München Geschwister-Scholl-Platz 1 * D-80539 München Tel.: +89-2180 2486 http://www.lrz-muenchen.de/~wschulze/ From cumming at HUMANITAS.UCSB.EDU Wed Oct 7 21:24:42 1998 From: cumming at HUMANITAS.UCSB.EDU (Susanna Cumming) Date: Wed, 7 Oct 1998 14:24:42 -0700 Subject: Phonetics/Phonology job -- UCSB Message-ID: UNIVERSITY OF CALIFORNIA SANTA BARBARA DEPARTMENT OF LINGUISTICS FACULTY POSITION IN PHONETICS OR PHONOLOGY The Linguistics Department at the University of California, Santa Barbara, seeks a linguist for an Assistant Professor, tenure-track position in phonetics or phonology, to take effect July 1, 1999. Ability to teach courses in both fields is preferred. Active research on a variety of languages, and in one or more of the following areas, will be considered a plus: competing theories in phonology or phonetics; phonetics-phonology interactions; natural discourse prosody; instrumental phonetic analysis; cross-linguistic, typological, developmental, or historical-comparative studies of phonological-phonetic systems. Ph.D. required at time of appointment. Applications must include a cover letter, curriculum vitae, and representative publications. Candidates should arrange to have three letters of recommendation sent by application date. Preliminary interviews will be held in January, 1999 at the LSA meeting in Los Angeles. For full consideration, applications must be received by December 11. Address inquiries and applications to: Search Committee Dept. 
of Linguistics UC Santa Barbara Santa Barbara, CA 93106 e-mail: Lingsearch at humanitas.ucsb.edu Fax (805) 893-7769 UCSB is an equal opportunity/affirmative action employer. Women and minorities are encouraged to apply. From pip at HN.PL.NET Thu Oct 8 02:25:23 1998 From: pip at HN.PL.NET (Philippa Nicoll) Date: Thu, 8 Oct 1998 15:25:23 +1300 Subject: Patent Translator/Syntacticians needed at Ergo In-Reply-To: <2.2.16.19981006162542.0ef7bf46@mana.htdc.org> Message-ID: At 04:32 PM 10/6/98 -1000, Philip A. Bralich, Ph.D. wrote: >Our U.S. patent on our NLP technology was allowed just >several months ago and we now need to find translations of the >patent document for international patents. The patent is 114 >pages in length (including drawings and abstract), and it must >be translated into the major languages of the world for >International Patents in the major countries of the world. >Please send bids along with a resume which includes experience >with theoretical syntax and patents. Be sure and include an >estimate of the time required to complete the translation and a >track record of completed translations. Necessary languages will >include French, Spanish, Russian, Arabic, Chinese (of the three >districts), Japanese, Korean, German, Thai, etc. Please send >bids and resumes to the address below. Bids from organizations >which can handle more than one of the required langauges would be >welcomed. You may preview our software on our web site at >http://www.ergo-ling.com. You can also download some of our >products there as well. A copy of the patent will be provided to >those who submit acceptable bids and resumes. > >The job will begin in late 1998 or early 1999. > >Phil Bralich > > >Philip A. Bralich, President >Ergo Linguistic Technologies >2800 Woodlawn Drive, Suite 175 >Honolulu, HI 96822 >tel:(808)539-3920 >fax:(880)539-3924 THIS EMAIL IS NO LONGER WITH THE PEOPLE YOU THINK IT IS... sorry.. 
From pip at HN.PL.NET Thu Oct 8 02:25:29 1998 From: pip at HN.PL.NET (Philippa Nicoll) Date: Thu, 8 Oct 1998 15:25:29 +1300 Subject: Storage parsimony vs. computing parsimony In-Reply-To: <361B2B47.5741@mail.lrz-muenchen.de> Message-ID: At 10:50 AM 10/7/98 +0200, Wolfgang Schulze wrote: >Dear Bill, > >I cannot go beyond what Brian WcWhinney told you (esp. because I'm a >linguist and not a (trained) psychologist). But I think recent work on >linguistic constructivism (or vice versa) will provide you with some >relevant arguments. I myself have developed some theoretical proposals >concerning the whole question. They are part of my language/grammar >theory approach "Grammar of Scenes and Scenarios" (GSS) that is >(fragmentarilly) documented in my book "Person-Klasse-Kongruenz. >Fragmente einer Kategorialtypologie des einfachen Satzes in den >ostkaukasischen Sprachen" (vol 1 (in two parts) "Die Grundlagen" (to >appear next week at LINCOM (Munich). Don't worry about "East Caucasian": >That's "only" the field of evaluation for GSS that I have chosen. Vol.1 >is basically theoretical.- Also see what the NTL people do in Berkeley. > >Prof. Dr. Wolfgang SCHULZE >Institut für Allgemeine und Indogermanische >Sprachwissenschaft * Universität München >Geschwister-Scholl-Platz 1 * D-80539 München >Tel.: +89-2180 2486 >http://www.lrz-muenchen.de/~wschulze/ THIS EMAIL IS NO LONGER WITH THE PEOPLE YOU THINK IT IS... sorry.. From pip at HN.PL.NET Thu Oct 8 02:25:42 1998 From: pip at HN.PL.NET (Philippa Nicoll) Date: Thu, 8 Oct 1998 15:25:42 +1300 Subject: Phonetics/Phonology job -- UCSB In-Reply-To: Message-ID: At 02:24 PM 10/7/98 -0700, Susanna Cumming wrote: >UNIVERSITY OF CALIFORNIA SANTA BARBARA >DEPARTMENT OF LINGUISTICS >FACULTY POSITION >IN >PHONETICS OR PHONOLOGY > >The Linguistics Department at the University of California, Santa Barbara, >seeks a linguist for an Assistant Professor, tenure-track position in >phonetics or phonology, to take effect July 1, 1999. 
Ability to teach >courses in both fields is preferred. Active research on a variety of >languages, and in one or more of the following areas, will be considered a >plus: competing theories in phonology or phonetics; phonetics-phonology >interactions; natural discourse prosody; instrumental phonetic analysis; >cross-linguistic, typological, developmental, or historical-comparative >studies of phonological-phonetic systems. Ph.D. required at time of >appointment. > >Applications must include a cover letter, curriculum vitae, and >representative publications. Candidates should arrange to have three >letters of recommendation sent by application date. Preliminary interviews >will be held in January, 1999 at the LSA meeting in Los Angeles. For full >consideration, applications must be received by December 11. > >Address inquiries and applications to: > >Search Committee >Dept. of Linguistics >UC Santa Barbara >Santa Barbara, CA 93106 >e-mail: Lingsearch at humanitas.ucsb.edu >Fax (805) 893-7769 > >UCSB is an equal opportunity/affirmative action employer. Women and >minorities are encouraged to apply. THIS EMAIL IS NO LONGER WITH THE PEOPLE YOU THINK IT IS... sorry.. From spikeg at OWLNET.RICE.EDU Thu Oct 8 18:30:24 1998 From: spikeg at OWLNET.RICE.EDU (Spike Gildea) Date: Thu, 8 Oct 1998 13:30:24 -0500 Subject: Job Announcement (fwd) Message-ID: HEBREW TEACHING POSITION AT THE UNIVERSITY OF OREGON The University of Oregon seeks a full-time Instructor in modern Hebrew, beginning Fall, 1999. Applicants should have native or near-native competence, MA or better in Hebrew or field related to language learning and teaching, experience and skill in college-level language teaching. Applications should include cv, at least 3 letters of reference, evidence of excellence in teaching including student evaluations, syllabi, related teaching materials, and description of instructional methods, goals, and background for teaching Hebrew. 
Duties will include first- and second-year modern Hebrew, and other courses to be determined on the basis of the appointee's background and the program's needs. Send applications to Professor Richard L. Stein Hebrew Language Search Committee English Department University of Oregon Eugene, OR 97403-1294 Phone: (541) 346-3971 FAX: (541) 346-2220. e-mail: rstein at oregon.uoregon.edu Please refer in your inquiry to HEBREW SEARCH. Review of applications will begin January 1, 1999. The University of Oregon is an equal-opportunity, affirmative action institution committed to cultural diversity and compliance with the Americans with Disabilities Act. From chafe at HUMANITAS.UCSB.EDU Sat Oct 10 16:35:07 1998 From: chafe at HUMANITAS.UCSB.EDU (Wallace Chafe) Date: Sat, 10 Oct 1998 09:35:07 -0700 Subject: Storage parsimony vs. computing parsimony In-Reply-To: Message-ID: Dear Bill, I don't have any very complete answer to your question, but, for one thing, I think it is well established that children learn huge amounts of vocabulary over a relatively short period of time. Even Pinker mentioned this in The Language Instinct, and I think he provided a reference or two. Furthermore, in working with a couple of polysynthetic languages over many years, it has become quite clear to me that people learn huge numbers of those long words by rote, often relating them to the particular situations where they first heard them. To a large extent they do not CONSTRUCT them according to some system a linguist might suppose they use. They ARE able to come up with neologisms from time to time, but more by analogy, and by applying some simplified patterns quite different from what linguists come up with. This is a question I'm much interested in too, and I'll also be happy to hear of research in this direction. 
Wally Chafe From tomas at EVA.MPG.DE Sun Oct 11 15:54:44 1998 From: tomas at EVA.MPG.DE (tomas) Date: Sun, 11 Oct 1998 10:54:44 -0500 Subject: Croft's Question Message-ID: I posed Bill Croft's question to my colleague Larry Barsalou and below is the very informative answer I got. Mike Tomasello =============== Mike (and Bill), This issue has been at the heart of the exemplar-prototype debate in the categorization literature. Whereas prototypes require a lot of computation during learning (to abstract prototypes), they require little computation during transfer (matching to a single prototype for a category). The result is parsimonious storage (a single prototype). In contrast, exemplar models have very simple computation during learning (the simple recording of an exemplar), but this results in complex storage (many exemplars for a category) and complex transfer computations (matching the entity to be categorized to a subset or all exemplars). Surprisingly, perhaps, the evidence overwhelmingly supports exemplar models. This suggests that the human cognitive system is not very good at abstraction, so it opts for simple learning computations (much work on concept learning further indicates how bad people are at extracting rules). This further suggests that human storage and retrieval are powerful, given the human cognitive system seems capable of storing much detailed information and retrieving it during categorization (as well as matching it to the item to be categorized). I'm embarrassed to say that the paper that probably does the best job of discussing these tradeoffs is a paper of my own: Barsalou, L.W., & Hale, C.R. (1993). Components of conceptual representation: From feature lists to recursive frames. In I. Van Mechelen, J. Hampton, R. Michalski, & P. Theuns (Eds.), Categories and concepts: Theoretical views and inductive data analysis (pp. 97-144). San Diego, CA: Academic Press. If you or Bill would like a copy, I'd be glad to send it to you. 
It goes into considerable detail about exemplar, prototype, and connectionist models on these issues, specifically discussing the costs of storage vs. computation for each type of model. A related debate exists in problem solving. Originally, everyone believed that the human cognitive system pieces together rules or productions to produce solutions to problems (inexpensive storage of a few widely applicable rules, and expensive computations of chaining them together while searching a search space). Now, few people believe that this characterizes the bulk of human problem solving. Instead, people appear to store cases (i.e., exemplars) and do case-based reasoning (much like exemplar-based categorization). The people who have done the most work on this are Brian Ross (Psych), Keith Holyoak (Psych), Janet Kolodner (AI), Kris Hammond (AI). I'd be glad to send references if you like, but their papers are widely available. Finally, this tradeoff between computation and storage is manifest in many current theories of skill. Essentially, novices are viewed as having stored few cases, and so have to compute, whereas experts are viewed as having stored many cases, and so don't have to compute (they just retrieve). This goes back to Chase and Simon's work on chunking and chess, and it can be found in the modern theories of John Anderson (ACT*), Gordon Logan (exemplar-based skill model), and Allen Newell (SOAR). Each includes two ways of producing a behavior--computation vs. retrieval--and assumes that novices mostly compute but increasingly retrieve as they become expert. Again, I'd be happy to send references if you have trouble locating them. As you can see, the storage/computation distinction has been central in psychology for a long time, and it consistently tends to suggest that the human cognitive system capitalizes tremendously on complex storage. I suspect that the distinction is relevant elsewhere as well. I hope that this is helpful. 
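[Editorial note: the learning/storage/transfer tradeoff Barsalou describes can be made concrete with a small sketch. The toy data, feature count, and distance measure below are invented for illustration and do not come from the original posting.]

```python
import numpy as np

rng = np.random.default_rng(0)

# Invented toy category: 20 noisy exemplars, each with 4 features.
exemplars = rng.normal(loc=1.0, scale=0.5, size=(20, 4))

# Prototype model: expensive abstraction at learning time (averaging),
# parsimonious storage (one vector), cheap transfer (one comparison).
prototype = exemplars.mean(axis=0)

def prototype_score(item):
    return -float(np.linalg.norm(item - prototype))

# Exemplar model: trivial learning (just record each exemplar), heavy
# storage (all 20 vectors), heavy transfer (compare against every one).
def exemplar_score(item):
    return -float(np.linalg.norm(item - exemplars, axis=1).min())
```

Categorization would then pick the category with the highest score; the exemplar model's sensitivity to individual stored items is what lets it capture the detailed-memory effects described above.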
If you have any questions or want to discuss anything further, please let me know. L P.S. Mike, I don't have Bill's email address, so if you want him to see this message, please forward it to him. Thanks. From kemmer at RUF.RICE.EDU Tue Oct 13 04:14:41 1998 From: kemmer at RUF.RICE.EDU (Suzanne E Kemmer) Date: Mon, 12 Oct 1998 23:14:41 -0500 Subject: storage and computation Message-ID: Wally's mention of the acquisition perspective, and the fact that I just heard a lot about acquisition at the CSDL conference, made me think about the relevance to Bill's question of Mike Tomasello's 'verb island' and related work. To elaborate on Wally's point: Kids not only learn huge numbers of words very quickly, but they're very slow to learn general constructional patterns. The clause-level syntactic patterns they use are for a long time tied very tightly to individual verbs; constructions don't just come into their grammar across the board (Tomasello's result). If humans were made to prefer computational processing of constructions over storage, you'd think that general constructional patterns would be easier and earlier acquired, and storage of large numbers of lexical and syntactic units would come later. Of course, child language is just the extreme case of the connection between lexical items and syntactic constructions; constructions stay lexically tied to some degree all the speaker's life. If syntactic patterns were really the result of on-line composition of syntactic categories via rules, independently of lexical items, then we would expect no particular connection of specific syntactic constructions to stored lexical units. Nor would we expect the huge number of collocations you get in language. (Why should lexical bits in one part of the 'tree' affect other bits? Which they do, well beyond subcategorization.) 
Given that we do have all these fixed and semi-fixed phrases, people should presumably prefer to just compute these as needed, but instead these elements show evidence of being stored (e.g. phonological reduction). If composition were easier than storage, then the most FREQUENT stuff should be processed via composition rather than just stored whole. I assume that psychological results show us the opposite. (Too bad the Utrecht workshop on Storage and Computation is not likely to hear much from psychologists, or cognitive linguists...) --Suzanne Kemmer From john at RESEARCH.HAIFA.AC.IL Tue Oct 13 13:31:49 1998 From: john at RESEARCH.HAIFA.AC.IL (John Myhill) Date: Tue, 13 Oct 1998 15:31:49 +0200 Subject: storage and computation Message-ID: I've been quite interested to read here about specific findings regarding how people learn language. As I have been observing the linguistic development of my now 6-year-old daughter, I have been coming to the conclusion that anyone who thinks that children develop their language by relying more on rule generalization than on retrieval either doesn't have children or doesn't pay any attention to what they say. I have repeatedly attempted to use Hebrew morphology playfully to make jokes with Shayna and I have repeatedly been disappointed that she just doesn't get it, even though she can use the morphological forms she's already encountered perfectly. I have struggled mightily to explain to her the subregularities in the Hebrew morphological system, but this has no effect whatsoever. She transparently has no clue of the morphological structure of the agglutinative Japanese forms she uses regularly. Unfortunately for linguistics, the formalists whose worldview would most benefit from the findings reported in some of the recent postings here aren't interested in experiments which take them out of their armchairs. 
John Myhill From druuskan at CC.HELSINKI.FI Tue Oct 13 15:07:06 1998 From: druuskan at CC.HELSINKI.FI (Deborah D K Ruuskanen) Date: Tue, 13 Oct 1998 18:07:06 +0300 Subject: Storage and computation Message-ID: While wishing to second all that has been said regarding cross-disciplinary work, I think it should be pointed out that the field of translation may have been overlooked as a means of checking theories of retrieval. Machine translation has tried to use ever larger memories to store and retrieve translations once made and match them against translations to be made: this method simply does not work if the translation to be done is not an *exact* match. So much for retrieval. However, if it *is* an exact match, then retrieval saves mucho time for translators, who can then concentrate on *new* translation. I think children retrieve things in a similar way: if I used this before and it worked in this situation, I'll try it again. If it doesn't work, the child has to come up with something *new* and try to figure out *why* it didn't work. My theory is that translation works by elimination (through the application of context) rather than scanning of everything in the memory. This may be another way of saying that we first find *where* we stored something, and then *retrieve* it. And how many NLP and MT (machine translation) people are reading this list? DKR -- Deborah D. Kela Ruuskanen \ You cannot teach a Man anything, Leankuja 1, FIN-01420 Vantaa \ you can only help him find it druuskan at cc.helsinki.fi \ within himself. Galileo From TWRIGHT at ACCDVM.ACCD.EDU Tue Oct 13 16:34:10 1998 From: TWRIGHT at ACCDVM.ACCD.EDU (Tony A. Wright) Date: Tue, 13 Oct 1998 11:34:10 CDT Subject: storage and computation Message-ID: John Myhill wrote: > I've been quite interested to read here about specific findings regarding how > people learn language. 
As I have been observing the linguistic development > of my now 6-year-old daughter, I have been coming to the conclusion that > anyone who thinks that children develop their language by relying more on > rule generalization than on retrieval either doesn't have children or > doesn't pay any attention to what they say. I have spent plenty of time with children who convince me that they rely a great deal on both rule generalization AND retrieval. Children notoriously regularize high-occurrence irregular verbs, e.g., "goed" for "went". Why would they do this if they were primarily retrieving? Why do children exhibit developmental patterns in syntax that are nothing like adult speech? Are their incipient retrieval capacities too limited at first, increasing with age? --Tony Wright From Joe.Fullmer at 1790.COM Tue Oct 13 18:24:58 1998 From: Joe.Fullmer at 1790.COM (Fullmer, Joseph) Date: Tue, 13 Oct 1998 12:24:58 -0600 Subject: storage and computation In-Reply-To: <19981013.113410.TWRIGHT@ACCD.EDU> Message-ID: Along the lines of "goed" for "went", the other day my little niece was playing with a top, and her little brother wanted a turn. So her mother (my sister) told her she had three more tries. After she 'did it' once, I asked her how many turns she had left. She replied, "This is my two-th turn and after that I have one more." She used rule generalization to generate "two-th" as the ordinal for "two". I wasn't sure how to elicit whether she would say "one-th" for "first" or if she would 'retrieve' "first". After her second turn, I asked her what turn she was on trying to elicit whether she would get "three-th" for "third", but she answered "it's my last turn". Because the higher ordinal forms are more iconic (similarity in form), and the iconic relationship repeats several times, a rule is easily generated. On the other hand, the relationship of "one" to "first" and "two" to "second" is not apparent from the forms, and must be 'stored and retrieved' to get it right. 
Looks like rule-generalization is winning out here. (although in order to even generalize a rule in the first place, retrieval must be used.) Also of interest is that most languages follow this pattern of one and two having constructive relationships with their ordinals, while the higher ones have a conformative relationship. For a great explanation of this, see John S. Robertson: The History of Tense / Aspect / Mood / Voice in the Mayan Verbal Complex. He analogously relates C.S. Peirce's Icon, Index, and Symbol to Conformative, Reciprocal, and Constructive. Less marked categories of 2^m grammatical paradigms tend to have more constructive relationships (must use storage and retrieval), whereas more marked categories tend to have conformative relationships (able to use rule-generalizations). So, if we place "go" / "gyrate" in a 2^m paradigm with the present and simple past, we get:

go / went (*constructive relationship)
gyrate / gyrated (*conformative relationship)

Because gyrate / gyrated is much more highly marked (much more narrow meaning and application) we would expect it to have more conformative relationships than the less-marked go / went, and this is just what we see. According to the law of inverse proportionality (more breadth = less depth, less breadth = more depth) less-marked forms will have a much greater external manifestation (e.g. frequency of occurrence, range of reference, and even length of words in many cases). There is also a law of direct proportionality, which bears on the topic, but I think I am growing long-winded, so I will stop. But I would highly recommend the first two chapters (only 45 pages) of Robertson's book for all linguists (no knowledge of Mayan languages required) and the entire book for Mayanists. In any case, I believe the assessment that children rely a great deal on both rule-generalization and storage-and-retrieval is correct. 
With less-marked forms, there will be a need for more storage and retrieval, since relationships among forms will be more constructive, whereas with more-marked forms, rule-generalization will be more heavily relied on, since form relationships will be more conformative. In the case of "goed" instead of "went" and "two-th" instead of "second" it would seem to me that a child is actually in the experimentation stage, such that he/she is trying to determine whether the form is conformative or constructive. So, the child applies the expected conformative rule, and depending on the response (correction or acceptance) the form is reinforced as to which type it is, and how to appropriately deal with it (should I store this form for remembering later, or will the rule do?) > -----Original Message----- > From: FUNKNET -- Discussion of issues in Functional Linguistics > [mailto:FUNKNET at LISTSERV.RICE.EDU]On Behalf Of Tony A. Wright > Sent: Tuesday, October 13, 1998 10:34 AM > To: FUNKNET at LISTSERV.RICE.EDU > Subject: Re: storage and computation > > > John Myhill wrote: > > > I've been quite interested to read here about specific findings > regarding how > > people learn language. As I have been observing the linguistic > development > > of my now 6-year-old daughter, I have been coming to the conclusion that > > anyone who thinks that children develop their language by > relying more on > > rule generalization than on retrival either doesn't have children or > > doesn't pay any attention to what they say. > > I have spent plenty of time with children who convince me that they rely > a great deal on both rule generalization AND retrieval. > > Children notoriously regularize high-occurrence irregular verbs, > i.e., "goed" > for "went". Why would they do this if they were primarily retrieving? > Why do children exhibit developmental patterns in syntax that are > nothing like > adult speech? Are their incipient retrieval capacities too limited and > increase with age? 
> > --Tony Wright > From ward at PG-13.LING.NWU.EDU Tue Oct 13 20:44:32 1998 From: ward at PG-13.LING.NWU.EDU (Gregory Ward) Date: Tue, 13 Oct 1998 15:44:32 CDT Subject: storage and computation In-Reply-To: <000001bdf6d6$c8cf8780$5edcbb80@chssaddct.rn.byu.edu>; from "Fullmer, Joseph" at Oct 13, 98 12:24 pm Message-ID: > In any case, I believe the assesment that children rely a great deal on both > rule-generalization and storage-and-retrieval is correct. With less-marked > forms, there will be a need for more storage and retrieval, since > relationships among forms will be more constructive, whereas with > more-marked forms, rule-generalization will be more heavily relied on, since > form relationships will be more conformative. In the case of "goed" instead > of "went" and "two-th" instead of "second" it would seem to me that a child > is actually in the experimentation stage, such that he/she is trying to > determine whether the form is conformative or constructive. So, the child > applies the expected conformative rule, and depending on the response > (correction or acceptance) the form is reinforced as to which type it is, > and how to appropriately deal with it (should I store this form for > remembering later, or will the rule do?) Speaking of ordinal/cardinal forms, Richard Sproat and I noticed (Lg 67) that in cases like "This is the second time in as many weeks", it is the semantically transparent (and well-instantiated) relationship between ordinals and cardinals (irrespective of their surface realizations) that allows the apparent mismatch. Gregory -- Gregory Ward Department of Linguistics Northwestern University 2016 Sheridan Road Evanston IL 60208-4090 e-mail: gw at nwu.edu tel: 847-491-8055 fax: 847-491-3770 www: http://www.ling.nwu.edu/~ward From jkaplan at MAIL.SDSU.EDU Tue Oct 13 15:39:21 1998 From: jkaplan at MAIL.SDSU.EDU (Jeffrey P. Kaplan) Date: Tue, 13 Oct 1998 16:39:21 +0100 Subject: storage vs. 
computation Message-ID: A few years ago my then-3- or 4-year old spontaneously remarked that a three-tined fork was not a fork but actually a "threek." Under elicitation, he proceeded to predict a [tuk] and a [w^nk] ("onek") (with velar nasal). Jeff Kaplan Jeffrey P. Kaplan Linguistics San Diego State University San Diego, CA 92182-7727 619-594-5879 fax 619-594-4877 http://www-rohan.sdsu.edu/faculty/jeff315/index.html From jrubba at POLYMAIL.CPUNIX.CALPOLY.EDU Tue Oct 13 23:33:04 1998 From: jrubba at POLYMAIL.CPUNIX.CALPOLY.EDU (Johanna Rubba) Date: Tue, 13 Oct 1998 16:33:04 -0700 Subject: sound symbolism and features Message-ID: Hi, everybody. I've been following the computation/storage discussion with interest. I have a question in a different area. I'm teaching a grad intro ling course for people interested mainly in literature, and I do a lot of ling. analysis of lit. in this class (which is loads of fun, by the way!) We're just finishing our unit on phonology and I've been cruising around the web for stuff on sound symbolism. Maybe some of you know of some sources on a very specific area I am interested in: the correlation of particular _distinctive features_ with properties in other sensory domains (e.g. of +continuant with 'smooth' or 'velarized' with 'dark'). I know that _segments_ have received lots of attention, and I've seen some initial signs of work with features on commercial websites (creators of corporate names and brand names). Does anyone know of work that seeks empirical confirmation of cross-modal associations for particular _features_ rather than segments (by, for example, manipulating feature makeup of sounds/words and surveying scientifically-sound subject pools for consistency of association)? Work being done across cultures would, of course, be really interesting. Thanks for any leads you can offer! 
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Johanna Rubba Assistant Professor, Linguistics ~ English Department, California Polytechnic State University ~ San Luis Obispo, CA 93407 ~ Tel. (805)-756-2184 Fax: (805)-756-6374 ~ E-mail: jrubba at polymail.calpoly.edu ~ Home page: http://www.calpoly.edu/~jrubba ~ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ From klebaum at UCLA.EDU Wed Oct 14 01:20:59 1998 From: klebaum at UCLA.EDU (PAMELA PRICE KLEBAUM) Date: Tue, 13 Oct 1998 18:20:59 -0700 Subject: storage and computation In-Reply-To: Message-ID: I think you are wrong. The investigation of child language acquisition is looking at empirical data as a means to further the inquiry into what the nature of the formal grammar is, or how it is acquired. Pamela Price Klebaum On Tue, 13 Oct 1998, John Myhill wrote: > I've been quite interested to read here about specific findings regarding > how people learn language. As I have been observing the linguistic development > of my now 6-year-old daughter, I have been coming to the conclusion that > anyone who thinks that children develop their language by relying more on > rule generalization than on retrival either doesn't have children or > doesn't pay any attention to what they say. I have repeatedly attempted to > use Hebrew morphology playfully to make jokes with Shayna and I have > repeatedly been disappointed that she just doesn't get it, even though she > can use the morphological forms she's already encountered perfectly. I have > struggled mightily to explain to her the subregularities in the Hebrew > morphological system, but this has no effect whatsoever. She transparently > has no clue of the morphological structure of the agglutinative Japanese > forms she uses regularly. 
> Unfortunately for linguistics, the formalists whose worldview would most > benefit from the findings reported in some of the recent postings here > aren't interested > in experiments which take them out of their armchairs. > John Myhill > From dick at LINGUISTICS.UCL.AC.UK Wed Oct 14 08:42:55 1998 From: dick at LINGUISTICS.UCL.AC.UK (Dick Hudson) Date: Wed, 14 Oct 1998 09:42:55 +0100 Subject: storage vs computation Message-ID: The debate so far assumes that all children follow the same strategic balance between storage and computation, but it's at least possible that different children have different strategies. E.g. in initial literacy, some thrive on phonics (computation) while others do better on whole-word learning (storage). I suspect this doesn't resolve the debate - I have the impression from my amateurish reading that *all* children hit the overgeneralisation stage (where "goed" replaces "went"), and maybe hit it at the same age (relative to other things such as vocab size). I'd be interested to hear from someone who knows about individual variation. ============================================================================== Richard (=Dick) Hudson Department of Phonetics and Linguistics, University College London, Gower Street, London WC1E 6BT work phone: +171 419 3152; work fax: +171 383 4108 email: dick at ling.ucl.ac.uk web-sites: home page = http://www.phon.ucl.ac.uk/home/dick/home.htm unpublished papers available by ftp = ....uk/home/dick/papers.htm From gthomson at GPU.SRV.UALBERTA.CA Wed Oct 14 07:07:35 1998 From: gthomson at GPU.SRV.UALBERTA.CA (Greg Thomson) Date: Wed, 14 Oct 1998 10:07:35 +0300 Subject: storage and computation In-Reply-To: <19981013.113410.TWRIGHT@ACCD.EDU> Message-ID: At 11:34 -0500 10-13-1998, Tony A. Wright wrote: >John Myhill wrote: >Children notoriously regularize high-occurrence irregular verbs, i.e., "goed" >for "went". 
On the other hand, the simple, old-fashioned idea of "regularization", replacing an exception with a form based on the application of a regular rule, doesn't predict attested forms of the sort: _It got brokeded_ (meaning "It got broken"). You probably wouldn't want to say that the child uttering _brokeded_ has added an uninflected root _broked_ to her lexicon to which she is now adding the suffix -ed. (I won't vouch for the exact example, but I've seen things like that--maybe it was "tookeded".) This really looks more like the result of the convergence on a particular form in response to various pressures in the system. This could be called "computation" but may also hint that the dichotomy of storage vs. computation is already a bit off track. Greg Thomson From dick at LINGUISTICS.UCL.AC.UK Wed Oct 14 10:07:09 1998 From: dick at LINGUISTICS.UCL.AC.UK (Dick Hudson) Date: Wed, 14 Oct 1998 11:07:09 +0100 Subject: storage versus computation Message-ID: A message from Liz Bates, for which I'm just acting as postman: I won't be able to send this reply to the funknet, because I'm in Rome for the month and they won't let me "in" from this foreign site, believe it or not! But I have two quick reactions to your note: 1) There is no such thing as an overgeneralization stage, where "goed" replaces "went". This is one point on which Pinker and I agree, because the data for English are crystal clear no matter what horse you are betting on: children typically go from error-free production of a handful of (probably rote?) high frequency irregular forms, to a long long phase of OCCASIONAL overgeneralizations. But "goed" always coexists with "went" in every child who has ever been studied. The highest rate of over-generalization on public record was by Stan Kucjac's son Abe (I'm probably spelling Stan's name wrong, again, by the way), who reached a high of 50% overgeneralization. Most kids are in the 10-17% range. 
By the way, my daughter was rather peculiar: she started producing an interchangeable array of overgeneralizations vs. correct regular past tense forms without ever passing through a rote irregular phase. That is, there was no "went" before "goed". Just an unmarked "go" until such a time as the past tense started to be marked, whereupon the vacillation began. 2) In all of this discussion of storage vs. rules (a.k.a. rote vs. rules) people seem to be unaware of the third possibility: analogy. A single device based on analogy (generalization from partial similarity) can give you both the rote-looking behaviors and the rule-like overgeneralizations that are assumed by two-mechanism theories. This is, of course, the basis of connectionist claims, since networks are essentially analogy-making devices that operate across distributed representations. Whether or not children are neural nets is another question, but it is important to at least be open to the LOGICAL possibility of a third solution that is neither rote nor rules. Pinker has indirectly acknowledged this in the most recent version of his theory: he still insists on two mechanisms, and one of them makes rules without regard for internal similarity or frequency, but the other one is an analogy-making device. That's required in order to account for 'irregularizations' (e.g. children who make "pat" the past tense of "put" and so on, generalizing from an irregular form). In this regard, it is important to note that any of three different sources of similarity are sufficient to support novel generalizations in an analogy-making device: (1) similarity in physical form (e.g. wug --> rug --> rugs --> wugs), (2) similarity of meaning (e.g. wug --> little animal --> little animals --> wugs), or (3) common fate or similarity in contexts of occurrence (e.g. "wug" appears in a discourse slot that seems to be occupied by a class of items that a LINGUIST would call "nouns", so do the nouniest thing with them....). 
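[Editorial note: the analogy-only alternative described here can be caricatured in a few lines. The mini-lexicon and the string-similarity measure below are invented for illustration; a real connectionist model computes similarity over distributed representations, not character overlap.]

```python
import difflib

# Hypothetical mini-lexicon of stored (stem, past) pairs.
stored = {"walk": "walked", "glow": "glowed", "play": "played",
          "sing": "sang", "ring": "rang"}

def past_by_analogy(stem):
    """Generalize from the most form-similar stored stem -- no rules."""
    nearest = max(stored,
                  key=lambda s: difflib.SequenceMatcher(None, s, stem).ratio())
    if stored[nearest] == nearest + "ed":
        # Nearest neighbor is regular: copy its -ed pattern.
        return stem + "ed"
    # Nearest neighbor is irregular: crudely copy its i -> a vowel
    # change; stems without "i" just fall back to the suffix.
    return stem.replace("i", "a") if "i" in stem else stem + "ed"
```

A single similarity-driven mechanism thus yields both overgeneralizations ("go" patterns with "glow", giving "goed") and irregularizations ("bring" patterns with "ring"/"rang", giving the attested child form "brang").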
There are existing simulations showing that any of these three sources of similarity can give rise to novel overgeneralizations in a neural network. If you think this is helpful, feel free to pass it on to Funknet, but I'm happy to stick with a private interchange too. -liz ============================================================================== Richard (=Dick) Hudson Department of Phonetics and Linguistics, University College London, Gower Street, London WC1E 6BT work phone: +171 419 3152; work fax: +171 383 4108 email: dick at ling.ucl.ac.uk web-sites: home page = http://www.phon.ucl.ac.uk/home/dick/home.htm unpublished papers available by ftp = ....uk/home/dick/papers.htm From lmenn at CLIPR.COLORADO.EDU Wed Oct 14 15:27:12 1998 From: lmenn at CLIPR.COLORADO.EDU (Lise Menn) Date: Wed, 14 Oct 1998 09:27:12 -0600 Subject: storage/computation Message-ID: Menn & MacWhinney's 1984 article on the 'repeated morph constraint' (Language 60:3, 519-541) argues for both storage and on-line computation of bound grammatical morphemes; the article 'Structure and use in the acquisition of word formation' by Eve Clark and Ruth Berman, in the same issue, is relevant to interpreting Myhill's observations on Hebrew. Lise Menn Lise Menn Professor and Chair Linguistics Department - Box 295 University of Colorado Boulder CO 80309-0295 303-492-8042; fax 303-492-4416 BEWARE PROCRUSTES BEARING OCCAM'S RAZOR From jtang at COGSCI.BERKELEY.EDU Wed Oct 14 19:47:06 1998 From: jtang at COGSCI.BERKELEY.EDU (Joyce Tang Boyland) Date: Wed, 14 Oct 1998 12:47:06 -0700 Subject: lg as lists vs lg as skill Message-ID: The current discussion, despite the theoretical leanings of the participants, seems to retain the presupposition that language consists of lists of words, lists of constructions, lists of sentences. 
This is a useful presupposition for the purpose of discussing language as a formal system, which has its place; but I would like to put up for consideration the idea that language be thought of not as a set of lists, but as a skill. Empirical language acquisition researchers may not be thrilled by the simple theory from skill acquisition that all skills start out as computed and then are stored, but ACT-R (the current version of John Anderson's theory of skill acquisition) is set up to handle more complex transitions between stored and computed representations. For example, one can learn a rule from stored examples, but one also then turns instantiations of rules into stored examples. A form like `brokeded' would be straightforwardly explained in ACT-R, as a case of a rule being applied to a form that got stored after being used previously. A lesson that ACT-R teaches us, I think, is that, well, sometimes you need to compute a form and sometimes you need to retrieve it from storage, but where these forms come from is not what we really care about; what we are really trying to explain is the skill (language using) that these forms serve. And as you apply the skill you just happen to do many things by rote and some things by computation. Something this view buys us is that language competence (as in accepting vs. rejecting a string) isn't the main thing, with performance being the by-product; rather, the performance is the main thing and the competence (the lists of accepted vs. rejected strings) is the by-product. I'm just beginning a project (with Eric Scott and perhaps John Anderson) to model the creation of collocations as a product of skill learning in ACT-R, to complement my recent work on long-term syntactic priming (evidence that stored forms are used in production). 
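[Editorial note: at its crudest, the compute-then-store transition described here is a cache. The sketch below is an editorial illustration, not ACT-R itself; real ACT-R weighs retrieval by activation and practice rather than by a simple lookup, and the seed lexicon is invented.]

```python
# Rote-learned irregulars seed the store; anything else is computed by
# the regular rule on first use and stored as an instance thereafter.
stored_forms = {"go": "went"}

def past_tense(stem, trace=None):
    if stem in stored_forms:               # retrieval wins if an instance exists
        if trace is not None:
            trace.append("retrieved " + stem)
        return stored_forms[stem]
    if trace is not None:
        trace.append("computed " + stem)
    form = stem + "ed"                     # computation by rule
    stored_forms[stem] = form              # the computed instance is stored
    return form
```

On this picture novices mostly compute and experts mostly retrieve simply because the store fills up with use, which is the novice/expert tradeoff Barsalou's earlier posting describes.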
Joyce Tang Boyland Joyce.Tang.Boyland at alverno.edu Alverno College Milwaukee, WI 53234-3922 >> >> On the other hand, the simple, old-fashioned idea of "regularization", >> replacing an exception with a form based on the application of a regular >> rule, doesn't predict attested forms of the sort: _It got brokeded_ >> (meaning "It got broken"). You probably wouldn't want to say that the child >> uttering _brokeded_ has added an uninflected root _broked_ to her lexicon >> to which she is now adding the suffix -ed. (I won't vouch for the exact >> example, but I've seen things like that--maybe it was "tookeded".) This >> really looks more like the result of the convergence on a particular form >> in response to various pressures in the system. This could be called >> "computation" but may also hint that the dichotomy of storage vs. >> computation is already a bit off track. >> >> Greg Thomson >> >> Barsalou via Tomasello: >> Finally, this tradeoff between computation and storage is manifest in many >> current theories of skill. Essentially, novices are viewed as having >> stored few cases, and so have to compute, whereas experts are viewed as >> having stored many cases, and so don't have to compute (they just >> retrieve). This goes back to Chase and Simon's work on chunking and chess, >> and it can be found in the modern theories of John Anderson (ACT*), Gordon >> Logan (exemplar-based skill model), and Alan Newell (SOAR). Each includes >> two ways of producing a behavior--computation vs. retrieval--and assumes >> that novices mostly compute but increasingly retrieve as they become >> expert. >> From macw at CMU.EDU Wed Oct 14 23:34:12 1998 From: macw at CMU.EDU (Brian MacWhinney) Date: Wed, 14 Oct 1998 17:34:12 -0600 Subject: rote vs rules In-Reply-To: <199810141947.MAA00233@cogsci.Berkeley.EDU> Message-ID: A few further comments on the current discussion of automaticity: 1. Suzanne Kemmer's question was never answered. 
She asked why rules don't get applied to frequent forms, if they are so computationally efficient. The answer, I would suggest, is that computational efficiency is defined over the whole system, not just the individual form. You don't save in terms of time to produce "jumped". However, you don't have to store all those pesky regular forms and, since the rules are running all the time anyway, you "get jumped for free". Of course, the real problem here is that evidence for a cycle of rules of the SPE type is nonexistent. So language-as-rules people like Pinker decided to give up the battle for generating forms from minor rules and staked their claim on a defense of what I call "kinder gentler rules" such as "add -ed". 2. The analysis that Liz and others have proposed is basically what MacWhinney 1978 and then Menn and MacWhinney 1983 offered as the three-factor account based on rote, analogy, and combination. Connectionism came along in the 1980s and showed how analogy works. Rote is obviously alive and well. Combination has taken a few hits, but is probably not down for the count. It will get resurrected when connectionist models become more neuronally realistic. I don't think that we will ever really need rules. In fact, I doubt that Larry Barsalou thinks we need rules of the SPE/cycle variety. 3. I agree with Joyce that language is a skill. However, the devil is in the details. If we fail to recognize the fundamental difference between word learning and syntactic automatization, I am worried that we could go down some false paths. The routinization of the word is supported by a tightly predictive association between audition and articulation. When we hear a new auditory form, it appears that we use the phonological loop on some level to store it. As we then attempt to match to this form using our own articulations, we convert a resonant auditory form to an entrenched articulatory form. 
Work by Baddeley, Gupta, Cowan and others has taught us a great deal about the details of this process. Yes, you can use ACT-R to model this, but you will be using a restricted subset of ACT-R, and the process of deciding what restricted subset is applicable is the whole of the scientific process of understanding the neuronal mechanics of word learning. Trying to use a model of word learning as the basis for understanding the automatization of syntactic patterns strikes me as quite problematic. The central problem is that predicates have open slots for arguments. Words, as Wally notes, are largely slot-free (of course there are exceptions, such as infixes, etc.). I tend to think of this level of skill automaticity in terms of Michael Jordan faking out Karl Malone in the last points of the final game of the NBA finals. Jordan clearly has a flexible set of plans for dunking the ball into the basket against the opposition of a defender. What is automatic in his actions is the move from one state to the next. The skill is in the transitions. It strikes me that sentence production is like this and that word-level articulation is basically not. Saying that we have stored syntactic frames tends to obscure this contrast. The claim is typically grounded on results from a nice set of studies from Bock and her colleagues. But I would suggest that these studies do not demonstrate syntactic persistence, but rather that lexical persistence produces priming of closely competing syntactic options. Barbara Luka presented a nice paper on syntactic persistence at CSLD-4 and mentioned work by Joyce demonstrating similar effects. However, I don't think this work has yet yielded a clear view of what syntactic persistence really might be. Is it a genre effect? Does it involve a passive tape recorder that influences acceptability, but has no direct effect on production? Is it really lexically driven? Many questions remain. 
I would say that the delineation of the contrast between lexical and syntactic automaticity and productivity should be a top-level research agenda item for functionalists and psycholinguists alike. The great thing about all of this is that the issues are easily open to experimentation and modeling. And, as Joan Bybee, Tom Givon, and others have been showing, they make clear predictions regarding typology and change. --Brian MacWhinney From David_Tuggy at SIL.ORG Wed Oct 14 00:05:00 1998 From: David_Tuggy at SIL.ORG (David_Tuggy at SIL.ORG) Date: Tue, 13 Oct 1998 19:05:00 -0500 Subject: Storage and computation Message-ID: Deborah Ruuskanen wrote: "Machine translation has tried to use ever[y] larger memories to store and retrieve translations once made and match them against translations to be made: this method simply does not work if the translation to be done is not an *exact* match. So much for retrieval. However, if it *is* an exact match, then retrieval saves mucho time ..." So much for machine retrieval, perhaps. But do we need to posit that humans doing retrieval are as literal-minded as computers are? It would seem self-evident that we excel at non-exact matching. Maybe another way to say it is that we apparently prefer to add a little computation to our retrieval system to make it much more flexible and efficient, rather than to invest a great deal more computation starting over from scratch. Once again, if we set up computation and retrieval as either/or alternatives, we're setting ourselves on the wrong track. People do both, and typically at the same time. Certainly things are weighted, as Bill Croft and Wally Chafe and others have been saying, much more heavily towards retrieval than the "generative" metaphor would lead you to expect. A closely related issue: what are we matching, anyway? Probably something vastly different from the patterns of 0's and 1's or higher-level letters that are all the computer knows. 
What is not an exact phonological (much less phonetic) or even lexico-syntactic match may be much more nearly an exact match of the somewhat sloppy semantic stuff we're usually primarily comparing in translation. Even the phonological and lexico-semantic stuff is almost certainly not stored in as rigid a form as a computer would do it. --David Tuggy ______________________________ Reply Separator _________________________________ Subject: Storage and computation Author: druuskan at CC.HELSINKI.FI at internet Date: 10/13/98 10:07 AM From eleanorb at HUMAN.TSUKUBA.AC.JP Thu Oct 15 00:22:24 1998 From: eleanorb at HUMAN.TSUKUBA.AC.JP (Eleanor Olds Batchelder) Date: Thu, 15 Oct 1998 09:22:24 +0900 Subject: Storage vs. computation Message-ID: Is it just a coincidence that this (fascinating) discussion is taking place shortly before the UTRECHT CONGRESS ON STORAGE & COMPUTATION IN LINGUISTICS next week? Certainly the terminology is the same - as opposed to, say, rote vs. rule. Since this topic is the one most central to my work just now, I am very sorry I cannot attend that conference. Would anyone who is going be willing to send back reports to the rest of us? Eleanor From jkyle at EAGLE.CC.UKANS.EDU Thu Oct 15 18:42:41 1998 From: jkyle at EAGLE.CC.UKANS.EDU (John Kyle) Date: Thu, 15 Oct 1998 13:42:41 -0500 Subject: sound symbolism and features In-Reply-To: Message-ID: An interesting example of sound symbolism occurs in many of the Siouan languages. Boas and Deloria's Dakota Grammar (1941) has several pages of examples (pp. 16-18) where the 'degrees' of a verb are differentiated with the use of different fricatives. They note that this is not an active process and the meanings are not always predictable. In my notation below: [s^] is a vless palatal fric, [z^] is the voiced pal fric, [x] is vless velar fric, [g^] is voiced velar fric; also, nasal vowels are shown with an [n] following ([in] = nasal [i]). I've only listed a few here; they list many more. 
sapa      black
s^apa     soiled
xapa      grey
winz^a    bent w/out breaking (i.e. twig)
wing^a    bent at a sharp angle
ptuza     it is bent forward
ptuz^a    small pieces cracked off w/out falling off
ptug^a    " " " " but fall off
woptux'a  crumbs
izuza     whetstone
ig^ug^a   rough sandstone
nuza      soft and movable (a swollen gland)
nuz^a     same but harder (cartilage)
nug^a     hard like a callus on bone, gnarl on a tree
I hope these help. At least they're interesting. Bob Rankin also informs me that many of the Muskogee languages do this also. John Kyle On Tue, 13 Oct 1998, Johanna Rubba wrote: > Hi, everybody. I've been following the computation/storage discussion with > interest. I have a question in a different area. > > I'm teaching a grad intro ling course for people interested mainly in > literature, and I do a lot of ling. analysis of lit. in this class (which > is loads of fun, by the way!) We're just finishing our unit on phonology > and I've been cruising around the web for stuff on sound symbolism. Maybe > some of you know of some sources on a very specific area I am interested > in: the correlation of particular _distinctive features_ with properties > in other sensory domains (e.g. of +continuant with 'smooth' or 'velarized' > with 'dark'). I know that _segments_ have received lots of attention, and > I've seen some initial signs of work with features on commercial websites > (creators of corporate names and brand names). Does anyone know of work > that seeks empirical confirmation of cross-modal associations for > particular _features_ rather than segments (by, for example, manipulating > feature makeup of sounds/words and surveying scientifically-sound subject > pools for consistency of association)? Work being done across cultures > would, of course, be really interesting. > > Thanks for any leads you can offer! 
> > ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ > Johanna Rubba Assistant Professor, Linguistics ~ > English Department, California Polytechnic State University ~ > San Luis Obispo, CA 93407 ~ > Tel. (805)-756-2184 Fax: (805)-756-6374 ~ > E-mail: jrubba at polymail.calpoly.edu ~ > Home page: http://www.calpoly.edu/~jrubba ~ > ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ > From dquesada at CHASS.UTORONTO.CA Fri Oct 16 03:28:11 1998 From: dquesada at CHASS.UTORONTO.CA (Diego Quesada) Date: Thu, 15 Oct 1998 23:28:11 -0400 Subject: Job: University of Toronto (fwd) Message-ID: ---------- Forwarded message ---------- Date: Thu, 15 Oct 1998 22:46:58 -0400 (EDT) From: Hispanic Linguistics UofT To: linguist at listserv.linguistlist.org Subject: Job: University of Toronto PLEASE POST Tenure-track position in Hispanic linguistics and language at Assistant Professor level, effective 1 July 1999. Required: Ph.D. in Spanish or Linguistics; commitment to research and scholarly publication in theoretical linguistics, with a strong secondary interest in applied linguistics and second-language pedagogy; willingness to develop and supervise introductory courses in the Spanish language sequence; native or near-native proficiency in Spanish. Experience in multimedia computer technology is an asset. Applications must be received by December 4, 1998. Please send a letter of application and curriculum vitae and arrange to have three letters of reference sent to Professor Stephen Rupp, Chair, Department of Spanish and Portuguese, University of Toronto, Toronto, Ontario, Canada M5S 1A1. In accordance with Canadian immigration requirements, this advertisement is directed to Canadian citizens and permanent residents of Canada. In accordance with its employment equity policy, the University of Toronto encourages applications from qualified women or men, members of visible minorities, aboriginal peoples and persons with disabilities. 
Visit the Hispling Toronto site at: http://www.chass.utoronto.ca/spanish_portuguese/hispling.html Hispanic Linguistics University of Toronto From kfeld at CITRUS.UCR.EDU Fri Oct 16 05:23:52 1998 From: kfeld at CITRUS.UCR.EDU (David B. Kronenfeld) Date: Fri, 16 Oct 1998 00:23:52 -0500 Subject: Larry Barsalou's note via Mike Tomasello Message-ID: In reference to Larry Barsalou's very interesting information regarding exemplars vs. prototypes, it is worth noting that the exemplar/prototype distinction has not been made within much semantic work in anthropology, where the contrast has been "prototype" ("kernel" or "core")-based definitions vs. distinctive feature definitions of whole categories. Discussions of "prototypes" in anthropology may really, to a greater or lesser degree, pertain to Barsalou's exemplars; it will be necessary to consider the ways that the "prototypes" in question are actually defined and used in any given case to determine how they relate to the exemplar/prototype distinction. I offer this observation because there seems some possibility of useful insights coming from both directions, and it would be a shame if such exchange were short-circuited by a labeling glitch. From haspelmath at EVA.MPG.DE Fri Oct 16 13:23:23 1998 From: haspelmath at EVA.MPG.DE (Martin Haspelmath) Date: Fri, 16 Oct 1998 13:23:23 +0000 Subject: storage/retrieval/computation Message-ID: I'd like to get back to Bill Croft's original question regarding parsimony of storage/retrieval/computation. I have two comments: First, it should be noted that the claim that "storage is easier than computation" presupposes some implicit comparison. If we say that "kids learn huge numbers of words very quickly, but they're very slow to learn general constructional patterns" (Suzanne Kemmer), how do we judge that the first is "quick", the second "slow" -- by what standards? It seems to me that the implicit comparison is to a large extent the conventional serial computer. 
Compared to computers, humans are bad at processing and good at storage. Even ten years ago (let alone today), computers were able to carry out a much larger number of successive operations per second than humans. Since the "cognitivist revolution" in the 1950s, the serial computer has served as an important point of reference for understanding human cognition. Second, Bill Croft opposed storage/retrieval to computation, but it isn't clear to me that this is the right contrast. Maybe it's mainly storage that is easy, whereas processing in general, both retrieval and computation, is more difficult. That would allow us, for instance, to preserve Haiman's argument (in "Natural syntax", CUP 1985) that the massive polysemy we find in language is economically motivated. After all, if we're so good at storage, why should we need to economize in the lexicon? Humans can easily learn 5-10 languages (if given the opportunity), so why not have a lexicon that is 5-10 times as large? Maybe the motivation is processing parsimony after all. In a huge lexicon, retrieval can be quite difficult, even though there is enough storage space. All this is pure speculation, of course, and it would be interesting to see whether the psychologists have anything to say about it. --Martin Haspelmath -- Dr. Martin Haspelmath (haspelmath at eva.mpg.de) Max-Planck-Institut fuer evolutionaere Anthropologie, Inselstr. 22 D-04103 Leipzig (Tel. (MPI) +49-341-9952 307, (priv.) +49-341-980 1616) From macw at CMU.EDU Fri Oct 16 17:39:14 1998 From: macw at CMU.EDU (Brian MacWhinney) Date: Fri, 16 Oct 1998 11:39:14 -0600 Subject: exemplars and prototypes Message-ID: Regarding David Kronenfeld's note on exemplars and prototypes and the possibility of terminological slippage, let me say that the distinction is fairly clear in the psychological literature. 
An exemplar is a specific real-world instance, i.e. a particular dog or a particular candle. A prototype is a merger of the best or common features of the many exemplars. David is referring to the contrast in cognitive anthropology between featural theory and prototype theory. This contrast also exists in psychology and many papers have been written arguing for one or the other, but no one really challenges the potential relevance of exemplars during the initial phases of induction. The issue is whether the role of exemplars in the final system is secondary and peripheral or major and central. In any case, I don't sense any terminological slippage. Instead, I think there is a basic disagreement in both fields regarding (1) the relative importance of exemplars and (2) the decision to opt for feature theory vs. prototype theory. The range of my reading in cognitive anthropology is fairly restricted, so I am happy to stand corrected on this. --Brian MacWhinney From noonan at CSD.UWM.EDU Fri Oct 16 16:01:26 1998 From: noonan at CSD.UWM.EDU (Michael Noonan) Date: Fri, 16 Oct 1998 11:01:26 -0500 Subject: Job at University of Wisconsin-Milwaukee In-Reply-To: Message-ID: Assistant Professor in TESOL pedagogy and contrastive rhetoric. Tenure track, to begin Fall 1999. We seek an outstanding scholar/teacher who will actively contribute to our graduate programs in Linguistics/TESOL and Composition/Rhetoric, as well as a professional writing program and an undergraduate English composition program which responds to the needs of an ethnically and linguistically diverse population. Experience in developing and/or administering a university-level TESOL program preferred. We plan to interview at MLA. Send letter of application and CV only to Michael Noonan, Chair, Dept. of English, University of Wisconsin-Milwaukee, Milwaukee, WI 53201, postmarked no later than January 10. Questions about the position, the department, and UWM may be addressed to Michael Noonan . 
AA/EO From lakoff at COGSCI.BERKELEY.EDU Fri Oct 16 16:50:09 1998 From: lakoff at COGSCI.BERKELEY.EDU (George Lakoff) Date: Fri, 16 Oct 1998 09:50:09 -0700 Subject: Exemplars and prototypes Message-ID: Re: David Kronenfeld's and Brian MacWhinney's remarks on exemplars and prototypes: The place to find out about exemplars versus prototypes is in my Women, Fire and Dangerous Things, Chapters 2 through 7. There are many different types of prototypes, each with different inference patterns (e.g., typical cases, ideal cases, social stereotypes, centers of radial categories, etc.) and various types of exemplars, again with different inference patterns (e.g., paragons, salient exemplars, antiparagons, etc.). Psychologists have been fairly sloppy in not distinguishing among the different logical types -- largely, I think, because they tend not to study inferences. I find this fairly bizarre, because inferences are what reasoning is about. George Lakoff From kfeld at CITRUS.UCR.EDU Fri Oct 16 16:48:42 1998 From: kfeld at CITRUS.UCR.EDU (David B. Kronenfeld) Date: Fri, 16 Oct 1998 11:48:42 -0500 Subject: exemplars and prototypes Message-ID: In case it's useful, let me briefly respond to Brian MacWhinney's helpful comments. At 11:39 AM 10/16/98 -0600, Brian MacWhinney wrote: > Regarding David Kronenfeld's note on exemplars and prototypes and the >possibility of terminological slippage, let me say that the distinction is >fairly clear in the psychological literature. An exemplar is a specific >real-world instance, i.e. a particular dog or a particular candle. A >prototype is a merger of the best or common features of the many exemplars. This is helpful to me, at least. >David is referring to the contrast in cognitive anthropology between >featural theory and prototype theory. Within cognitive anthropology is where the prototype/exemplar distinction seems not much to be made. 
Some versions of prototype theory here, coming off of Rosch's work in psychology, do make use of "prototypes" in Brian's sense; but others (including my own "extentionist semantics" approach) speak of "prototypes" in a sense that is much closer to Brian's sense of "exemplars". I have sometimes used "prototypic referent" (as opposed to a "typical" one) as a way of describing a referent or exemplar that is key (as opposed to some kind of average or most frequent referent)--where I do discuss the basis behind this key role for the given referent. "Core" and "kernel" are also used to characterize such key referents. > This contrast also exists in >psychology and many papers have been written arguing for one or the other, >but no one really challenges the potential relevance of exemplars during >the initial phases of induction. The issue is whether the role of >exemplars in the final system is secondary and peripheral or major and >central. Yes. > In any case, I don't sense any terminological slippage. Instead, I think >there is a basic disagreement in both fields regarding (1) the relative >importance of exemplars and (2) the decision to opt for feature theory vs. >prototype theory. Yes. All I meant was that the major focus within anthropological discussions has been on the opposition between feature models and focal referent models; the prototype vs. exemplar distinction within focal models has not much been raised. > The range of my reading in cognitive anthropology is fairly restricted, >so I am happy to stand corrected on this. > >--Brian MacWhinney > Thanks for the information. The exemplar/prototype distinction seems quite helpful. David Kronenfeld From macw at CMU.EDU Fri Oct 16 19:32:04 1998 From: macw at CMU.EDU (Brian MacWhinney) Date: Fri, 16 Oct 1998 13:32:04 -0600 Subject: exemplars and prototypes Message-ID: I agree with David and George's comments. 
I particularly agree with George's observation that psychologists have failed to introduce the needed additional terminology to deal with the different logical possibilities. Of course categorization people like Hintzman, Kruschke, Nosofsky, and maybe even Barsalou might argue that one doesn't know that an exemplar is a paragon during early acquisition, and that it only becomes a paragon after the pool of exemplars is given prototype structure. But all these further distinctions nicely facilitate thinking and theory, as George is saying. One distinction that may help David a bit is the contrast between the abstract prototype (the statistical mean of the features) and an instantiated prototype (perhaps something like George's paragon -- i.e. the robin as an "ideal" bird). --Brian MacWhinney From Ziv at HUM.HUJI.AC.IL Sun Oct 18 03:35:00 1998 From: Ziv at HUM.HUJI.AC.IL (Ziv Yael) Date: Sat, 17 Oct 1998 20:35:00 PDT Subject: Amnesty International Message-ID: ------------------------------------------------------------------------------ FORWARDED FROM: Ziv Yael Return-Path: Date: Thu, 15 Oct 1998 09:35:10 +0000 (GMT) From: Steve Nicolle Subject: Amnesty International To: relevance at linguistics.ucl.ac.uk Reply-To: S.Nicolle at mdx.ac.uk Message-Id: Organization: Middlesex University Mime-Version: 1.0 X-Mailer: Pegasus Mail for Windows (v2.53/R1) Content-Type: text/plain Content-Transfer-Encoding: 7BIT Priority: normal content-length: 964 Dear Friends, To celebrate the 50th Anniversary of the Universal Declaration of Human Rights, Amnesty International is collecting signatures for a pledge to support this very important United Nations declaration. Amnesty already has 3 million signatures (real and virtual) world wide, and wants 8 million (which would be a significant proportion of the world's population of around 6 billion). 
The UN Secretary General has already agreed to be present either in person or live by satellite to receive the pledge as a tangible statement of the people of the world's commitment to an international agenda of human rights. The simplest way to add your name to the pledge is to: Send an e-mail to udhr50th at amnesty.org.au Put YOUR NAME in the SUBJECT Put the following text in the message: 'I support the rights and freedoms in the Universal Declaration of Human Rights for all people, everywhere'. PLEASE FORWARD THIS MESSAGE TO AS MANY PEOPLE AS YOU CAN. From kfeld at CITRUS.UCR.EDU Sat Oct 17 20:39:57 1998 From: kfeld at CITRUS.UCR.EDU (David B. Kronenfeld) Date: Sat, 17 Oct 1998 15:39:57 -0500 Subject: George Lakoff on Exemplars and prototypes Message-ID: At 09:50 AM 10/16/98 -0700, George Lakoff wrote: ... > >There are many different types of prototypes, each with different inference >patterns >(e.g., typical cases, ideal cases, social stereotypes, centers of radial >categories, etc.) and >various types of exemplars, again with different inference patterns (e.g., >paragons, salient exemplars, antiparagons, etc.). In the sense that this list represents ways in which some item can be focal within the set of referents of some expression, and thus represents kinds of focality that we should be aware of in our research, I agree. At the same time, though, I want to warn against the possibility of taking this list (or others like it) as too directly representing the distinctions that inhere in the phenomena themselves. Until we have a better understanding of the reasoning processes involved, including the underlying abilities and perceptions these reasoning processes build on, such a taxonomy seems premature. Relevant here is the question concerning the degree to which we sort potential referents into some pre-existing kinds of relational categories vs. construct our representations of relationships among referents in some more constructivist or even ad hoc manner. 
> >Psychologists have been fairly sloppy in not distinguishing among the >different logical types -- largely, I think, because they tend not to study >inferences. I find this fairly bizarre, because inferences are what >reasoning is about. > Yes! David Kronenfeld From lili at IVY.NENU.EDU.CN Sun Oct 18 01:52:44 1998 From: lili at IVY.NENU.EDU.CN (lili) Date: Sun, 18 Oct 1998 09:52:44 +0800 Subject: RELATIONAL PROCESS Message-ID: Hi! Can anybody tell me the way to prove that there is a strong link between the quantities of information (how much meaning the speaker intended to convey, which is represented through the text) and the real intention of the speaker (the intended bias of the speaker; he may be likely to cheat)? Namely, whether the amount of information has something to do with the reporter's likes or dislikes? This question seems to have a lot to do with critical linguistics, which takes a lot of resources from functional grammar. Thanks!! Li Li Foreign Languages Institute of Northeast Normal University Changchun, Jilin P.R. China Code: 130024 Tel: 86-0431-5649962 Leonard_li at bigfoot.com From David_Tuggy at SIL.ORG Sat Oct 17 18:13:00 1998 From: David_Tuggy at SIL.ORG (David_Tuggy at SIL.ORG) Date: Sat, 17 Oct 1998 13:13:00 -0500 Subject: Larry Barsalou's note via Mike Tomasello Message-ID: The same is true in quite a bit of "Cognitive linguistics" work (including some I have written)--what is billed as "prototype" categorization is reacting to distinctive-feature or strict-boundary categorization, and does not have the prototype/exemplar distinction in mind. --David Tuggy ______________________________ Reply Separator _________________________________ Subject: Larry Barsalou's note via Mike Tomasello Author: kfeld at CITRUS.UCR.EDU at internet Date: 10/16/98 12:23 AM In reference to Larry Barsalou's very interesting information regarding exemplars vs. 
prototypes, it is worth noting that the exemplar/prototype distinction has not been made within much semantic work in anthropology, where the contrast has been "prototype" ("kernel" or "core")-based definitions vs. distinctive feature definitions of whole categories. Discussions of "prototypes" in anthropology may really, to a greater or lesser degree, pertain to Barsalou's exemplars; it will be necessary to consider the ways that the "prototypes" in question are actually defined and used in any given case to determine how they relate to the exemplar/prototype distinction. I offer this observation because there seems some possibility of useful insights coming from both directions, and it would be a shame if such exchange were short-circuited by a labeling glitch. From David_Tuggy at SIL.ORG Sat Oct 17 20:43:00 1998 From: David_Tuggy at SIL.ORG (David_Tuggy at SIL.ORG) Date: Sat, 17 Oct 1998 15:43:00 -0500 Subject: exemplars and prototypes Message-ID: Must exemplars be "specific real-world instance[s], i.e. a particular dog or a particular candle."? Couldn't the non-particular, less-specific concept GERMAN SHEPHERD be one of the exemplars for a category such as DOG, or VOTIVE CANDLE IN A SMALL GLASS be an exemplar for CANDLE? I can't (at least consciously) recall any particular such candle, yet I would have said that kind of candle was, for me, an exemplar of the category. How would you tell for sure if a person in a psycholinguistic experiment was responding to the concept MY (GERMAN SHEPHERD) DOG DUCHESS or to DUCHESS AND DOGS LIKE HER? Couldn't a non-real-world, and generic, concept such as WOOKIE be an exemplar for ALIEN RACE? In a sense, even something as specific as MY DOG DUCHESS isn't fully specific, but is "a merger of the best or common features of the many exemplar[y]" experiences I had of Duchess. (Sorry--I'm trying to learn how these words are being used. But I suspect some others might have the same questions.) 
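[Editor's note: the exemplar/prototype contrast being probed in these exchanges can be made concrete with a small numerical sketch. This is an illustrative toy, not any particular psychological model (real exemplar models such as Nosofsky's generalized context model are considerably more elaborate); the feature vectors and the similarity function are invented for the example.]

```python
# Toy contrast between an exemplar model (store every instance, compare
# a new item to each one) and a prototype model (store only the average
# of the instances). Feature vectors and similarity are invented.
import math

def similarity(a, b):
    # exponential decay with distance: nearby items are very similar
    dist = math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return math.exp(-dist)

class ExemplarModel:
    """Stores every instance; category strength = summed similarity to all."""
    def __init__(self, instances):
        self.instances = instances
    def strength(self, item):
        return sum(similarity(item, inst) for inst in self.instances)

class PrototypeModel:
    """Stores only the feature-wise average; compares new items to that."""
    def __init__(self, instances):
        n = len(instances)
        self.prototype = [sum(col) / n for col in zip(*instances)]
    def strength(self, item):
        return similarity(item, self.prototype)

dogs = [[1.0, 0.9], [0.8, 1.0], [0.2, 0.1]]  # two typical instances, one odd one
new = [0.3, 0.2]                             # resembles only the odd instance
# the exemplar model's response is dominated by the stored odd instance;
# the prototype model sees only the blended average, far from the new item
print(ExemplarModel(dogs).strength(new))
print(PrototypeModel(dogs).strength(new))
```

The design point the sketch isolates is the one at issue: once instances are merged into an average, information about atypical stored instances is gone, whereas an exemplar model can still be pulled toward them.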
--David Tuggy ______________________________ Reply Separator _________________________________ Subject: exemplars and prototypes Author: macw at CMU.EDU at internet Date: 10/16/98 12:39 PM Regarding David Kronenfeld's note on exemplars and prototypes and the possibility of terminological slippage, let me say that the distinction is fairly clear in the psychological literature. An exemplar is a specific real-world instance, i.e. a particular dog or a particular candle. A prototype is a merger of the best or common features of the many exemplars. David is referring to the contrast in cognitive anthropology between featural theory and prototype theory. This contrast also exists in psychology and many papers have been written arguing for one or the other, but no one really challenges the potential relevance of exemplars during the initial phases of induction. The issue is whether the role of exemplars in the final system is secondary and peripheral or major and central. In any case, I don't sense any terminological slippage. Instead, I think there is a basic disagreement in both fields regarding (1) the relative importance of exemplars and (2) the decision to opt for feature theory vs. prototype theory. The range of my reading in cognitive anthropology is fairly restricted, so I am happy to stand corrected on this. --Brian MacWhinney From macw at CMU.EDU Sun Oct 18 07:17:07 1998 From: macw at CMU.EDU (Brian MacWhinney) Date: Sun, 18 Oct 1998 01:17:07 -0600 Subject: exemplars and prototypes In-Reply-To: <199810180430.XAA17801@listserv.rice.edu> Message-ID: --On Sat, Oct 17, 1998 3:43 PM -0500 David_Tuggy at SIL.ORG wrote: > Must exemplars be "specific real-world instance[s], i.e. a particular > dog or a particular candle."? Couldn't the non-particular, > less-specific concept GERMAN SHEPHERD be one of the exemplars for a > category such as DOG, or VOTIVE CANDLE IN A SMALL GLASS be an > exemplar for CANDLE? Sure, that's fine. 
That would be a subordinate category serving as a part of the database for a superordinate. But it is not what exemplar theories in psychology are assuming. They are assuming some real German Shepherd, not the union of the features of all German Shepherds you have met. Exemplars are real things. Like the third votive candle from the left in my cupboard -- the one with the green tinge and heavy base. By the way, don't all votive candles end up being in small glasses? --Brian MacWhinney From kfeld at CITRUS.UCR.EDU Sun Oct 18 05:46:37 1998 From: kfeld at CITRUS.UCR.EDU (David B. Kronenfeld) Date: Sun, 18 Oct 1998 00:46:37 -0500 Subject: George Lakoff on Exemplars and prototypes Message-ID: At 09:50 AM 10/16/98 -0700, George Lakoff wrote: ...
> There are many different types of prototypes, each with different inference patterns (e.g., typical cases, ideal cases, social stereotypes, centers of radial categories, etc.) and various types of exemplars, again with different inference patterns (e.g., paragons, salient exemplars, antiparagons, etc.).
In the sense that this list represents ways in which some item can be focal within the set of referents of some expression, and thus represents kinds of focality that we should be aware of in our research, I agree. At the same time, though, I want to warn against the possibility of taking this list (or others like it) as too directly representing the distinctions that inhere in the phenomena themselves. Until we have a better understanding of the reasoning processes involved, including the underlying abilities, perceptions, and presuppositions these reasoning processes build on, such a typology seems premature. Relevant here is the question concerning the degree to which we sort potential referents into some pre-existing kinds of relational categories vs. construct our representations of relationships among referents in some more constructivist or even ad hoc manner.
> Psychologists have been fairly sloppy in not distinguishing among the different logical types -- largely, I think, because they tend not to study inferences. I find this fairly bizarre, because inferences are what reasoning is about.
Yes! David Kronenfeld From chafe at HUMANITAS.UCSB.EDU Sun Oct 18 23:29:19 1998 From: chafe at HUMANITAS.UCSB.EDU (Wallace Chafe) Date: Sun, 18 Oct 1998 16:29:19 -0700 Subject: exemplars and prototypes In-Reply-To: <479289.3117662227@jubilation.psy.cmu.edu> Message-ID: On Sun, 18 Oct 1998, Brian MacWhinney wrote:
> Exemplars are real things. Like the third votive candle from the left in my cupboard -- the one with the green tinge and heavy base.
It seems to me that the distinction here should not be based on "reality" (whatever that is) vs. unreality, but on particular instances vs. categories. Imagined entities can be particulars just as well as real ones. Moby Dick is as particular a whale as Willy. Wally Chafe From susan at LING.UTA.EDU Wed Oct 21 21:26:26 1998 From: susan at LING.UTA.EDU (Susan Herring) Date: Wed, 21 Oct 1998 16:26:26 -0500 Subject: New Ph.D. program in Linguistics Message-ID: * NEW DOCTORAL PROGRAM IN LINGUISTICS * * AND OPPORTUNITIES FOR STUDENT SUPPORT * * OFFERED AT THE UNIVERSITY OF TEXAS AT ARLINGTON * The University of Texas at Arlington (UTA) announces the availability of four supported doctoral positions for new students entering the UTA Ph.D. program in Linguistics in Spring 1999. The Ph.D. in Linguistics at UTA, among the newest doctoral programs in linguistics available in the United States, provides students with education and training in a range of specializations, including discourse analysis and text linguistics, sociolinguistics, semantics and translation, and literacy. Special attention is given to the role of field work in linguistic studies, including the study and documentation of lesser-studied languages.
Training is also provided in the application of computing methods to linguistic analysis. Supported doctoral positions will be awarded on a competitive basis to new students accepted into the program on or before January 12, 1999. Support will likely take the form of research assistantships (no teaching required) in which students contribute to the research activities of the Program in Linguistics, according to the program's needs and students' backgrounds, interests, and skills. Successful candidates will be guaranteed support for the spring semester and will be eligible for continuing support in subsequent academic years. In addition to the new Ph.D., the Linguistics Program at UTA continues to offer an M.A. in Linguistics as well as a 19-hour Graduate Certificate in TESOL. For further information about graduate study in linguistics at UTA or to request an application for admission and support to the doctoral program, contact the Linguistics Graduate Advisor, Dr. Irwin Feigenbaum, at irwin at ling.uta.edu or (817) 272-3133. Information on degree requirements, faculty, and course offerings is available on the UTA Linguistics web site at http://ling.uta.edu. The University of Texas at Arlington, the second-largest campus in the University of Texas System, is located in the center of the Dallas-Fort Worth Metroplex, a major urban and cultural area in the United States. Information about the University of Texas at Arlington is available at http://www.uta.edu. From eitkonen at UTU.FI Thu Oct 22 10:51:48 1998 From: eitkonen at UTU.FI (Esa Itkonen) Date: Thu, 22 Oct 1998 13:51:48 +0300 Subject: psychological reality Message-ID: The dichotomy 'concrete forms in storage & short computations vs. abstract forms in storage & long computations' was THE issue when the question of psychological reality was discussed in the 70's (and the question is still with us today). Per Linell's 300-page book 'Psychological reality in phonology' (CUP 1979) is devoted to this issue (see e.g.
the section on 'Demand for excessive computing'). I don't think it would hurt anybody to have a look at this book, which shows, once again, that very often (although not always) what we would like to see as progress is nothing but ignorance of the past. The dichotomy 'memorization vs. rule-generalization (i.e. analogy)' is a somewhat separate issue because it concerns learning; and something that has first been learned by analogy can later become memorized as such. Esa Itkonen From charon at UCLINK4.BERKELEY.EDU Fri Oct 23 06:57:19 1998 From: charon at UCLINK4.BERKELEY.EDU (charon at UCLINK4.BERKELEY.EDU) Date: Thu, 22 Oct 1998 23:57:19 -0700 Subject: 2nd BLS Call for Papers Message-ID: Please distribute the following announcement to all interested parties. THE BERKELEY LINGUISTICS SOCIETY BLS 25 CALL FOR PAPERS The Berkeley Linguistics Society is pleased to announce its Twenty-Fifth Annual Meeting, to be held February 13-15, 1999. The conference will consist of a General Session and a Parasession on Saturday and Sunday, followed by a Special Session on Monday. **************************************************************************** *** General Session: The General Session will cover all areas of general linguistic interest. Invited Speakers CAROL FOWLER, Haskins Laboratories, Univ. of Connecticut, Yale Univ. STEPHEN LEVINSON, Max Planck Institut für Psycholinguistik, Nijmegen BJÖRN LINDBLOM, Univ. of Stockholm and Univ. of Texas, Austin ALEC MARANTZ, Massachusetts Institute of Technology **************************************************************************** *** Parasession: Loan Word Phenomena The Parasession invites papers on loan word phenomena from various theoretical, historical, sociolinguistic, and typological perspectives, as well as descriptive works and field reports.
Areas of interest include stratification of the lexicon and loan word 'subgrammars', re-lexification, the role of orthography, markedness effects, second-language acquisition, child language, bilingualism and code-switching, etc. Invited Speakers ELLEN BROSELOW, State University of New York, Stony Brook GARLAND CANNON, Texas A&M University JUNKO ITO & ARMIN MESTER, University of California, Santa Cruz **************************************************************************** *** Special Session: Issues in Caucasian, Dravidian and Turkic Linguistics The Special Session will feature research on Caucasian, Dravidian and Turkic languages. Papers addressing both diachronic and synchronic issues are welcome. Potential topics include theoretical and descriptive accounts of structural features, writing systems and transcription problems, language reform, and the reconstruction of the respective Proto-languages, including the question of Altaic linguistic unity. Invited Speakers LARS JOHANSON, Universität Mainz K.P. MOHANAN, National University of Singapore JOHANNA NICHOLS, University of California, Berkeley **************************************************************************** *** We encourage proposals from diverse theoretical frameworks and welcome papers from related disciplines, such as Anthropology, Cognitive Science, Computer Science, Literature, Philosophy, and Psychology. Papers presented at the conference will be published in the Society's Proceedings, and authors who present papers agree to provide camera-ready copy (not to exceed 12 pages) by May 15, 1999. Presentations will be allotted 20 minutes with 10 minutes for questions. We ask that you make your abstract as specific as possible, including a statement of your topic or problem, your approach, and your conclusions. Please send 10 copies of an anonymous one-page (8 1/2" x 11", unreduced) abstract. A second page, or reverse side of the single page, may be used for data and references only.
Along with the abstract send a 3"x5" card listing: (1) paper title, (2) session (General, Parasession, or Special), (3) for general session abstracts only, subfield, viz., Discourse Analysis, Historical Linguistics, Morphology, Philosophy and Methodology of Linguistics, Phonetics, Phonology, Pragmatics, Psycholinguistics, Semantics, Sociolinguistics, or Syntax, (4) name(s) of author(s), (5) affiliation(s) of author(s), (6) address to which notification of acceptance or rejection should be mailed (in November 1998), (7) author's office and home phone numbers, (8) author's e-mail address, if available. An author may submit at most one single and one joint abstract. In case of joint authorship, one address should be designated for communication with BLS. Send abstracts to: BLS 25 Abstracts Committee, 1203 Dwinelle Hall, University of California, Berkeley, CA 94720. Abstracts must be received by 4:00 p.m., November 2, 1998. We may be contacted by e-mail at bls at socrates.berkeley.edu. Information on e-mail submission and additional guidelines for abstracts can be found at our web site (http://www.linguistics.berkeley.edu/BLS). We will not accept faxed abstracts. Registration Fees: Before February 5, 1999: $15 for students, $30 for non-students; after February 5, 1999: $20 for students, $35 for non-students. From dick at LINGUISTICS.UCL.AC.UK Fri Oct 23 15:52:33 1998 From: dick at LINGUISTICS.UCL.AC.UK (Dick Hudson) Date: Fri, 23 Oct 1998 16:52:33 +0100 Subject: Deacon Message-ID: I've just read Terrence Deacon's lovely book "The Symbolic Species", and wondered if anyone could point me to a review either by a linguist or by a supporter of Chomsky's or Pinker's view of innateness. I'd also be interested in the views of anyone on this list.
============================================================================== Richard (=Dick) Hudson Department of Phonetics and Linguistics, University College London, Gower Street, London WC1E 6BT work phone: +171 419 3152; work fax: +171 383 4108 email: dick at ling.ucl.ac.uk web-sites: home page = http://www.phon.ucl.ac.uk/home/dick/home.htm unpublished papers available by ftp = ....uk/home/dick/papers.htm From mackenzi at LET.VU.NL Mon Oct 26 14:32:47 1998 From: mackenzi at LET.VU.NL (J.L. Mackenzie) Date: Mon, 26 Oct 1998 14:32:47 MET Subject: Deacon Message-ID: Like Dick Hudson, I too have just read and enormously enjoyed Terrence Deacon's *The Symbolic Species* (Penguin, 1997). Not only is the book very well written, but it is also the most eloquent, erudite and effective debunking of the nativist position on the language abilities of Homo sapiens that I have seen. He takes Peirce's distinction between icon, index and symbol and argues that, against the evolutionary odds, the ancestors of the human being two million years ago acquired the power of symbolic thought, and claims that language developed as an evolutionary consequence. Against the nativist position that "the language faculty must be innate, otherwise no child could learn it", Deacon argues -- to me convincingly -- that language structure has a "kid-friendly logic", i.e. has evolved to be such that it can be readily acquired by children with their particular thought processes at their stage of mental development. Indeed, the human being has evolved to become a "savant of language and symbolic learning"; the genetic basis for symbol-learning abilities has become a fixation, i.e. a universal trait of the species. This book, with its claim that language and the brain have co-evolved, seems to me to offer an alternative view on the cognitive status of language that will be attractive to functionalists and will dovetail with our research findings as functionalists.
Not unusually, Deacon tends to equate linguists with nativists, but he is aware of the functionalist psycholinguistic tradition from Vygotsky to Bates. Dick Hudson asked for reviews. I've seen the following: http://www.nytimes.com/books/97/08/10/reviews/970810.10calvint.html (by William Calvin) and http://www.wam.umd.edu/~mturn/WWW/deacon.html (by Mark Turner) Lachlan Mackenzie Department of English Faculty of Letters Vrije Universiteit De Boelelaan 1105 1081 HV Amsterdam Netherlands tel: +31-20-444 6492 fax: +31-20-444 6500 home phone: +31-20-671 1491 e-mail: mackenzi at let.vu.nl From coulson at COGSCI.UCSD.EDU Tue Oct 27 01:22:05 1998 From: coulson at COGSCI.UCSD.EDU (Seana Coulson) Date: Mon, 26 Oct 1998 17:22:05 -0800 Subject: Deacon Message-ID: While cleaning my desk I turned up another review of Deacon's fantastic book (_The Symbolic Species_) besides the ones mentioned already on the list: Poeppel, David. 1997. Mind over chatter. Nature 388: 734. Poeppel received his Ph.D. from MIT's Department of Brain and Cognitive Science, and in 1997 anyway, was doing a neuroimaging post-doc at UCSF's Biomagnetic Imaging Laboratory. -Seana Coulson From jkyle at EAGLE.CC.UKANS.EDU Thu Oct 29 03:20:53 1998 From: jkyle at EAGLE.CC.UKANS.EDU (John Kyle) Date: Wed, 28 Oct 1998 21:20:53 -0600 Subject: Call for Papers (KWPL) Message-ID: Please Post ***********************Call for Papers************************ *****KANSAS WORKING PAPERS IN LINGUISTICS***** Number 1: General Linguistics Number 2: Studies in Native American Languages Deadline: January 31, 1999 The editors of Kansas Working Papers in Linguistics will produce two numbers of Volume 24, for 1999. We welcome submissions of papers on all topics in the field of linguistics and closely-related disciplines for Number 1. Papers dealing with native languages of the Americas will be selected for Number 2. 
Since KWPL is a working papers series, publication in KWPL does not preclude later publication elsewhere of revised versions of papers. Submissions should be in good, readable form (double- or 1.5-spaced), not necessarily final copies. Student papers are encouraged. Please include your name, address, and email address (if possible) when sending correspondence. Please send papers or inquiries to this address: Editors, KWPL Linguistics Department 427 Blake Hall University of Kansas Lawrence, Kansas 66045 e-mail: LGSA at kuhub.cc.ukans.edu ******************************* John Kyle, editor KWPL jkyle at ukans.edu From bralich at HAWAII.EDU Fri Oct 30 22:30:01 1998 From: bralich at HAWAII.EDU (Philip A. Bralich, Ph.D.) Date: Fri, 30 Oct 1998 12:30:01 -1000 Subject: Ergo Talks with Microsoft Agents (Free Software) Message-ID: Microsoft has recently made some of its agent technology available on the web at http://www.microsoft.com/agents. Most well-known is a 3-D parrot called "Peedy." Ergo Linguistics has just modified its patented "ChatterBox" technology to make it possible to speak with "Peedy" and the other agents. For those who are interested in viewing this talking desktop agent, we can provide the necessary setup files, which will set everything up and put the "Peedy" icon on the desktop. The "ChatterBox.exe" file will set up ChatterBox, which will automatically allow you to speak to Peedy. Once you have set up ChatterBox and "Peedy" from this setup file, just type in sentences like the following; you can then ask the corresponding questions.
John gave mary a book because it was her birthday
did John give mary a book
what did john give mary
who gave mary a book
who did john give a book
why did john give mary a book

the tall dark stranger is carrying a bloody knife
what is the stranger doing
what is the stranger carrying
was the stranger carrying a knife

you saw the tall dark stranger in the park
where did you see the stranger
what did you see
what did you see in the park

thomas jefferson is the third president of the United States
who is the third president of the United States

The Yankees won the 1998 World Series
WHAT won the 1998 World Series

*Currently the program does not know that the "Yankees" are people, so it is necessary to use "What" for this question.

and so on. Of course you could build a variety of story or educational files to talk to Peedy about, but for this early version it is just fun to put in a few sentences and chat with him. This is also available with the Virtual Friend technology at http://www.haptek.com. Our web site is http://www.ergo-ling.com if you have any further interest in our NLP technology. Or... if you have a WIN95 animation of your own, we would be happy to show you how to connect ChatterBox to it. I will be showing this in Boston at the SBIR National Conference November 3-5. I will also be giving a lecture and demonstration of this technology at Northeastern University (Thursday at noon, room 415 in the Classroom Building) while I am there. If you have anyone in town at that time or at that conference, ask them to stop by and I will give them a more thorough introduction to the ChatterBox technology and our other NLP tools. Because my company is an SBIR grantee, we will have display space in the SBIR section near the main entrance. Phil Bralich Philip A. Bralich, Ph.D. President and CEO Ergo Linguistic Technologies 2800 Woodlawn Drive, Suite 175 Honolulu, HI 96822 Tel: (808)539-3920 Fax: (808)539-3924 bralich at hawaii.edu http://www.ergo-ling.com From jtang at COGSCI.BERKELEY.EDU Sat Oct 31 19:24:47 1998 From: jtang at COGSCI.BERKELEY.EDU (Joyce Tang Boyland) Date: Sat, 31 Oct 1998 11:24:47 -0800 Subject: storage / computation again Message-ID: This is a bit of a delayed response to Brian (I'm just coming off of grading mid-terms): I hope I did not imply that we should use a model of word learning as the basis for understanding the automatization of syntactic patterns. If anything, it should be the opposite: that we might use the automatization of syntactic patterns as the basis for understanding the formation of new words (like _brokeded_, which does have "slots," so to speak, and transitions). Also, building off Martin Haspelmath's posting, which muses about criteria to use in deciding whether storage or processing is superior, I think it's worth trying out a stronger claim, namely that neither storage+retrieval nor computation is intrinsically superior. In skill learning, you can start either with storage+retrieval of an exemplar (or prototype), or with computation of a sequence. It depends on the details of the situation which route is used. But whichever way you learn to use a new construction, the outcome is not that you have added a new construction (or even a new word) to your piggy bank, but that you have smoothed a set of transitions so that such a sequence is more readily assembled and produced. I agree with Brian that it would be great to have more empirical psycholinguistic research on these topics, and that it is relevant both to acquisition and to historical change. Joyce Tang Boyland
>> Date: Wed, 14 Oct 1998 17:34:12 -0600
>> Sender: FUNKNET -- Discussion of issues in Functional Linguistics
>> From: Brian MacWhinney
>> Subject: rote vs rules
>>
>> 3. I agree with Joyce that language is a skill. However, the devil is in the details.
>> If we fail to recognize the fundamental difference between word learning and syntactic automatization, I am worried that we could go down some false paths. The routinization of the word is supported by a tightly predictive association between audition and articulation. When we hear a new auditory form, it appears that we use the phonological loop on some level to store it. As we then attempt to match to this form using our own articulations, we convert a resonant auditory form to an entrenched articulatory form. Work by Baddeley, Gupta, Cowan and others has taught us a great deal about the details of this process. Yes, you can use ACT-R to model this, but you will be using a restricted subset of ACT-R, and the process of deciding what restrictive subset is applicable is the whole of the scientific process of understanding the neuronal mechanics of word learning.
>>
>> Trying to use a model of word learning as the basis for understanding the automatization of syntactic patterns strikes me as quite problematic. The central problem is that predicates have open slots for arguments. Words, as Wally notes, are largely slot-free (of course there are exceptions, such as infixes, etc.). I tend to think of this level of skill automaticity in terms of Michael Jordan faking out Karl Malone in the last points of the final game of the NBA finals. Jordan clearly has a flexible set of plans for dunking the ball into the basket against the opposition of a defender. What is automatic in his actions is the move from one state to the next. The skill is in the transitions. It strikes me that sentence production is like this and that word-level articulation is basically not.
>>
>> --Brian MacWhinney