From darnell at CSD.UWM.EDU Mon Jun 1 13:43:49 1998 From: darnell at CSD.UWM.EDU (Michael Darnell) Date: Mon, 1 Jun 1998 08:43:49 -0500 Subject: UWM conference program Message-ID: THE 24TH UNIVERSITY OF WISCONSIN-MILWAUKEE LINGUISTICS SYMPOSIUM September 10-12, 1998 DISCOURSE ACROSS LANGUAGES AND CULTURES *GENERAL INFORMATION*: - TIME: September 10-12, Thursday, Friday, and Saturday: Thursday, 9:15-5:45 Friday, 8:30-6:00 Saturday, 9:00-6:15 There will be social gatherings both Friday and Saturday night. - PLACE: The meeting will take place in the Union Building on the University of Wisconsin-Milwaukee campus (2200 E. Kenwood Boulevard, Milwaukee, WI). All sessions will be held in the Wisconsin Room. - REGISTRATION: Registration will begin on Thursday at 8:00 a.m. and will continue through Friday (please contact us about Saturday-only registration). - CONFERENCE HANDBOOK: For those who cannot attend, the conference handbook, which includes all the abstracts for presented papers, will be available for purchase by mail. For information on this, or for other questions contact Mike Darnell at darnell at csd.uwm.edu *PROGRAM* *THURSDAY* 9:15-9:30 Welcome 9:30-10:15 Carol MODER (Oklahoma State University) TBA 10:15 -11:00 Robert LONGACRE (U.of Texas at Arlington) TBA 11:00-12:30 Lunch 12:30-1:00 Aneta PAVLENKO(Cornell University) Narrative Construction: Cross-linguistic and Cross-cultural Perspective 1:00-1:30 Mary DIGENNARO-SEIG(Oklahoma State U.) Episodic Boundaries in Japanese and English Narratives 1:30-2:00 Tania GASTAO SALIES(Pontificia Universidade Catolica) Texts as Image-Schemas: The Communicative Text 2:00-2:30 Michael BARLOW (Rice University) Parallel Concordancing and Contrastive Discourse 2:30-2:45 Break 2:45-3:15 Ingrid PILLER (Universitat Hamburg) Topic Control in Bilingual Couples' Conversations 3:15-3:45 Patricia MAYES (U. of California, Santa Barbara) Genre as a Locus of Cultural Ideology: A Comparison of Japanese and American Cooking Classes 3:45-4:15 Paul Kei MATSUDA (Purdue University) Negotiation of Identity in a Japanese On-Line Discourse Community 4:15-4:45 Brad DAVIDSON (Stanford University) The Medical Interpreter as the Site of Cross- cultural Interpretation 4:45-5:00 Break 5:00-5:45 Ronald SCOLLON (Georgetown University) TBA *FRIDAY* 8:30-9:15 Susanna CUMMING (U. of California, Santa Barbara) TBA 9:15-9:30 Break 9:30-10:00 JoAnne NEFF (Universidad Complutense) Contrastive Discourse Analysis: Argumentative Text in English and Spanish 10:00-10:30 Finn FRANDSEN & Winni JOHANSEN (The Aarhus School of Business) When Arguments Turn Green: Discourse, Genre, and Culture in "Green" Marketing Communications in France and Denmark 10:30-11:00 Elizabeth ARCAY HANDS & Ligia COSSE (Universidad de Carabobo) Multidimensional Analysis of Academic Essays Written in Venezuelan Spanish and British and North-American English by Monolingual and Bilingual Scholars 11:00-11:30 James J. MULLOOLY (Columbia University) Reading English in Arabic: Applied Contrastive Rhetoric 11:30 to 1 Lunch 1:00-1:30 Lafi ALHARBI (Kuwait University) Rhetorical Transfer Across Cultures: English into Arabic and Arabic into English 1:30-2:00 Christine GEOFFROY(The University of Technology) The English and the French: Cross-cultural Discourse Analysis in a Business Environment 2:00-2:30 Julia LAVID & Maite TABOADA (Universidad Complutense)Stylistic Differences in Document Design Across Languages in Europe: A Cross- cultural Comparison 2:30-2:45 Break 2:45-3:15 Erica HOFMANN KENCKE(U. 
of Texas at Austin) Making Understanding Understood: Another Challenge for Non-native Speakers 3:15-3:45 Masako TAMANAHA (U. of California, Los Angeles) Interlanguage Apologies by American learners of Japanese: A Comparison with Native Speakers of Japanese 3:45-4:15 Euen HYUK JUNG (Georgetown University) Apologies from a Cross-cultural Perspective 4:15-4:30 Break 4:30-5:15 Dan SLOBIN (U. of California, Berkeley) TBA 5:15-6:00 William EGGINGTON (Brigham Young University) TBA *SATURDAY* 9:00-9:45 Ruth BERMAN (Tel-Aviv University) TBA 9:45-10:00 Break 10:00-10:30 Hikyoung LEE (University of Pennsylvania) Discourse Marker Use in Native and Non-native English-speaking Korean Americans 10:30-11:00 Janet M. FULLER (Southern Illinois U.) Discourse Markers in Codeswitching and Borrowing: a Bridge between Language and Cultures 11:00-11:30 Suzanne FLEISCHMAN(U. of California, Berkeley) Discourse Markers Across Language? Evidence from French and English 11:30-1:00 Lunch 1:00-1:45 Sonja TIRKONNEN-CONDIT (University of Joensuu) TBA 1:45-2:00 Break 2:00-2:30 John K. HELLERMANN (U. of Wisconsin, Madison) Prosody and Transistion-relevance-spaces in English and Hungarian Conversations 2:30-3:00 Rebecca DAMRON (The University of Tulsa) Prosody in Urdu and Pakistani English Conversational Discourse 3:00-3:30 Laura Hsiu-min LIU (National Taiwan University)A Cross-Linguistic Analysis of Pause Markers in Spoken Chinese and Seediq 3:30-3:45 Break 3:45-4:15 Maite TABOADA (Universidad Complutense) Rhetorical Structure Theory in Dialogue: A Contrastive Analysis 4:15-4:45 Ivo SANCHEZ (U. of California, Santa Barbara) Spontaneous Rhetoric: Lists in English and Spanish Conversation 4:45-5:15 Amy MEEPOE (U. of California, Los Angeles) & Makoto HAYASHI (U. of Colorado, Boulder) Formulating Person Reference in Thai and Japanese: A Cross-linguistic Study of 'Zero' Pronouns in Conversation. 5:15-5:30 Break 5:30-6:15 Wallace CHAFE (U. of California, Santa Barbara) TBA From JMARIN at SR.UNED.ES Mon Jun 1 19:06:42 1998 From: JMARIN at SR.UNED.ES (Juana Marin, UNED, SPAIN) Date: Mon, 1 Jun 1998 14:06:42 -0500 Subject: AELCO-SCOLA E-mail List Message-ID: ************************************************************ SORRY IF YOU RECEIVE THIS MESSAGE MORE THAN ONCE ************************************************************ AELCO-SCOLA E-MAIL LIST The Spanish Cognitive Linguistics Association (AELCO-SCOLA) has recently started an e-mail list called LingCog. It is open to anybody who might be interested in keeping informed about cognitive linguistics in Spain. You may subscribe to LingCog from your own e-mail account by using the following information: To: Majordomo at fil.ub.es Subject: (Anything) Body: subscribe lingcog Thank you, Joseph Hilferty From mackenzi at LET.VU.NL Tue Jun 2 10:26:19 1998 From: mackenzi at LET.VU.NL (J.L. Mackenzie) Date: Tue, 2 Jun 1998 10:26:19 MET Subject: Eighth International Conference on Functional Grammar Message-ID: The Eighth International Conference on Functional Grammar (ICFG8) is being held at the Vrije Universiteit Amsterdam (Netherlands) from July 6th through 9th. 
For full details, including the conference program and abstracts of all papers, as well as information on travel and accommodation, see: http://www.mis.coventry.ac.uk/FGIS/8thICFG.html To register as a participant in the conference, please contact: icfg8 at let.vu.nl Lachlan Mackenzie Free University Amsterdam From JMARIN at SR.UNED.ES Tue Jun 2 11:34:08 1998 From: JMARIN at SR.UNED.ES (Juana Marin, UNED, SPAIN) Date: Tue, 2 Jun 1998 06:34:08 -0500 Subject: (Fwd) AELCO-SCOLA E-mail List Message-ID: ------- Forwarded Message Follows ------- Date: Thu, 28 May 1998 14:44:11 -0700 (PDT) From: Joseph Hilferty To: cogling at ucsd.edu Subject: AELCO-SCOLA E-mail List Reply-to: Joseph Hilferty ************************************************************ SORRY IF YOU RECEIVE THIS MESSAGE MORE THAN ONCE ************************************************************ AELCO-SCOLA E-MAIL LIST The Spanish Cognitive Linguistics Association (AELCO-SCOLA) has recently started an e-mail list called LingCog. It is open to anybody who might be interested in keeping informed about cognitive linguistics in Spain. You may subscribe to LingCog from your own e-mail account by using the following information: To: Majordomo at fil.ub.es Subject: (Anything) Body: subscribe lingcog Thank you, Joseph Hilferty From clements at INDIANA.EDU Tue Jun 2 19:32:14 1998 From: clements at INDIANA.EDU (J. Clancy Clements (Kapil)) Date: Tue, 2 Jun 1998 14:32:14 -0500 Subject: studies on the acquisition of possessive pronouns/adjectives (fwd) Message-ID: I'm looking for studies on the acquisition of possessive pronouns/adjectives in any language (L1 or L2). If there's interest, I'd be happy to make a list of the references I receive for Funknet. Clancy Clements From bralich at HAWAII.EDU Fri Jun 5 01:18:43 1998 From: bralich at HAWAII.EDU (Philip A. Bralich, Ph.D.) Date: Thu, 4 Jun 1998 15:18:43 -1000 Subject: 3-D/NLP Demo Message-ID: To help demonstrate the possibilities and the sheer fun of NLP with 3-D animations we have just produced a demo product using characters from Haptek Technologies. The software package allows you to chat with an alien. The main point is to input statements you want to query such as You saw the tall dark stranger in the park. The tall dark stranger was carrying a knife. Then you can ask things like, "What was the stranger doing?" "Where did you see the stranger?" And then get the answer straight from the alien's lips. The graphics and speech generation technology from Haptek are very nice and make this a very pleasurable intro to the future marriage of edutainment and NLP. Using this format of course you could input an entire murder mystery. The guys who write scripts for muds and moos could probably get a hundred stories for this one character alone. It is available from the download section of our web site at http://www.ergo-ling.com. Great Fun! Phil Bralich Philip A. Bralich, Ph.D. President and CEO Ergo Linguistic Technologies 2800 Woodlawn Drive, Suite 175 Honolulu, HI 96822 Tel: (808)539-3920 Fax: (808)539-3924 Philip A. Bralich, President Ergo Linguistic Technologies 2800 Woodlawn Drive, Suite 175 Honolulu, HI 96822 tel:(808)539-3920 fax:(880)539-3924 From meira at RUF.RICE.EDU Sat Jun 6 04:07:32 1998 From: meira at RUF.RICE.EDU (Sergio Meira S.C.O.) 
Date: Fri, 5 Jun 1998 23:07:32 -0500 Subject: studies on the acquisition of possessive pronouns/adjectives (fwd) In-Reply-To: Message-ID: Dear Funknetters, I was recently told that 'historical-comparative linguistics tends to reconstruct past order from present disorder', so that we always get a nicer, more orderly picture of grammar in the past than in the present. Would anyone happen to know of any references on the topic of whether more disordered states can be reconstructed (e.g. unpredictable alternations, irregularities in form/distribution of morphemes, semantic unpredictability etc.), especially when the present-day state is more orderly? Sergio Meira meira at ruf.rice.edu From Ziv at HUM.HUJI.AC.IL Sun Jun 7 01:05:00 1998 From: Ziv at HUM.HUJI.AC.IL (Ziv Yael) Date: Sat, 6 Jun 1998 18:05:00 -0700 Subject: Call for Papers+please publicise Message-ID: ------------------------------------------------------------------------------ FORWARDED FROM: Ziv Yael Return-Path: Date: Fri, 5 Jun 1998 01:06:00 +0300 (IDT) Message-Id: Mime-Version: 1.0 Content-Type: text/plain; charset="us-ascii" To: dascal at spinoza.tau.ac.il, elda at bgumail.bgu.ac.il, hbzs22 at post.tau.ac.il, jonathan at research.haifa.ac.il, manor at ccsg.tau.ac.il, mariel at ccsg.tau.ac.il, mskcusb at pluto.mscc.huji.ac.il, sarfati at ccsg.tau.ac.il, shir at bgumail.bgu.ac.il, tamark at construct.haifa.ac.il, ziv at hum.huji.ac.il From: anatbi at post.tau.ac.il (Anat Biletzki) Subject: Call for Papers Call for Papers PRAGMA99 International Pragmatics Conference on PRAGMATICS AND NEGOTIATION June 13-16, 1999 Tel Aviv University and Hebrew University of Jerusalem Tel Aviv and Jerusalem Israel The main theme of this conference is the pragmatics of negotiation, interpreted in a very broad sense. Interlocutors engage in negotiations about every aspect of their interaction - such as floor access and topic selection, contextual assumptions, conversational goals, and the (mis)interpretation and repair of their messages. Topics such as cross-cultural and cross-gender (mis)communications, conversational procedures in disputes and collaborations, argumentation practices, and effects of assumptions and goals on the negotiating strategies of interlocutors are of special interest for this conference. The conference will be interdisciplinary, bringing together pragmaticists, linguists, philosophers, anthropologists, sociologists and political scientists. We are soliciting papers on all issues relevant to the theme of the conference, as well as papers in other areas of pragmatics and dialogue analysis. The conference will include plenary addresses, regular session lectures, and organized panels around any of the relevant topics. Among the plenary speakers: Elinor Ochs (UCLA), Itamar Rabinovitch (Tel Aviv University), Emanuel Schegloff (UCLA), Thomas Schelling (University of Maryland), Deborah Schiffrin (Georgetown University), Deborah Tannen (Georgetown University), Ruth Wodak (University of Vienna). Regular session lectures are 30 minutes long, with a subsequent discussion of 10 minutes. Panels take the form of a series of closely related lectures on a specific topic, which may or may not be directly related to the special topic of the conference. They may consist of one, two or three units of 120 minutes.
Within each panel unit a maximum of four 20-minute presentations are given consecutively, followed by a minimum of 30 minutes of discussion (either devoted entirely to an open discussion, or taken up in part by comments by a discussant or discussants). Panels are composed of contributions attracted by panel organizers, combined with individually submitted papers when judged appropriate by the Program Committee in consultation with the panel organizers. Typically, written versions or extensive outlines of all panel contributions should be available before the conference to facilitate discussion. SUBMISSIONS Abstracts for papers and panels should be submitted in the following format: 1. For papers - five copies of an anonymous abstract (up to 300 words). 2. For panels - a preliminary proposal of one page, detailing title, area of interest, name of organizer(s) and invited participants to be sent by August 1, 1998. Organizers of approved panels will then be invited to submit a full set of abstracts, including: a. a brief description of the topic area, b. a list of participants (with full details, see below), c. abstracts by each of the participants by November 1, 1998. 3. In all cases, a page stating: a. title, b. audiovisual/computer request, and c. for each author: I. Full name and affiliation; II. Current address; III. E-mail address; IV. Fax number. Deadline for submission of abstracts: Nov. 1, 1998. Abstracts may be sent by hard copy, disk, or e-mail to Pragma99, Faculty of Humanities, Tel Aviv University, Tel Aviv 69978, ISRAEL. E-mail: pragma99 at post.tau.ac.il Date of notification: March 1, 1999. PROGRAM COMMITTEE: Mira Ariel, Hava Bat-Zeev Shyldkrot, Jonathan Berg, Anat Biletzki, Shoshana Blum-Kulka, Marcelo Dascal, Nomi Erteschik-Shir, Tamar Katriel, Ruth Manor, George-Elia Sarfati, Elda Weizman, Yael Ziv. ============================================================ PRAGMA99 REGISTRATION FORM Please send the following information, accompanied by cheque payable to Tel-Aviv University in the amount of US$75 if paid before November 1, 1998, otherwise US$100, to Pragma99 Faculty of Humanities Tel Aviv University Tel Aviv 69978, ISRAEL Dr./Mr./Mrs./Ms./ Name:__________________________ Address:_______________________________________________ University/Organization:___________________________________ Email:__________________________ Fax:____________________(Home)_______________(Office) Telephone:____________________(Home)_____________(Office) Signature:_____________________ Date:________________ Those wishing to pay by credit card should provide the following information: Type of Credit Card: Mastercard/Visa/American Express Name as it appears on Credit Card: Sum of Paymnt: US$__________ Card No.________________________ Expiration Date: __________________ Date:_______________ Signature: _____________________ ********** Those wishing to present a paper should follow the instructions above. Hotel information will be provided after registration. The International Association for Dialogue Analysis is co-sponsoring a part of our conference, which will be devoted to "Negotiation as a Dialogic Concept." For further information, contact Edda Weigand (e-mail: weigand at uni-muenster.de). ============================================================ [Forms can also be returned by fax to 972-3-6407839, or by e-mail to pragma99 at post.tau.ac.il . 
] From spikeg at OWLNET.RICE.EDU Fri Jun 12 11:24:57 1998 From: spikeg at OWLNET.RICE.EDU (Spike L Gildea) Date: Fri, 12 Jun 1998 06:24:57 -0500 Subject: Symposium on Ideophones (fwd) Message-ID: ---------- Forwarded message ---------- From: "Dr. phil. Christa Kilian-Hatz, M.A." Dear Colleagues, the Institute for African Linguistics, University of Cologne, is organizing a SYMPOSIUM on IDEOPHONES here in Cologne on January 24-28, 1999. While the central theme of the symposium is ideophones in African languages, we are hoping to extend our investigation well beyond the geographical confines of that continent. Anyone interested in the topic is encouraged to contact: F.K. Erhard Voeltz or Christa Kilian-Hatz at this e-mail address/or Institut fuer Afrikanistik Universitaet zu Koeln D-50923 Koeln Cologne/Germany From BFORD at BLACKWELLPUBLISHERS.CO.UK Tue Jun 16 09:30:14 1998 From: BFORD at BLACKWELLPUBLISHERS.CO.UK (Ford Beck) Date: Tue, 16 Jun 1998 10:30:14 +0100 Subject: SYNTAX - a journal of theoretical, experimental and interdisciplinary research Message-ID: > I am delighted to announce the launch of SYNTAX, a new international > peer-reviewed journal focusing on all areas of syntactic research. > > SYNTAX aims to unite related but often disjointedly represented areas > of syntactic inquiry together in one publication. Within a single > forum SYNTAX will accommodate both the explosive growth and increased > specialization in the field of syntax. > > Free Sample Copy Available > To order your free sample copy, please reply to > egilling at blackwellpublishers.co.uk with 'SYNTAX-SAMPLE COPY REQUEST 7' > in the subject line and your full name and postal address in the > message. > > Special Offer - Electronic Access is included in your institutional > subscription to the print edition. > > Editors: > Samuel D.
Epstein, University of Michigan, USA > Suzanne Flynn, Massachusetts Institute of Technology, USA > > Topics covered by SYNTAX include: > * Syntactic theory > * Syntactic interface with morphology, phonology and semantics > * First language acquisition > * Second language acquisition > * Bilingualism > * Learnability theory > * Computational linguistics > * Neurolinguistics > * Philosophy of mind > * Pragmatics, discourse models > * Parsing > * Parsing-syntactic interface > > Articles in the First Volume include: > > * V-Positions in West Flemish, Liliane Haegeman > * Movement and Chains, Norbert Hornstein > * Logical Problem of Language Change, Parthi Niyogi & Robert Berwick > * The Typology of WH-Movement: WH Questions in Malay, Peter Cole & > Gaby Hermon > > Other 1998 Contributors: *Noam Chomsky *Janet Fodor *Mark Hale > *Richard Kayne *Barbara Lust *Reiko Mazuka *James McCloskey & Sandy > Chung *Jean-Yves Pollock *Edward Stabler *Hoskuldur Thrainsson & > Jonathan Bobaljik *Esther Torrego > > For further information on SYNTAX, visit: > http://www.blackwellpublishers.co.uk/asp/journal.asp?ref=13680005 > > ISSN: 1368-0005, Published in April, August and December, Volume 1, > 1998 > > Institutional Subscription Rates: £79.00 (UK/Rest of World), $125.00 > (N America) > Personal Subscription Rates: £29.00 (UK/Rest of World), $45.00 (N > America) > > Best wishes, > > Emily Gillingham > Senior Marketing Controller > Blackwell Publishers Ltd > 108 Cowley Road > Oxford, OX4 1JF > UK > > Tel: +44 (0) 1865 382265 > Fax: +44 (0) 1865 381265 > Email: egilling at blackwellpublishers.co.uk > http://www.blackwellpublishers.co.uk > > From spikeg at OWLNET.RICE.EDU Wed Jun 17 11:24:16 1998 From: spikeg at OWLNET.RICE.EDU (Spike L Gildea) Date: Wed, 17 Jun 1998 06:24:16 -0500 Subject: LSA Bulletin available on the web (fwd) Message-ID: Date: Tue, 16 Jun 1998 15:04:31 +0100 From: LSA The June 1998 LSA Bulletin (No. 160) is now available at the Society's website (http://www.lsadc.org). From shelli at BABEL.LING.NWU.EDU Wed Jun 17 20:14:19 1998 From: shelli at BABEL.LING.NWU.EDU (Michele Feist) Date: Wed, 17 Jun 1998 15:14:19 -0500 Subject: animacy and spatial terms Message-ID: Greetings, I'm a graduate student at Northwestern University working on the semantics of spatial prepositions. I've started to investigate the kinds of factors about a scene that influence the use of these terms, and I've become interested in the effect of animacy. My question for the list: Have you (or anyone you know of) investigated the effect of either the animacy of the object located (Talmy's Figure) or or the animacy of the reference object (Talmy's Ground) on the use of spatial relational terms? Thanks in advance for any help, Michele Michele Feist Department of Linguistics Northwestern University 2016 Sheridan Road Evanston, IL 60208 m-feist at nwu.edu From lakoff at COGSCI.BERKELEY.EDU Thu Jun 18 04:40:21 1998 From: lakoff at COGSCI.BERKELEY.EDU (George Lakoff) Date: Wed, 17 Jun 1998 21:40:21 -0700 Subject: animacy and spatial terms Message-ID: There's a huge literature on spatial terms in cognitive linguistics. What kinds of examples were you thinking of? George >Greetings, > >I'm a graduate student at Northwestern University working on the semantics >of spatial prepositions. I've started to investigate the kinds of factors >about a scene that influence the use of these terms, and I've become >interested in the effect of animacy. 
> >My question for the list: Have you (or anyone you know of) investigated >the effect of either the animacy of the object located (Talmy's Figure) or >or the animacy of the reference object (Talmy's Ground) on the use of >spatial relational terms? > >Thanks in advance for any help, > >Michele > >Michele Feist >Department of Linguistics >Northwestern University >2016 Sheridan Road >Evanston, IL 60208 > >m-feist at nwu.edu From delancey at OREGON.UOREGON.EDU Thu Jun 18 15:43:15 1998 From: delancey at OREGON.UOREGON.EDU (Scott Delancey) Date: Thu, 18 Jun 1998 08:43:15 -0700 Subject: animacy and spatial terms Message-ID: > My question for the list: Have you (or anyone you know of) investigated > the effect of either the animacy of the object located (Talmy's Figure) > or the animacy of the reference object (Talmy's Ground) on the use of > spatial relational terms? Not sure exactly what you're looking for here. There are languages (the ones I can think of offhand are Tibeto-Burman, but there are others) where you can't use the same locative construction with a human as with an inanimate Ground. (I'm not sure offhand what happens with non-human animates). I.e. you can say 'The child ran to the door' with a simple locative construction, but to say 'The child ran to his father' you need a relator noun construction, you can't just put the locative marker directly on 'father'. Is this any use to you? Scott DeLancey Department of Linguistics University of Oregon Eugene, OR 97403, USA delancey at darkwing.uoregon.edu http://www.uoregon.edu/~delancey/prohp.html From lakoff at COGSCI.BERKELEY.EDU Mon Jun 22 08:05:27 1998 From: lakoff at COGSCI.BERKELEY.EDU (George Lakoff) Date: Mon, 22 Jun 1998 01:05:27 -0700 Subject: Invariance Principle Message-ID: >Iraide Ibarretxe wrote: >> >> Estoy trabajando con el Invariance Principle de Lakoff, y estaria muy >> agradecida si me pudierais facilitar bibliografia que trate sobre el mismo. > >Se me ocurre: > >Brugman, Claudia. 1990. What is the Invariance Hypothesis? > _Cognitive Linguistics_ 1(2): 257-266. > >Lakoff, George. 1990. The Invariance Hypothesis: Is Abstract > Reasoning Based Image-Schemas? _Cognitive Linguistics_ > 1(1): 39-74. > >_______. 1993. The Contemporary Theory of Metaphor. In > _Metaphor and Thought_, Andrew Ortony (ed.), 202-251. > Cambridge:Cambridge University Press. > >Turner, Mark. 1990 Aspects of the Invariance Hypothesis. > _Cognitive Linguistics_ 1(2): 247-255. > >_______. 1990. Poetry: Metaphor and the Conceptual Context > of Invention. _Poetics Today_ 11(3): 463-482. > >_______. 1991. _Reading Minds: The Study of English in the Age > of Cognitive Science_. Princeton, NJ: Princeton University > Press. > >________. 1992. Language is a Virus. _Poetics Today_ 13(4): > 725-736. > >________. 1996. _The Literary Mind_. Oxford: Oxford University > Press. > >Joe Hilferty >__________________________________________________________ >Home page: http://lingua.fil.ub.es/~hilferty/homepage.html Hi, I recommend taking a look at Srini Narayanan's dissertation, which can be found at the Neural Theory Of Language website, www.icsi.berkeley.edu/NTL. The course website that Jerry Feldman and I used for teaching an introductory course on NTL is www.icsi.berkeley.edu/mbrodsky/Cogsci110. In NTL, the Invariance Principle is unnecessary. Its effects arise automatically from the Neural Theory applied to metaphor. 
In Narayanan's model, metaphorical mappings are neural connections allowing source domain inferences (activations in his model) to activate target domain structure. The two parts of the Invariance Principle follow automatically. Inferential structure is "preserved" since metaphorical inferences are done in the source domain. And what about apparent "overrides", where inherent target structure takes precedence when there is a possibility of contradiction? In the neural theory, contradiction is neural inhibition. Activations resulting from source domain inferences that are neurally inhibited by target structures simply never occur because they are neurally impossible. No extra "principle" is necessary. What makes the neural theory interesting is that it really uses embodiment. The basic claim is that abstract reason IS sensory-motor inference--done in the sensory-motor centers--with resulting activations projected to other parts of the brain by neural connections. Narayanan demonstrates in his dissertation that aspectual reasoning--reasoning about event structure--has the same inferential structure as motor control. This is about as dramatic a confirmation of the theory as neural modeling studies allow. Johnson and I will be discussing Narayanan's research in our book Philosophy In The Flesh, which will appear from Basic Books in early November. By the way, the original work leading up to the Invariance Principle was in Chapter 4 of More Than Cool Reason (Lakoff and Turner), 1989. Best wishes, George Lakoff From lakoff at COGSCI.BERKELEY.EDU Mon Jun 22 08:36:15 1998 From: lakoff at COGSCI.BERKELEY.EDU (George Lakoff) Date: Mon, 22 Jun 1998 01:36:15 -0700 Subject: animacy and spatial terms Message-ID: >On Wed, 17 Jun 1998, George Lakoff wrote: > >> There's a huge literature on spatial terms in cognitive linguistics. What >> kinds of examples were you thinking of? >> >> George > >I was wondering whether anyone had looked at possible effects on the use >of any spatial term due to either exchanging an inanimate Ground for an >animate Ground, or an inanimate Figure for an animate Figure, in a scene >that is otherwise the same. For example, if I had a picture of a hand >holding a coin, and exchanged a dish for the hand (at the same curvature), >without making any other changes, would that influence the use of "in"? > >More generally, I'd like to know if you know of any work that examines the >animacy of either the Figure or the Ground as a factor that speakers use >to assign a spatial term to a scene, including references about languages >that don't allow the same term to be used with both animate and inanimate >Figures/Grounds (Scott DeLancey mentioned Tibeto-Burman as one that >doesn't allow the same term to be used for human and inanimate Grounds). > >Thanks for any help, > >Michele > >Michele Feist >Department of Linguistics >Northwestern University >2016 Sheridan Road >Evanston, IL 60208 > >m-feist at nwu.edu Dear Michelle, Is this the kind of thing you have in mind: In English, you can say "I went to the President" but not "*I'm at the President." Compare "I went to the White House" and "I'm at the White House." Suppose Harry is lying on the ground. You can say My jacket is across Harry if it is on top of him stretched across him, but not if it is on the other side of him. Compare with My glass is across the table *My glass is across Harry The latter is out even if Harry is stretched out on the floor and my glass is on the other side of him.
By the way, Postal's old live/nonlive distinction does not occur in this case: *My glass is across the corpse. is no good, even if the corse is spread out in front of you on the floor and your glass is on the other side. English is a good language to look at for such phenomena. Incidentally, metaphor matters here. "He's always at his mother" works only in the metaphorical sense of "at." Compare with "He's always at his mother's" and "He always goes to his mother." Other interesting phenomena: I came across Harry. cannot be used if you ran into him on the street. It works fine if Harry is treated as an object: I came across Harry unconscious in a dumpster. Here Postal's live/nonlive distinction does matter. If you came across Harry's dead body, you can't describe it as I came across Harry in the morgue. English is fun. Enjoy! George From SAMG at PUCC.PRINCETON.EDU Mon Jun 22 12:05:24 1998 From: SAMG at PUCC.PRINCETON.EDU (Sam Glucksberg) Date: Mon, 22 Jun 1998 08:05:24 EDT Subject: No subject Message-ID: suspend From shelli at BABEL.LING.NWU.EDU Mon Jun 22 19:54:47 1998 From: shelli at BABEL.LING.NWU.EDU (Michele Feist) Date: Mon, 22 Jun 1998 14:54:47 -0500 Subject: animacy and spatial terms In-Reply-To: Message-ID: Dear George, These are certainly the kinds of examples I'm interested in; thanks for sending them on! I'm not familiar with Postal's live/non-live distinction; could you send me a reference so I could read more about it? Also, do you know of any references for work that's looked at the kinds of contrasts you mentioned? Thanks again for all your help! Michele On Mon, 22 Jun 1998, George Lakoff wrote: > Dear Michelle, > > Is this the kind of thing you have in mind: > > In English, you can say "I went to the President" but not "*I'm at the > President." > Compare "I went to the White House" and "I'm at the White House." > > Suppose Harry is lying on the ground. You can say > My jacket is across Harry > if it is on top of him stretched across him, but not if it is on the other > side of him. Compare with > My glass is across the table > *My glass is across Harry > The latter is out even if Harry is stretched out on the floor and my glass > is on the other side of him. > By the way, Postal's old live/nonlive distinction does not occur in > this case: > *My glass is across the corpse. > is no good, even if the corse is spread out in front of you on the floor > and your glass is on the other side. > > English is a good language to look at for such phenomena. > > Incidentally, metaphor matters here. "He's always at his mother" works only > in the metaphorical sense of "at." > Compare with "He's always at his mother's" and "He always goes to his mother." > > Other interesting phenomena: > I came across Harry. > cannot be used if you ran into him on the street. It works fine if Harry is > treated as an object: > I came across Harry unconscious in a dumpster. > Here Postal's live/nonlive distinction does matter. If you came across > Harry's dead body, you can't describe it as > I came across Harry in the morgue. > > English is fun. > > > Enjoy! 
> > George > > > Michele Feist Department of Linguistics Northwestern University 2016 Sheridan Road Evanston, IL 60208 m-feist at nwu.edu From eitkonen at UTU.FI Tue Jun 23 21:56:34 1998 From: eitkonen at UTU.FI (Esa) Date: Tue, 23 Jun 1998 12:56:34 -0900 Subject: 'totally novel sentence' Message-ID: Dear colleagues I am about to finish a paper on which I have been working some time and, just to make sure that I have got everything right, I have to ask you the following question. In the not-so-distant past it was widely claimed that speaker-hearers constantly encounter and understand (utterances of) 'totally novel sentences'. Did anyone of you understand what was meant by this curious statement? I certainly did not. If I know the language in question, every sentence that I hear has some obvious similarities (or analogies) to sentences that I have heard before. I never hear sentences exemplifying totally novel sentence structures or containing totally novel grammatical morphemes, and I seldom hear sentences containing totally novel lexical units. So, to repeat, did anyone of you ever understand what was meant by this often-repeated slogan? Esa Itkonen From john at RESEARCH.HAIFA.AC.IL Tue Jun 23 11:05:41 1998 From: john at RESEARCH.HAIFA.AC.IL (John Myhill) Date: Tue, 23 Jun 1998 14:05:41 +0300 Subject: novel sentences Message-ID: I think what they meant was sentences which were not 100% identical to sentences which had been used before (or perhaps which the speaker had heard before). For example, if I write `The purple and green fox gave 157 turnips to the piebald cow', this is a sentence which in all likelihood has never been used before in the history of the world, so it is on this understanding a `totally novel sentence,' even if whatever words and structures are used are analogous to similar structures in common sentences. I'm pretty sure that this was the idea. I think they were arguing against an extremely naive position (probably a strawman) which might be attributed to super-behaviorism saying something like people only reproduce sentences which are exact copies of sentences which they've already heard. That's the only sense I could make of such statements-- the whole discussion seems silly and pointless and the kind of argument you would only need to make to an 8-year-old. John Myhill From delancey at OREGON.UOREGON.EDU Tue Jun 23 15:26:57 1998 From: delancey at OREGON.UOREGON.EDU (Scott Delancey) Date: Tue, 23 Jun 1998 08:26:57 -0700 Subject: 'totally novel sentence' In-Reply-To: Message-ID: I always assumed (and, I think, was probably explicitly taught somewhere along the line) that all it meant was a sentence which is not EXACTLY identical to any sentence that you've heard before. (Even that, of course, is not as true as a lot of people like to assume). Scott DeLancey From dryer at ACSU.BUFFALO.EDU Tue Jun 23 17:38:40 1998 From: dryer at ACSU.BUFFALO.EDU (Matthew S Dryer) Date: Tue, 23 Jun 1998 13:38:40 -0400 Subject: novel sentences In-Reply-To: Message-ID: With respect to John Myhill's "the whole discussion seems silly and pointless and the kind of argument you would only need to make to an 8-year-old", it is my experience that most lay people, such as undergraduates in intro linguistics classes, are sufficiently naive about language that nearly nothing is obvious to them and that this kind of observation,with elaboration by example, is in fact quite instructive. 
The fact that so many of the sentences we hear are novel does seem to me an important and fundamental property of language. Furthermore, it represents a fundamental difference between sentences and words (at least for most languages). From that perspective, I see this as hardly "silly and pointless". It is true that this property of language was used in arguments against behaviourism - and not just a strawman position, but versions of behaviourism that were once dominant in psychology - but I would have thought that this was something sufficiently basic to be something that is common ground for nearly all linguists, formalist, functionalist, cognitivist or whatever. I suspect, as Scott Delancey suggests, that the novelty of sentences one hears is probably exaggerated, but the basic point still holds. Matthew Dryer From slobin at COGSCI.BERKELEY.EDU Tue Jun 23 18:11:16 1998 From: slobin at COGSCI.BERKELEY.EDU (Dan I. SLOBIN) Date: Tue, 23 Jun 1998 11:11:16 -0700 Subject: 'totally novel sentence' In-Reply-To: Message-ID: This was never a puzzle to me; rather, I found it a great insight when I first encountered it as an undergraduate in the late fifties, and have been passing it on to my students ever since. Simply put: you almost never encounter the same sentence twice--i.e., syntactic construction plus lexical items. (What is novel is each unique combination of words and morphosyntactic patterns.) The consequence is that, although you can learn your vocabulary by rote, you can't learn your sentences by rote. Ergo, language acquisition must be conceived of as a "generative" or "constructivist" accomplishment. The achievements of recent decades of linguistics do not dim this insight, but merely make it more important: to be sure, the child learns syntactic patterns, constructions, rich lexical entries, and so forth--but each actual utterance, produced or received, calls upon general processing skills (to use another old term, "competence"). -Dan Slobin Psychology, UC Berkeley On Tue, 23 Jun 1998, Esa wrote: > Dear colleagues > I am about to finish a paper on which I have been working some > time and, just to make sure that I have got everything right, I have to > ask you the following question. In the not-so-distant past it was widely > claimed that speaker-hearers constantly encounter and understand > (utterances of) 'totally novel sentences'. Did anyone of you understand > what was meant by this curious statement? I certainly did not. If I know > the language in question, every sentence that I hear has some obvious > similarities (or analogies) to sentences that I have heard before. I > never hear sentences exemplifying totally novel sentence structures or > containing totally novel grammatical morphemes, and I seldom hear > sentences containing totally novel lexical units. So, to repeat, did > anyone of you ever understand what was meant by this often-repeated slogan? > > Esa Itkonen > From jrubba at POLYMAIL.CPUNIX.CALPOLY.EDU Tue Jun 23 20:09:34 1998 From: jrubba at POLYMAIL.CPUNIX.CALPOLY.EDU (Johanna Rubba) Date: Tue, 23 Jun 1998 13:09:34 -0700 Subject: 'totally novel sentence' Message-ID: I always understood 'totally novel sentence' to mean a totally new _combination_ of previously existing units -- the fundamental insight behind this being that grammar provides us with underspecified patterns into which existing forms can be plugged, enabling the creativity which is one element distinguishing human language from animal communication systems.
That this creativity is theoretically infinite was, I have been trained to believe, an important insight of generative linguistics. I find this nontrivial. But this great insight does have a downside -- I believe it helped drive a wedge between semantics and syntax by giving the impression that the patterns were the core of the grammar, and you could plug any old thing into them and get a 'grammatical' sentence, such as the turnip doozie contributed by John Myhill. 'Grammatical, but semantically deviant' has been with us ever since. Why exclude the potential for making sense of a particular concatenation of lexical items from the purview of 'grammaticality'? ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Johanna Rubba Assistant Professor, Linguistics ~ English Department, California Polytechnic State University ~ San Luis Obispo, CA 93407 ~ Tel. (805)-756-2184 Fax: (805)-756-6374 ~ E-mail: jrubba at polymail.calpoly.edu ~ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ From amnfn at WELL.COM Tue Jun 23 20:58:36 1998 From: amnfn at WELL.COM (A. Katz) Date: Tue, 23 Jun 1998 13:58:36 -0700 Subject: No subject Message-ID: Dan Slobin observed as follows: >the child learns syntactic patterns, constructions, rich >lexical entries, and so forth--but each actual utterance, produced or >received, calls upon general processing skills (to use another old term, >"competence"). I think functionalists and generativists agree that general processing skills are required in order to produce or comprehend language. (The big question is whether any of the actual rules are pre-wired, or they are learned by exposure using generalized cognitive mechanisms of pattern recognition that apply to many other acitivities besides language processing.) What's rather interesting is that the observation that hitherto unuttered sentences are comprehensible and have an agreed meaning is true of not just natural language. It works for computer languages as well. You can write a new program in any computer language using a combination of commands that was never before juxtaposed in quite that way, and provided you have not made a syntax error, the program will run and do exactly what you told it to do. (Which may or may not be what you intended.) Likewise, the `objective' meaning of an utterance in a given speech community can be demonstrated by the phenomenon of hearers consistently interpreting a statement one way when the speaker intended it to mean something else. "That may be what you meant," people have been known to pronounce, "but it's certainly not what you said." The cognitive mechanism behind comprehension -- whether it be generative or not -- is not implicated by the fact of relatively original utterances having predetermined meanings. --Aya Katz \\//\\//\\//\\//\\//\\//\\//\\//\\//\\//\\//\\//\\//\\//\\//\\//\\//\\//\\// Dr. Aya Katz, 3918 Oak, Brookfield, Illinois 60513-2019 (708) 387-7596 //\\//\\//\\//\\//\\//\\//\\//\\//\\//\\//\\//\\//\\//\\//\\//\\//\\//\\//\\ From ellen at CENTRAL.CIS.UPENN.EDU Tue Jun 23 21:53:38 1998 From: ellen at CENTRAL.CIS.UPENN.EDU (Ellen F. Prince) Date: Tue, 23 Jun 1998 17:53:38 EDT Subject: No subject In-Reply-To: Your message of "Tue, 23 Jun 1998 13:58:36 PDT." <199806232058.NAA26691@well.com> Message-ID: "A. Katz" wrote: >I think functionalists and generativists agree that general processing skills >are required in order to produce or comprehend language. 
(The big question is >whether any of the actual rules are pre-wired, or they are learned by exposure >using generalized cognitive mechanisms of pattern recognition that apply to >many other acitivities besides language processing.) > >What's rather interesting is that the observation that hitherto unuttered >sentences are comprehensible and have an agreed meaning is true of not just >natural language. It works for computer languages as well. > >You can write a new program in any computer language using a combination of >commands that was never before juxtaposed in quite that way, and provided you >have not made a syntax error, the program will run and do exactly what you >told it to do. (Which may or may not be what you intended.) > >Likewise, the `objective' meaning of an utterance in a given speech community >can be demonstrated by the phenomenon of hearers consistently interpreting a >statement one way when the speaker intended it to mean something else. "That >may be what you meant," people have been known to pronounce, "but it's >certainly not what you said." > >The cognitive mechanism behind comprehension -- whether it be generative or >not -- is not implicated by the fact of relatively original utterances having >predetermined meanings. But what makes the notion of novel sentences interesting for natural language is precisely the issue of acquisition, which you alluded to in your first paragraph but dropped. The issue of acquisition of computer languages is rather different... From Jon.Aske at SALEM.MASS.EDU Tue Jun 23 23:23:36 1998 From: Jon.Aske at SALEM.MASS.EDU (Jon Aske) Date: Tue, 23 Jun 1998 19:23:36 -0400 Subject: 'totally novel sentence' In-Reply-To: Message-ID: I think nobody would disagree with the claim that we all learned in Linguistics 101 that the number of possible sentences in a language is infinite. On the other hand, there is no doubt that a lot of the sentences that are uttered by speakers are not novel and that collocations of all types abound, as Fillmore and many others have rightly emphasized. I wish somebody would finally listen to Bolinger's suggestion (see below) and actually work out some plausible estimates for the degree of novelty that is actually found in the speech, as opposed to the theoretical upper limit. Now, that would be an interesting psycholinguistic finding, I would think. (Perhaps someone has already speculated about this or done actual empirical research on this that I am not aware of). Anyway, here is the quote that I am so fond of: "The distinctive trait of generative grammar is its aim to be an ACTIVE portrait of grammatical processes. It departs from traditional grammar, which consists chiefly in the MAPPING of constructions. How much actual invention, on this model, really occurs in speech we shall know only when we have the means to discover how much originality there is in utterance. At present we have no way of telling the extent to which a sentence like I went home is the result of invention, and the extent to which it is a result of repetition, countless speakers before us having already said it and transmitted it to us in toto. Is grammar something where speakers 'produce' (i.e. originate) constructions, or where they 'reach for' them, from a preestablished inventory, when the occasion presents itself? 
If the latter, then the MATCHING technique of traditional grammar is the better picture--from this point of view, constructions are not produced one from another or from a stock of abstract components, but filed side by side, and their interrelationships are not derivative but mnemonic--it is easier to reach for something that has been stored in an appropriate place. Probably grammar is both of these things, but meanwhile the transformationist cannot afford to slight the spectrum of utterances which are first of all the raw material of his generalizations and last of all the test of their accuracy." (Bolinger 1961:381, Syntactic blends and other matters, A sort of review of R.B. Lees's Multiply Ambiguous Adjectival Constructions in English.) ------------------------------------------------------------- Jon Aske -- Jon.Aske at salem.mass.edu -- aske at earthlink.net Department of Foreign Languages, Salem State College Salem, Massachusetts 01970 - http://home.earthlink.net/~aske/ ------------------------------------------------------------- Too many pieces of music finish too long after the end. --Igor Stravinsky (1882-1971). From Jon.Aske at SALEM.MASS.EDU Tue Jun 23 23:52:10 1998 From: Jon.Aske at SALEM.MASS.EDU (Jon Aske) Date: Tue, 23 Jun 1998 19:52:10 -0400 Subject: 'totally novel sentence' In-Reply-To: Message-ID: After I sent the previous message with Bolinger's quote, I found another beautiful quote from his absolutely wonderful article "Meaning and Memory" (1979), which makes a similar point (cf. the second paragraph). I just can't resist the temptation to forward it to you. I am sorry if you don't find it as inspiring as I do :-) (By the way, "Syntactic blends and other matters" was in Language 37:3:366-381) "For a long time now linguists have been reveling in Theory with a capital T. If you assume that language is a system où tout se tient--where everything hangs together--then it follows that a connecting principle is at work, and the linguist's job is to construct a one-piece model to account for everything. It can be a piece with many parts and subparts, but everything has to mesh. That has been the overriding aim for the past fifteen years. But more and more evidence is turning up that this view of language cannot be maintained without excluding altogether too much of what language is supposed to be about. In place of a monolithic homogeneity, we are finding homogeneity within heterogeneity. Language may be an edifice where everything hangs together, but it has more patching and gluing about it than architectonics. Not every monad carries a microcosm of the universe inside; a brick can crumble here and a termite can nibble there without setting off tremors from cellar to attic. I want to suggest that language is a structure, but in some ways a jerrybuilt structure. That it can be described not just as homogeneous and tightly organized, but in certain of its aspects as heterogeneous but tightly organized. Specifically what I want to challenge is the prevailing reductionism--the analysis of syntax and phonology into determinate rules, of words into determinate morphemes, and of meanings into determinate features. I want to take an idiomatic rather than an analytic view and argue that analyzability always goes along with its opposite at whatever level, and that our language does not expect us to build everything starting with lumber, nails, and blueprint.
Instead it provides us with an incredibly large number of prefabs, which have the magical property of persisting even when we knock some of them apart and put them together in unpredictable ways." (Bolinger 1979:95-96, Meaning and memory, in Haydu, George G., ed., Experience forms: Their cultural and individual place and function, World anthropology, The Hague: Mouton) ------------------------------------------------------------- Jon Aske -- Jon.Aske at salem.mass.edu -- aske at earthlink.net Department of Foreign Languages, Salem State College Salem, Massachusetts 01970 - http://home.earthlink.net/~aske/ ------------------------------------------------------------- If this is coffee, please bring some tea; but if this is tea, please bring me some coffee. --Abraham Lincoln. From amnfn at WELL.COM Wed Jun 24 02:44:14 1998 From: amnfn at WELL.COM (A. Katz) Date: Tue, 23 Jun 1998 19:44:14 -0700 Subject: No subject Message-ID: "Ellen F. Prince" wrote: >But what makes the notion of novel sentences interesting for natural >language is precisely the issue of acquisition, which you alluded to >in your first paragraph but dropped. The issue of acquisition of >computer languages is rather different... My point is that there are no implications for the manner in which language acquisition is achieved from the phenomenon of novel sentences. This is because the phenomenon itself is not limited to human language, but is an inherent part of any sort of `language'. The possibility of `novel sentences' is built into any abstract code that carries information, regardless of how that code came into being, or what devices are used in order to interpret it. DNA code, which presumably came into being by a long and tortuous evolutionary path, also admits of novel sequences. To the extent that we are able to decode DNA sequences, we would be able to predict a resultant mutation from a change in the sequence, before any such change and mutation occurred. That is, novel sentences do have objective meaning. And yet we can hardly suppose that the chemicals on which DNA code operates have been fitted with `a language acquisition device.' (`Translation' of DNA code into the appropriate amino acids takes place through chemical reactions that are not unique to DNA interpretation.) The code is self-executing and there is no centralized control over the process by anything resembling a language acquisition device. On the other hand, the computer example that I mentioned earlier has precisely such a device (the CPU), since it was designed for a particular purpose and manufactured to specification. The basic rules of information coding are universal and transcend the physical mechanisms that make use of the code. Assuming that we know nothing else about human beings besides the rules of the languages they speak (including the fact that novel sentences have more or less predetermined meanings within a language community), this would tell us nothing about whether the sentences are interpreted -- or the language is acquired -- through a structure in the brain that is dedicated to language acquisition, or through a more flexible priming mechanism involving pattern recognition after repeated exposure. --Aya Katz \\//\\//\\//\\//\\//\\//\\//\\//\\//\\//\\//\\//\\//\\//\\//\\//\\//\\//\\// Dr.
Aya Katz, 3918 Oak, Brookfield, Illinois 60513-2019 (708) 387-7596 //\\//\\//\\//\\//\\//\\//\\//\\//\\//\\//\\//\\//\\//\\//\\//\\//\\//\\//\\ From john at RESEARCH.HAIFA.AC.IL Wed Jun 24 05:59:59 1998 From: john at RESEARCH.HAIFA.AC.IL (John Myhill) Date: Wed, 24 Jun 1998 08:59:59 +0300 Subject: novel sentences Message-ID: My mention of `8-year-old' may perhaps have been hyperbole, but in fact I cannot remember ever having heard an undergraduate student, however naive, express a view which suggested that they were under the impression that human beings could only repeat sentences which they had literally heard or read, word for word. I would certainly agree that lay people are naive about many things regarding language, but in my experience this does not seem to be one of them. However, this may be a consequence of the time I was educated (grad school late 70's and early 80's) and the time I have been teaching. Some of the statements I have read about language in the 1950's sound so bizarre from a contemporary standpoint that I can only assume that given certain assumptions specific to certain cultures and times, what is intuitively obvious or trivial at one point in time might be a deep insight at another. John Myhill >With respect to John Myhill's "the whole discussion seems silly and >pointless and the kind of argument you would only need to make to an >8-year-old", it is my experience that most lay people, such as >undergraduates in intro linguistics classes, are sufficiently naive about >language that nearly nothing is obvious to them and that this kind of >observation,with elaboration by example, is in fact quite instructive. >The fact that so many of the sentences we hear are novel does seem to me >an important and fundamental property of language. Furthermore, it >represents a fundamental difference between sentences and words (at least >for most languages). From that perspective, I see this as hardly "silly >and pointless". > >It is true that this property of language was used in arguments against >behaviourism - and not just a strawman position, but versions of >behaviourism that were once dominant in psychology - but I would have >thought that this was something sufficiently basic to be something that is >common ground for nearly all linguists, formalist, functionalist, >cognitivist or whatever. > >I suspect, as Scott Delancey suggests, that the novelty of sentences one >hears is probably exaggerated, but the basic point still holds. > >Matthew Dryer From eitkonen at UTU.FI Wed Jun 24 20:45:53 1998 From: eitkonen at UTU.FI (Esa) Date: Wed, 24 Jun 1998 11:45:53 -0900 Subject: novelty Message-ID: Dear colleagues Thank you for the responses (some of which I got privately); they were all useful. The consensus seems to be that in this particular context these two expressions are synonymous: 'A is completely novel with respect to B' = 'A is not exactly identical with B'. In any other context, of course, they are not synonymous, so they should not be synonymous here either. From this notion of 'complete novelty' it follows, for instance, that a grammar as simple as the one consisting of rules 'S -> Sa' and 'S -> a' generates an infinite number of completely novel sentences. Therefore my sympathy is with Fred Householder, who - in a review in 1969 - commented upon the claim of complete novelty as follows: "[This is] a claim so obviously false that [those who make it] must mean something else, though I cannot for the life of me figure out what."
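A minimal sketch of the point about the toy grammar, offered purely as an illustration (it is not part of the original post): in Python, the two rules S -> Sa and S -> a already yield an unbounded stream of pairwise distinct strings, so "completely novel" in the weak sense of "not exactly identical to anything generated before" is satisfied even by the simplest recursive grammar.

    # Illustrative only: a generator for the grammar S -> Sa | a.
    # Every output differs from every earlier output, so each string is
    # "completely novel" in the weak sense discussed above.
    def generate(max_n):
        s = "a"              # S -> a, the shortest derivation
        for _ in range(max_n):
            yield s
            s = s + "a"      # S -> Sa, one more application of the recursive rule

    seen = set()
    for sentence in generate(10):
        assert sentence not in seen   # never an exact repetition of an earlier string
        seen.add(sentence)
        print(sentence)               # a, aa, aaa, ...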
Incidentally, the fact that most sentences that we hear are new, i.e. not repetitions of what we have heard before, was duly noticed by linguists like Hermann Paul and Bloomfield. It was also a commonplace in the grammatical traditions of India and Arabia. Esa Itkonen From dick at LINGUISTICS.UCL.AC.UK Wed Jun 24 09:06:47 1998 From: dick at LINGUISTICS.UCL.AC.UK (Dick Hudson) Date: Wed, 24 Jun 1998 10:06:47 +0100 Subject: novel sentences Message-ID: Matthew Dryer and Dan Slobin both think it is worth pointing out that every sentence is novel. Does anyone have any evidence that anyone ever thought otherwise? The only evidence I can think of is Noam Chomsky's odd definition of a language as a set of sentences. Do lay people think that when they take a course in (say) German they're going to learn a list of sentences? I'd have thought that lay people were much more likely to think of a language as a set of words. Maybe I'm focussing on the wrong question. Are we really asking whether lay people are aware that there are rules controlling the ways in which words are combined? If so that's a very different question, because it's possible to define all the possible combinations of words without mentioning sentences at all. (That's how it's done in dependency grammars.) ============================================================================== Richard (=Dick) Hudson Department of Phonetics and Linguistics, University College London, Gower Street, London WC1E 6BT work phone: +171 419 3152; work fax: +171 383 4108 email: dick at ling.ucl.ac.uk web-sites: home page = http://www.phon.ucl.ac.uk/home/dick/home.htm unpublished papers available by ftp = ....uk/home/dick/papers.htm From macw at CMU.EDU Wed Jun 24 17:24:26 1998 From: macw at CMU.EDU (Brian MacWhinney) Date: Wed, 24 Jun 1998 11:24:26 -0600 Subject: novelty, Heraclitus, and children Message-ID: I think that Aya Katz is correct in noting that "The possibility of `novel sentences' is built into any abstract code that carries information, regardless of how that code came into being, or what devices are used in order to interpret it." The problem is even more general than that. As Heraclitus reminds us, "You could not step twice into the same rivers, for other waters are ever flowing on to you." Does this lead us to attribute creativity and novelty to rivers? Surely something is missing in any such analysis. I would agree with Householder that the claim of novelty is so obvious that it is surprising that it is even made. After all, even a finite state machine can produce many "novel" strings and no one would jump up and down about a language instinct on the basis of knowing that language is a finite state automaton. At the same time, Ellen Prince and Dan Slobin are correct in pointing out the importance of something close to this issue for child language acquisition. I would argue that the type of novelty that is really interesting for child language researchers is something very different from what has been mentioned in this discussion so far. It is the fact that children often produce sound strings, words, and utterances that diverge in revealing ways from the norms of the adult community. What is crucial is not novelty, but the failure to fully internalize or obey social norms. These errors demonstrate that the child is making his or her independent contribution to the language learning process. In this sense, "errors" such as "I poured my glass empty" reveal creative aspects of language learning.
For each creative error there are probably eight creative productions that just happen to match the social norms. Bolinger's view of language as built up from disparate pieces helps us out here. The child has one piece called "go" and another piece called "past tense = ed" and just doesn't remember that the socially sanctioned way of saying this is "went". It is this creativity that demonstrates a lack of full internalization of the social norms and which also gives us some of our best evidence regarding how the child learns and uses language. --Brian MacWhinney From dhargreave at FACULTYPO.CSUCHICO.EDU Wed Jun 24 17:31:00 1998 From: dhargreave at FACULTYPO.CSUCHICO.EDU (David Hargreaves) Date: Wed, 24 Jun 1998 10:31:00 -0700 Subject: novel sentences/folk psychology Message-ID: It seems to me useful to distinguish two issues here. First, there are the empirical questions regarding the biological/cognitive status of generative rules vs constructions and so on. Second, there are the questions regarding the "folk psychology" of language rules. Let me address the latter: In teaching thousands of undergraduates over the last six years at CSU, Chico, a small state school in California that requires an Intro to Linguistics and Intro to Second Language Acquisition for all (k-12) teaching credential candidates, I have learned never to underestimate the depth to which a naive behaviorism is part of the folk psychology of not only undergraduates, but also faculty in Education, Social Sciences, and the Humanities. The naive, but deeply held, intuitions that parents "teach" their children language, that language structure is "conditioned" by culture, that grammatical systematicity and material/intellectual culture are coextensive, and that language learning is mostly "memorizing phrases" are still widely shared across the social and intellectual landscapes. In this sense, giving undergraduates a close look at the evidence and arguments about "novel sentences," especially L1 and L2 errors, the "poverty of the stimulus," "colorless green ideas," and other mainstays of the linguistics introduction still function as powerfully persuasive tools for real intellectual and attitudinal growth by the socially important population of K-12 teachers, not to mention faculty in Ed, Soc.Sci, and Humanities. And even though funknetters have had much to say about the shortcomings of Pinker's "Language Instinct" and the Human Language Video series, both have worked for me in opening the eyes of many an undergraduate as well as faculty. The "Standard Social Science Model" to which Pinker refers is alive and well in various incarnations of postmodernism, cultural studies, multiculturalism, and other common themes in contemporary undergraduate programs in the US, especially teacher training programs, in which language as an information processing and embodied cognitive system plays second fiddle to the focus on socioeconomic and cultural determination of language form and content. I've had some success with a bait and switch routine: the old arguments still work to undermine the naive behaviorism which then sets the stage for bringing in cognitive/functionalist questions. It seems to work, at least some of the time.
-david hargreaves From kaitire at UNICAMP.BR Wed Jun 24 23:17:44 1998 From: kaitire at UNICAMP.BR (Andres Pablo Salanova) Date: Wed, 24 Jun 1998 20:17:44 -0300 Subject: New mailing list for South American indigenous languages Message-ID: ** Instrucciones sobre como obtener una descripcion de la lista en castellano y portugues al final de este mensaje. ** Instrucoes para obter uma descricao da lista em espanhol e portugues, no final desta mensagem. Our apologies if you receive this message more than once. ==== LING-AMERINDIA ==== DISCUSSION LIST FOR SOUTH AMERICAN INDIGENOUS LANGUAGES The LING-AMERINDIA list was proposed at the Indigenous langages workgroup at the XIII National Congress of the Brazilian Association of Graduate Programs in linguistics. It is intended for open discussion of problems in the description and analysis of syntax, morphology, phonology and lexicon of South American indigenous languages. Postings should preferably be in Spanish or Portuguese. All postings will be archived and will shortly be accessible through anonymous FTP and WWW. To subscribe, send an e-mail message with SUBSCRIBE in the first line of the body to LING-AMERINDIA-request at unicamp.br. Postings should be sent to LING-AMERINDIA at unicamp.br. ---------------------------------------------------------------------- \__ LING-AMERINDIA / --, Informaciones: envie un mensaje con HELP LING-AMERINDIA | ] en la primera linea a la direccion "Comandos" dada abajo. \ | Informacoes: enviem uma mensagem com HELP LING-AMERINDIA | / na primeira linha ao endereco "Comandos" dado embaixo. \ | \| . Comandos: LING-AMERINDIA-request at unicamp.br . Supervisor: LING-AMERINDIA-owner at unicamp.br ---------------------------------------------------------------------- From amnfn at WELL.COM Thu Jun 25 16:13:51 1998 From: amnfn at WELL.COM (A. Katz) Date: Thu, 25 Jun 1998 09:13:51 -0700 Subject: No subject Message-ID: Jon Aske on Tue, 23 Jun 1998 19:23:36 -0400 wrote: >I think nobody would disagree with the claim that we all learned in >Linguistics 101 that the number of possible sentences in a language >is infinite. I didn't respond immediately, because I wanted to see if anyone else would disagree or have any comments on this point. The number of possible sentences in a language is infinite, only if we assume the following: a) that there is no upper bound on the length of a possible sentence and b) that there isn't a rate of historical change associated with repeated use that would eventually lead to the evolution of a form of the language that is not intelligible to the speakers of the earlier sentences. The second issue is very complicated and would require too lengthy a discussion. But the first issue is pretty simple. Assuming that we are not dealing with a mathematical construct, but are talking about language as it is used by human beings, there are physical limitations to our processing abilities in real time. Give a speaker too long and complicated a sentence, and he will not be able to understand it. While the exact limit may vary from individual to individual, I think that we could establish a factual upper bound that would hold true for the species as a whole. Writing allows for longer sentences, because it permits us more time in which to process and requires less of our storage capacity. 
But even in writing, there is an upper bound past which no one -- not even a well-educated German :-> -- is able to retain in short term memory the variables at the start of a sentence in order to properly appreciate their logical effect on input toward the end of the sentence. So long as there is an upper bound to the length of a possible sentence, then the number of possible sentences in a language is not infinite. (It may be very large, allowing for an immense number of novel sentences to be uttered in one lifetime, but -- even given an immortal speaker -- generating an infinite number of sentences in an unchanging language would eventually lead to repetition.) I'm pretty sure that I am not the first to have thought of this. Can anybody provide me with citations to existing texts in which this argument is made? I would be most grateful. --Aya Katz \\//\\//\\//\\//\\//\\//\\//\\//\\//\\//\\//\\//\\//\\//\\//\\//\\//\\//\\// Dr. Aya Katz, 3918 Oak, Brookfield, Illinois 60513-2019 (708) 387-7596 //\\//\\//\\//\\//\\//\\//\\//\\//\\//\\//\\//\\//\\//\\//\\//\\//\\//\\//\\ From r.woolley at ZAZ.SERVICOM.ES Thu Jun 25 18:10:40 1998 From: r.woolley at ZAZ.SERVICOM.ES (Reuben Woolley) Date: Thu, 25 Jun 1998 20:10:40 +0200 Subject: Novelty Message-ID: A. Katz wrote: > > Jon Aske on Tue, 23 Jun 1998 19:23:36 -0400 > wrote: > > >I think nobody would disagree with the claim that we all learned in > >Linguistics 101 that the number of possible sentences in a language > >is infinite. > > I didn't respond immediately, because I wanted to see if anyone else > would disagree or have any comments on this point. > > The number of possible sentences in a language is infinite, only if > we assume the following: > > a) that there is no upper bound on the length of a possible > sentence > > and > > b) that there isn't a rate of historical change associated with > repeated use that would eventually lead to the evolution of a form of > the language that is not intelligible to the speakers of the earlier > sentences. > Even if we fix an upper bound to the length of a possible sentence based on comprehensibility and varying from one individual to another, I would still suggest that the number of possible sentences in a *living* language is infinite. The discussion so far only seems to be concerned with grammatical generation and has not taken lexis into account at all. What certainly is elementary knowledge is that new words are introduced continually and old words are given new meanings. Therefore, to have a limited upper bound to the number of possible sentences would mean that, as well as trying to measure the limit of intelligiblity, we would have to fix the language at some moment in time. I can't see that there is any interest in doing that. Reuben Woolley c/ Almagro, 5 50004 Zaragoza Spain From meira at RUF.RICE.EDU Fri Jun 26 06:29:13 1998 From: meira at RUF.RICE.EDU (Sergio Meira S.C.O.) Date: Fri, 26 Jun 1998 01:29:13 -0500 Subject: your mail In-Reply-To: <199806251613.JAA12049@well.com> Message-ID: Aya Katz wrote: > >I think nobody would disagree with the claim that we all learned in > >Linguistics 101 that the number of possible sentences in a language > >is infinite. > > I didn't respond immediately, because I wanted to see if anyone else > would disagree or have any comments on this point. 
> > The number of possible sentences in a language is infinite, only if > we assume the following: > > a) that there is no upper bound on the length of a possible > sentence > > and > > b) that there isn't a rate of historical change associated with > repeated use that would eventually lead to the evolution of a form of > the language that is not intelligible to the speakers of the earlier > sentences. Someone once made the following comparison: saying the number of possible sentences in a language is infinite (which I also interpret as implying the possibility of sentences of infinite length) is like saying that a baseball or volleyball match could last forever. And, in both cases, it is true that they actually don't-- neither do sentences of infinite length occur (just imagine the philosophical/pratical problems involved!-- wouldn't fit in this universe, etc.), nor endless baseball/volleyball matches. Yet there is a difference between volleyball (which I, being from Brazil, know better than this arcane game called baseball) and soccer or basketball, where there is a real time limit that must be respected. Sentences are never infinite-- but they don't seem to be bounded either. You can never point to a certain length and say, that's the boundary-- shorter than that is OK, longer than that is impossible. I take this to be the 'grain of truth' behind the entire infiniteness-of-language discussion in formalist circles (i.e. how real-world contingencies force all sentences to end, but as a performance phenomenon rather than as a competence one). When functionalists say, 'sentences aren't infinite!', formalists think they mean that volleyball is like basketball. And when formalists say, 'sentences are theoretically infinite', functionalists think they mean real-life volleyball could go on forever. Is it possible that both sides are exaggerating something? Just a thought... Sergio Meira meira at ruf.rice.edu From jhudson at CUP.CAM.AC.UK Fri Jun 26 09:55:08 1998 From: jhudson at CUP.CAM.AC.UK (Jean Hudson) Date: Fri, 26 Jun 1998 10:55:08 +0100 Subject: novel sentences Message-ID: At 09:13 25/06/98 -0700, Aya Katz wrote: >The number of possible sentences in a language is infinite, only if >we assume the following: > > a) that there is no upper bound on the length of a possible > sentence > > and > > b) that there isn't a rate of historical change associated with >repeated use that would eventually lead to the evolution of a form of >the language that is not intelligible to the speakers of the earlier >sentences. [...} >So long as there is an upper bound to the length of a possible >sentence, then the number of possible sentences in a language is not >infinite. (It may be very large, allowing for an immense number of >novel sentences to be uttered in one lifetime, but -- even given an >immortal speaker -- generating an infinite number of sentences in an >unchanging language would eventually lead to repetition.) I agree that the first issue is relatively simple, but either we ARE dealing with a mathematical construct, in which case it's fair to hypothesize an upper limit to the number of possible sentences immortal speakers of an unchanging language could produce. Or we're talking about language as it is used - in changing ways - by mere mortals. Maybe there's sth I've misinterpreted here, but Aya seems to be claiming the latter while arguing that productivity is finite 'given an immortal speaker and an unchanging language'. 
I'd say the dichotomy is non-existent: it rests upon differences in the focus of interest of the interlocutors in the debate. Logicians and theorists might argue that the number of possible sentences in a language is finite; functional, applied, and descriptive linguists might argue that it is infinite. Both are right, of course. They are using the same make of camera with different lenses. This kind of discussion tends to be irritating. The second issue is much more interesting. 'Repetition' is too often confused with 'lack of originality', but it is only through repetition that language change and revitalization can come about (cf the literature on grammaticalization and, in particular, Haiman 1994 on ritualization). So, there is novelty in the production of sentences never before uttered and, phoenix-like, there is novelty and language renewal in the frequent repetition of sentences (or syntagms). Surely this is evidence in support of the 'infinite' in language production? Jean Hudson ---------------------- Jean Hudson Research Editor Cambridge University Press The Edinburgh Building Cambridge CB2 2RU email: jhudson at cup.cam.ac.uk phone: +44-1223-325123 fax: +44-1223-325984 (http://www.cup.cam.ac.uk/) mail address: Cambridge University Press Publishing Division The Edinburgh Building Shaftesbury Road Cambridge CB2 2RU UK From ph1u+ at ANDREW.CMU.EDU Fri Jun 26 11:35:55 1998 From: ph1u+ at ANDREW.CMU.EDU (Paul J Hopper) Date: Fri, 26 Jun 1998 07:35:55 -0400 Subject: novel sentences In-Reply-To: Message-ID: A footnote to Jean Hudson's remarks on novelty in language: Jean didn't mention her own important recent book 'Perspectives on Fixedness: Applied and Theoretical' Lund University Press, 1998. - Paul From nc206 at HERMES.CAM.AC.UK Fri Jun 26 16:36:55 1998 From: nc206 at HERMES.CAM.AC.UK (N. Chipere) Date: Fri, 26 Jun 1998 17:36:55 +0100 Subject: Novelty Message-ID: The issue of linguistic novelty is a key theme in my current research and I would like to share my thoughts on the issue as well as some of my experimental findings. I hope I will be forgiven for the somewhat long message, but I need to get some feedback. On the face of it, the statement that language users can understand novel sentences appears obvious and redundant. However, the statement serves an important function of constraining theories about the nature of linguistic knowledge and ultimately, about the nature of the human mind. The basic argument (Fodor & Pylyshyn, 1988) is as follows: If knowledge of language is considered to be a list of sentences, then there is no way to account for the ability of native speakers to produce and understand sentences they have never heard before. On the other hand, this ability can be explained if knowledge of language is seen as an infinitely generative set of grammatical rules. And if it is accepted that native users of a language possess generative grammars, then certain important constraints on theories of cognitive architecture must be observed. Without going into the details, observing such constraints leads to the hypothesis that the mind has the general architecture of a digital computer. So when it is said that native speakers of a language can understand novel sentences, a deeper statement is being made, it seems to me, about the nature of linguistic knowledge and about the nature of the human mind. However, it doesn't follow from the fact that native speakers can understand novel sentences that knowledge of language takes the form of a generative grammar.
Every day, human beings do things they have never done before but this ability does not lead to the conclusion that their actions are the product of generative rule systems. It's quite reasonable to suppose that the ability to deal with novel situations depends on previous experience and that novel situations will become more difficult to deal with the more they stray from the range of an individual's experience. This line of thinking forms the basis of an experiment which I carried out to test the connection between novelty and generativity. According to the line of thinking outlined in Fodor & Pylyshyn (1988), all the sentences of a language belong to a generated set, and they should be equally comprehensible to native speakers. That is to say, since what a native speaker knows about his or her language is a set of rules capable of interpreting and producing any sentence in the language, a native speaker should be able to understand all possible sentences in his or her language equally well, provided that performance factors are taken into account. On the other hand, if linguistic knowledge, like other kinds of knowledge, depends on experience, then native speakers of a language should find familiar sentence types easier to understand than unfamiliar ones. I compared the ability of three groups of subjects to understand grammatically unusual sentences under conditions in which memory load was eliminated. Group 1 consisted of graduate native speakers of English, Group 2 consisted of graduate non-native speakers of English and Group 3 consisted of non-graduate native speakers of English. The subjects were asked to answer comprehension questions about sentences with highly unusual syntactic structures, such as: 1. The doctor knows that the fact that taking good care of himself is essential surprises Peter. example question: What does the doctor know? 2. The bank manager will be difficult to get the convict to give a loan to. example question: Who will find it difficult to do something? 3. The lady who Peter saw after overhearing the servant proposing to dismiss had lunch in a cafe. example question: Who might be dismissed? (These sentences may strike many as ungrammatical, but in fact they are simply unfamiliar combinations of familiar sentence types (adapted from Dabrowska, 1997)). I obtained both comprehension and reading time data from the experiment, but I will mention only the comprehension data. The key results were that the non-native graduates obtained the highest scores, followed by the native graduates, with the lowest scores coming from the native non-graduates. The native non-graduates were also most affected by plausibility and often ignored syntactic constraints whereas the non-native graduates were least affected by plausibility and showed the greatest mastery of syntax. The non-native graduates, by the way, had learned English largely through formal instruction. Most of them were also speakers of East European languages, which, I have been told, is one possible explanation for their facility with complex syntax. However, all groups performed equally well on control sentences, which were formed out of familiar sentence types. These results give empirical support to the logical argument that there is no necessary connection between novelty and generativity. It is quite possible to generate a very large number of novel sentences out of a small number of familiar sentence types.
The fact that most native speakers can understand such sentences does not entail that they can readily understand all possible sentences in the language. In other words, being a native speaker of a language and being able to understand novel sentences in that language does not entail possession of a generative grammar of the language. More details about the experimental design, materials, procedure and results are documented in an experimental report. The report also reviews previous psycholinguistic findings which indicate that native speakers of English often lack full grammatical productivity, and that education appears to be an important variable in grammatical skill. The main argument developed in the report is that linguistic ability shares many of the key traits of skilled performance and can be accounted for without recourse to an infinitely generative set of grammatical rules. I am keen to have feedback on the report, which I can make available to anyone who is interested. Ngoni Chipere Darwin College University of Cambridge From amnfn at WELL.COM Fri Jun 26 17:39:32 1998 From: amnfn at WELL.COM (A. Katz) Date: Fri, 26 Jun 1998 10:39:32 -0700 Subject: No subject Message-ID: I'm glad to see that my last posting generated some interesting responses. By and large, I do think that one point may have been misunderstood. I was not arguing that the length of a sentence in natural language is limited by the mortality of its speakers or the amount of time in the day that they can devote to speaking. While that point is valid, I would tend to agree with Jean Hudson that it is not particularly interesting. My focus was on the length limitation imposed on a sentence by the processing abilities of speakers and hearers. Sergio Meira observes: "Sentences are never infinite-- but they don't seem to be bounded either." My point is that if we are talking about natural language processing in real time -- they ARE bounded. (Although it is unclear precisely where the boundary lies, and it would require a considerable quantity of experimental data to pinpoint.) Using Sergio's analogy of baseball and volleyball matches, I would point out that the unit more comparable to a sentence in these events is probably the amount of time the ball can be kept in the air without touching the ground, not the amount of time that the game can go on. While our lives are finite and the amount of time we spend talking is limited, that is not the real limitation on the length of a sentence. The President of the United States could give a two hour speech, and people would take time out from their schedules to listen to it, but what is the likelihood that the whole speech would consist of a single sentence? Or take the Mark Twain joke about a multi-volume opus in German in which all the verbs appear in the last volume. That would never happen in real life, (not even in German), because such a work, while it might be logically meaningful, could not be processed by any human being. We may occasionally listen to a long speech or read a very long book, but in order for it to be comprehensible, it must be broken down into smaller, self-contained parts. That is a limitation placed on language by human cognition. The reason this upper bound is very interesting to me is that it has profound implications for language change and grammaticalization. I would be most grateful for any of you who could provide citations to works on this subject or who might have expertimental data that would shed light on the issue. 
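Rough numbers make the "bounded but vast" point concrete. The figures below are assumptions chosen purely for illustration, not anything reported in this thread: with a vocabulary of V word forms and a processing ceiling of L words per sentence, the number of distinct word strings is at most V + V^2 + ... + V^L, which is finite but astronomically large. A few lines of Python show the order of magnitude.

    # Back-of-the-envelope bound on the number of word strings under a length cap.
    # V and L are invented for illustration; only the order of magnitude matters.

    V = 10_000   # assumed number of word forms in the vocabulary
    L = 100      # assumed upper bound on sentence length, in words

    bound = sum(V ** n for n in range(1, L + 1))   # V + V^2 + ... + V^L
    print(f"at most about 10**{len(str(bound)) - 1} word strings")   # roughly 10**400

A bound of that size is no practical obstacle to novelty; the interest of the upper limit, as argued above, lies in what it does to processing and grammaticalization rather than in any danger of running out of new sentences.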
--Aya Katz \\//\\//\\//\\//\\//\\//\\//\\//\\//\\//\\//\\//\\//\\//\\//\\//\\//\\//\\// Dr. Aya Katz, 3918 Oak, Brookfield, Illinois 60513-2019 (708) 387-7596 //\\//\\//\\//\\//\\//\\//\\//\\//\\//\\//\\//\\//\\//\\//\\//\\//\\//\\//\\ From meira at RUF.RICE.EDU Fri Jun 26 19:09:02 1998 From: meira at RUF.RICE.EDU (Sergio Meira S.C.O.) Date: Fri, 26 Jun 1998 14:09:02 -0500 Subject: your mail In-Reply-To: <199806261739.KAA22051@well.com> Message-ID: > > My focus was on the length limitation imposed on a sentence by the processing > abilities of speakers and hearers. Sergio Meira observes: "Sentences are > never infinite-- but they don't seem to be bounded either." My point is that > if we are talking about natural language processing in real time -- they ARE > bounded. (Although it is unclear precisely where the boundary lies, and it > would require a considerable quantity of experimental data to pinpoint.) How different is it to say that sentence length is bounded by the processing abilities of speakers and hearers from the old claim that this is a 'performance phenomenon'? And, to me, it seems fascinating that there should be a limit, but that it is hard to pinpoint... I expect there to be individual differences there, too. And the differences might also depend on sentence structure: certain kinds of clauses will probably be more difficult to lengthen infinitely (though they should be, theoretically, just as 'lengthenable' as the others). Sergio Meira meira at ruf.rice.edu From dsoliver at EARTHLINK.NET Fri Jun 26 19:50:53 1998 From: dsoliver at EARTHLINK.NET (Douglas S. Oliver) Date: Fri, 26 Jun 1998 12:50:53 -0700 Subject: Novel Sentences Message-ID: Dear Funknetters, I have been following this discussion with some interest but not with the concerns expressed so far (unless I have missed something). Over the last two + decades, many functionalists have devoted a good amount of time questioning the wisdom of giving so much weight to the sentence as a primary unit of analysis. Wally Chafe and others have done a good job of demonstrating the value of using language segments based on tone/prosody, which do have very real constraints. I would like to ask why this focus on the sentence has been renewed. This has brought us back to philosophical discussions that often only work to remove us from functional concerns. I would like to ask how we might bring cognitive, biological, social, cultural, etc. concerns into the discussion, using real discourse examples. Please don't misunderstand me, I find the discussion so far fun and generally interesting; I just wonder where the functional perspective has gone. -- Douglas -- |~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~| |Douglas S. Oliver | |Department of Anthropology | |University of California | |Riverside, CA 92521 | |e-mail: dsoliver at earthlink.net | | or: douglaso at citrus.ucr.edu | ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ From alphonce at CS.UBC.CA Fri Jun 26 20:59:55 1998 From: alphonce at CS.UBC.CA (Carl Alphonce) Date: Fri, 26 Jun 1998 13:59:55 -0700 Subject: Novel Sentences Message-ID: I have been lurking in the background, reading the discussion with interest. I am afraid I don't quite see what the real issue here is. If I recall correctly, the discussion started with a question regarding the claim that people can produce novel sentences. Since then it seems to have moved on to a discussion of whether or not the set of sentences in a language is finite or infinite.
I don't think it can be disputed that the number of sentences uttered by a person, or uttered by all persons in all of history is finite. The number of speakers is finite. Each speaker has a finite lifespan. No sentence is of infinite length. And so on. At any point in time it is (theoretically) possible to construct the (finite) set of all natural language sentences ever uttered by anyone in any language. The question of the nonfiniteness of such a set comes into play when you want to construct a theory to account for those sentences. By assuming that the set is in principle infinite, you can use recursive rules to capture significant generalizations about the structure of the sentences in the set. If you insist that the set is finite, then recursive rules cannot be permitted (without ad-hoc constraints on their applicability). Because people do not in fact use or comprehend sentences of arbitrary length, it is not enough to have only a theory which permits arbitrarily large sentences. But this is what the competence performance distinction is all about. A competence theory is about our idealized capacity for language, while a performance theory can be viewed as constraints of a non-grammatical nature which limit what we are able to produce and comprehend. These are abstractions that we use when investigating language. These abstractions happen to be very useful, but they are not themselves fact. Other abstractions yield theories with different properties, empirical coverage, and predictive power. Perhaps I am just being obtuse, but what is the real issue that people are discussing? Carl -- Carl Alphonce / email: alphonce at cs.ubc.ca Department of Computer Science / phone: (604) 822-8572 University of British Columbia / FAX: (604) 822-5485 Vancouver, BC, CANADA, V6T 1Z4 / http://www.cs.ubc.ca/spider/alphonce/home From amnfn at WELL.COM Fri Jun 26 23:15:37 1998 From: amnfn at WELL.COM (A. Katz) Date: Fri, 26 Jun 1998 16:15:37 -0700 Subject: No subject Message-ID: 1. ON COMPETENCE/PERFORMANCE Sergio Meira and Carl Alphonce are correct in noting that limits on sentence length due to processing difficulty are often ascribed to performance errors. I do not find this a satisfactory solution for the following reason: Generativists make a big fuss over the essentially and uniquely `human' language instinct and capacity. Carl Alphonce echoes this sentiment when he says: >A competence theory is about >our idealized capacity for language, while a performance theory can be >viewed as constraints of a non-grammatical nature which limit what we >are able to produce and comprehend. OUR idealized capacity? But what is described is an abstract construct of language, totally divorced from the limitations of human potential. `Competence theory' when used in this way is about the flexibility built into any abstract code of information (DNA code, computer code, etc.) regardless of whether humans have any special inborn capacity for decoding it. It's about the universal rules of information theory, not about a human language instinct. As such, the `competence/performance' dichotomy is a misnomer, and not a trivial one. If I misspeak and accidentally utter a sentence where the verb does not agree with the subject, although on a good day I have no difficulty with that task, then this is a performance error.
But if I am unable to comprehend a sentence in my native language due to its complexity, even on the best of days -- and if all humans consistently manifest the same disability -- that's a competence problem, by any normal definition of competence. If we buy into that other, specialized meaning of competence, we give up the question of innateness before we've even begun. 2. WHY DOES IT MATTER HOW LONG A SENTENCE CAN BE? Douglas S. Oliver wrote: "I would like to ask how we might bring cognitive, biological, social, cultural, etc. concerns into the discussion, using real discourse examples." The application of the upper bound on unit length (whether you view a sentence as a logical proposition or use prosodic evidence of sentence boundaries) is important to grammaticalization theory because the drive to maintain optimal length is one of the factors responsible for the reduction and fusion of formerly independent elements into new grammatical patterns. Ultimately this limitation on the human capacity to process shapes acceptable grammatical configurations -- and patterns of grammatical change. Show me any instance of grammaticalization -- and chances are that limits on length had something to do with it. That's why I'm looking for both theoretical and experimental work on the subject of the upper bound on sentence length. --Aya Katz \\//\\//\\//\\//\\//\\//\\//\\//\\//\\//\\//\\//\\//\\//\\//\\//\\//\\//\\// Dr. Aya Katz, 3918 Oak, Brookfield, Illinois 60513-2019 (708) 387-7596 //\\//\\//\\//\\//\\//\\//\\//\\//\\//\\//\\//\\//\\//\\//\\//\\//\\//\\//\\ From nrude at UCINET.COM Sat Jun 27 00:19:55 1998 From: nrude at UCINET.COM (Noel Rude) Date: Fri, 26 Jun 1998 16:19:55 -0800 Subject: Infinity Message-ID: Fellow funknetters, Thought provoking--especially Aya Katz' notion that open-endedness is inherent in any informational system. Perhaps the main distinction that unites us functionalists is the communicative theory of language: language codes and communicates information, complex information. That's why it exists, and that's why it's open-ended. Now when the structuralists focus on the clause or sentence ("the minimal unit of information"), those of us into texts should know that every clause has a unique context, and every text is unique (except, of course, where us old folks get into these "scripts"). The structuralists emphasize an infinity of sentences so they can argue for syntactic structure. We need open-endedness too because language communicates, and therefore we also need structure. One found this jabber "irritating", no doubt because of the nit-picking over infinity. Remember, infinity can never be traversed. There will never be an infinity of time. You can never pile up an infinity of words, clauses, sentences, texts. The point isn't that you can never arrive there. It's that we're headed in that direction. A large amount of human language IS novel--that should be the point. Noel From nrude at UCINET.COM Sat Jun 27 00:54:52 1998 From: nrude at UCINET.COM (Noel Rude) Date: Fri, 26 Jun 1998 16:54:52 -0800 Subject: Upper Bounds Message-ID: Howdy again, So you're looking for theoretical and experimental work on the upper bound on sentence length. Well, I don't do that kind of stuff, but you might start with things like valence theory which, for example, suggests an upper limit of three arguments for any verb. Then there's stacking up modifiers, and there's subordination. Are there languages with built-in structural limits here? If so, does this correlate with cog-sci tests?
Suppose you are able to describe pretty well these upper limits. Then I'd like to know why. Would it all be neural capacity? Or might there even be--as you intimate--limits imposed by information theory? Might we sometimes be too worried about the hardware and not worried enough about the software? The limit on valence probably relates to the perception of events, to the notions of volition (agent), consciousness (dative goal), and participants lacking either (patient). It also seems to correlate with three levels of topicality. What I'd like to know is whether all this derives from the limited capacity of this idiosyncratic machine (our brain), or whether it is how the world really is. And information theory--could it be any other way? What really is the relationship between the world, information, and the machine (the brain)? Noel From meira at RUF.RICE.EDU Sat Jun 27 08:26:16 1998 From: meira at RUF.RICE.EDU (Sergio Meira S.C.O.) Date: Sat, 27 Jun 1998 03:26:16 -0500 Subject: Upper Bounds In-Reply-To: <359442C4.559C@ucinet.com> Message-ID: Restrictions on 'event complexity' can bear on sentence length-- I recall from discussions on psychology that there is some limit to the number of independent entities that we can keep active at the same time in our minds (seven is the number I remember), which may imply an upper limit to the number of independent participants that can co-exist in a sentence. So, if we want to talk about more than seven independent participants, it would seem that we'd have to use more than one sentence; we'd be forced to stop a sentence even before its length in sheer number of words became unbearable. (Colin Harrison at Rice probably has the original references, I believe, in case you don't). This is, however, 'event complexity' rather than sheer sentence length. I agree with Aya and Noel that any real coding system is subject to length constraints-- e.g. DNA sequences cannot be infinite. But this reflects, as Noel pointed out, a limit of our universe. The only things that can be truly infinite exist in the realm of abstractions par excellence -- mathematics. It is true that the series of natural numbers is infinite; but this is a consequence of the fact that Peano's axioms for natural numbers have 'adding one more' as an operation that can be repeated forever. There is nothing in the universe, not even subatomic particles, that you could keep adding to a pile forever. So, Peano's axioms are an abstraction-- mathematicians assume a world in which 'adding one more forever' is thinkable. (Incidentally, Quantum Mechanics with wave-particle dualities and indeterminacies challenges the possibility of 'adding one more' forever at the subatomic level from another viewpoint-- but this is a different story). This shows us what is going on when formalists want to see infinite-length sentences as a theoretical possibility. It would seem that there are certain aspects of language that are the way they are because the world is the way it is-- i.e. limits set by physics, chemistry, biology, etc., rather than by communication alone. Formalists isolate these aspects of the real world; they want to see 'language standing alone', so they put it in a separate world, where only linguistic factors count. The relationship is not unlike that between mathematics and the real world... You gain the notion of infinity, for whatever theoretical advantages it might buy you, but you have to admit that everything else is less important.
Formalists have to say that all of reality, in its entirety, with all its physical, chemical, biological etc. restrictions is 'contingent', 'less important', 'non-linguistic'-- that 'reality' is 'performance'... Isn't that an interesting world... Sergio Meira P.S. I wondered if anyone knows whether real-world limitations for other coding systems also have consequences for their functioning and their structures-- i.e. any consequences of length constraints on DNA sequences for genetics? From amnfn at WELL.COM Sat Jun 27 18:43:20 1998 From: amnfn at WELL.COM (A. Katz) Date: Sat, 27 Jun 1998 11:43:20 -0700 Subject: No subject Message-ID: 1. LIMIT ON PARTICPANTS Noel Rude wrote: > So you're looking for theoretical and experimental work on the upper >bound on sentence length. Well, I don't do that kind of stuff, but you >you might start with things like valence theory which, for example, >suggests an upper limit of three arguments for any verb. Then there's >stacking up modifiers, and there's subordination. Are there languages >with built in structural limits here? If so, does this corelate with >cog-sci tests? I don't normally do that sort of thing myself, which is why I'm asking. I remember from my field work in Pangasinan that we tried to cram as many participants as we could into a sentence, and there was definitely a limit on the number you could get per clause. (1) si Anita impakan tomai mangga ed posa ed ketsara 'Anita fed the mango to the cat with a spoon' Example (1) was a possible sentence using a single clause, but I believe the informant didn't feel as comfortable with it as with (2). (2) si Anita impakan toma i mangga ed posa ya inosaran to i ketsara 'Anita fed the mango to the cat with a spoon' In (2), the `with a spoon' part is in a separate clause: `ya inosaran to i ketsara' meaning roughly `a spoon was used'. It could be that one of the reasons (1) was awkward was that you had to use `ed', an oblique marker, twice. I suppose that having a limited number of separate case or focus marking devices in a language might be one reason for the limit on participants per clause. But the fact that the number of these markers is limited may be directly related to cognitive constraints. I'd be happy to hear from people who have done research on the subject of grammatical limits on clause length and their relation to processing requirements. 2. UNIVERSAL LIMITS ON CODING VS. HUMAN LIMITS NOEL RUDE also wrote: >What I'd like to know is whether all this derives from the limited >capacity of this idiosyncratic machine (our brain), or whether it is >how the world really is. I think there are limits on communication that are universal to all abstract codes of information. I also believe that human cognitive capacity is not anywhere near those limits. It's very important to distinguish the two issues, lest we be tempted to make far flung claims about the nature of our biological language processing apparatus that are actually based on much more general principles. SERGIO MEIRA wrote: >I wondered if anyone knows whether real-world limitations for other >coding systems also have consequences for their functioning and their >structures-- i.e. any consequences of length constraints on DNA sequences >for genetics? I don't know the answer to that, but I suspect that it's `yes'. Interesting aside about novel sequences and DNA code. With the exception of mono-zygotic twins, every human being has a unique karyotype. (I'm talking about genetically normal specimens, not possessing any sort of mutation.) 
The number of possible `grammatical' combinations of genes that would produce a normal human being is vast -- but finite! That number is so large that, given the limits on population growth placed on human beings by the resources of the earth, chances are the sun would die out before we had a random repetition of a human being's genetic make-up. By the same token, the finite number of available sentences in an unchanging language is no practical bar to occurrence of novel sentences. But I do think that population size has something to do with the rate of change in the structure of a language. Small isolated communities tend to be linguistically conservative. Languages spoken by vast populations are more given to change. Is there a direct relation between the number of sentences uttered in a language and its rate of change? If so, is this related to the upper bound on sentence length, coupled with the desire for originality? Or is it just a matter of the fabric of the language wearing down and transmuting with use? Are there other possible explanations? --Aya Katz \\//\\//\\//\\//\\//\\//\\//\\//\\//\\//\\//\\//\\//\\//\\//\\//\\//\\//\\// Dr. Aya Katz, 3918 Oak, Brookfield, Illinois 60513-2019 (708) 387-7596 //\\//\\//\\//\\//\\//\\//\\//\\//\\//\\//\\//\\//\\//\\//\\//\\//\\//\\//\\ From ebatchel at EMAIL.GC.CUNY.EDU Sun Jun 28 01:47:26 1998 From: ebatchel at EMAIL.GC.CUNY.EDU (Eleanor Olds Batchelder) Date: Sat, 27 Jun 1998 21:47:26 -0400 Subject: Length of Sentences Message-ID: Isn't there a problem with deciding what is and isn't a sentence, even before we discuss their lengths? In writing, sentences are generally denoted by a period (full stop), but in speaking, it is less clear. If an utterance trails off with a "and then..." and then silence, is this the end of a "sentence" or just the end of an utterance? If an entire spoken narrative, lasting perhaps 15 minutes, is linked with "and then," "and uh," "and so," "so then," etc. with each clause (information chunk) linked to the next in a loose way, where are the sentence boundaries? Can we say that the entire story is a single sentence? Am I overlooking something obvious? In what sense is the length of "sentences" a decidable issue (let alone a cognitively relevant one)? Eleanor Olds Batchelder From amnfn at WELL.COM Sun Jun 28 03:40:17 1998 From: amnfn at WELL.COM (A. Katz) Date: Sat, 27 Jun 1998 20:40:17 -0700 Subject: No subject Message-ID: Eleanor Olds Batchelder wrote: >Isn't there a problem with deciding what is and isn't a >sentence, even before we discuss their lengths? In writing, >sentences are generally denoted by a period (full stop), but in >speaking, it is less clear. If an utterance trails off with a >"and then..." and then silence, is this the end of a "sentence" >or just the end of an utterance? If an entire spoken narrative, >lasting perhaps 15 minutes, is linked with "and then," "and uh," >"and so," "so then," etc. with each clause (information chunk) >linked to the next in a loose way, where are the sentence >boundaries? Can we say that the entire story is a single >sentence? Language has been around longer than writing, but writing has been around considerably longer than punctuation. And a basic unit (often thought of as a sentence) is a functionally relevant factor in comprehension and processing of ancient texts, as well as modern day spoken utterances. The Old Testament in the original Hebrew is not punctuated. It is broken into verses, but the verses are not necessarily coterminus with sentence boundaries. 
Sometimes a sentence ends in the middle of a verse. Sometimes a sentence continues into the next verse. If the reality of sentences were only a question of arbitrarily marked punctuation as a formal literary device, then we could dispense with identifying sentence boundaries. But in fact, where the sentence boundary is has implications for comprehension. As a speaker of Hebrew, I instinctively identify where the sentence breaks are. When I read aloud, I pause there. Other prosodic cues are also involved, such as sentence intonation. When I taught Biblical Hebrew, sometimes beginning students who were reading a verse had difficulty identifying the sentence break in the middle, and I had to point it out to them. After I had done so, they were able to understand the verse. Before it was pointed out, they had trouble parsing out the grammatical roles. (Despite the fact that Hebrew is a highly inflected language.) How do native speakers identify sentence breaks in unpunctuated written texts? Through grammatical marking, contextual cues and repeating stylistic patterning. How do we identify sentence breaks in spoken language? Through intonational patterning, pauses, extralinguistic cues --- plus all of the above. Do people ever get confused about where the sentence break might be? Sure. But the confusion merely highlights the cognitive signifcance of the sentence break as an information bearing variable. As in the old standby: "No don't stop" interpreted variously as "No! Don't! Stop!" or "No, don't stop!" Of course, you have a point that merely adding a conjunction to the beginning of every sentence does not create a one sentence text. (The Old Testament could be interpreted that way in many places, but we all know that those aren't real conjunctions. They're discourse markers and temporal inverters.) The real measure of a sentence from a cognitive perspective is not determined on strictly formal grounds. Speakers give ample clues of what their real processing units are by body language and prosody -- and those clues do not always agree with what a formal grammarian might have told us about where the sentence begins and ends. Punctuation in modern writing is an idealization of a cognitive phenomenon. But from a communicative perspective, the sentence as a basic unit is very real. --Aya Katz \\//\\//\\//\\//\\//\\//\\//\\//\\//\\//\\//\\//\\//\\//\\//\\//\\//\\//\\// Dr. Aya Katz, 3918 Oak, Brookfield, Illinois 60513-2019 (708) 387-7596 //\\//\\//\\//\\//\\//\\//\\//\\//\\//\\//\\//\\//\\//\\//\\//\\//\\//\\//\\ From chafe at HUMANITAS.UCSB.EDU Sun Jun 28 23:30:17 1998 From: chafe at HUMANITAS.UCSB.EDU (Wallace Chafe) Date: Sun, 28 Jun 1998 16:30:17 -0700 Subject: Novelty vs. Expandability (of What?) Message-ID: I'd like to express some support for Doug Oliver and Eleanor Batchelder. Sorry this has to be long. It's a good idea at the beginning, I think, to separate the issues of (1) novelty, (2) expandability (how long or complex "sentences" or whatever can be), and (3) the question of what kinds of units are involved anyway (should we always be talking about sentences?). So far as novelty is concerned, in the middle of May I heard someone say "Frank Sinatra died." I don't believe I had ever heard that sentence before, and yet I knew what it meant (up to a point). And I suppose that the person who uttered it had never said it before (at least before that day), and yet she was able to produce it without difficulty (or at least without grammatical difficulty). 
It was novel, but hardly very long or complex. There doesn't seem to be any marvelous new insight here. Why would anyone find anything surprising about this kind of novelty? If I understand what Esa Itkonen was getting at with his original question, it was that the idea expressed as "Frank Sinatra", the idea of dying, and the way those ideas were put together to say "Frank Sinatra died" were all totally familiar to speakers of English. The only thing that was new was their combination. There's one more thing that should be said regarding novelty. There's an important sense in which just about everything people say is novel, simply because every human experience is unique. If I say, for example, "I saw a deer this morning," I may have used exactly that sentence (as a sequence of words etc.) before, but on each occasion the experience communicated was different: a different mental image, a different deer, perhaps a different emotion, probably a different location, and certainly a different time. We have to plug each unique experience into our limited linguistic resources in order to talk, and to some extent in order to think (and, yes, we do that differently in different languages). The point is that the experiences underlying language are always in some respects novel, even though the form of the language may be the same. Such considerations aside, if examples like "Frank Sinatra died" were the whole story, one could think of language as providing a number of patterns and a (very large) number of lexical items that could be inserted into those patterns. It's not that one learns language just by memorizing sentences, although I'm sure that happens too. People don't get enough credit for their huge memories, which certainly extend beyond individual words. But of course one also learns patterns and lexical items and is able to combine them. Although those patterns and lexical items can change and be augmented over time, at any one moment in an individual's life the number of possible combinations is vast, but finite. Plenty of "novel sentences" are easily available without much fancy footwork. What Chomsky came up with in the 1950s was the idea that the patterns could be expanded without limit, and not diachronically but synchronically. At that point there was already a certain gap between theory and observation. While the kind of novelty illustrated by "Frank Sinatra died" is intuitively obvious and can in fact be observed all around us, the kind of novelty provided by infinite recursion is by definition impossible to observe. What do we find when we examine how people actually speak? We find for one thing that sentences are something of a problem, if nothing else because prosody and syntax don't always coincide. Ignoring that problem, we do find that sentences, whatever they may be, vary from very short to very long. My own finding (and I've looked at a lot of ordinary speech with this in mind) has been that people insert sentence boundaries whenever they decide (on the fly, often for some passing reason) that some kind of closure has been reached in the flow of ideas. It's an on-line decision and, judging from repetitions of the same content by the same person on different occasions (a very worthwhile kind of data to examine), sentences don't seem to, or need not, reflect units of mental storage. On-line decisions about closure are interesting, but there's more to language structure than that. 
Sentences are intermediate in length between smaller prosodic phrases (expressing foci of active consciousness) and expressions of larger discourse topics (with material in peripheral consciousness), both of which are subject to interesting cognitive constraints that don't apply to sentences per se. Prosodic phrases ("intonation units") are subject to what I've called the one-new-idea constraint, which keeps them from getting very big. I think it has a much more important effect on the shape of language than George Miller's 7 +/- 2 constraint, as I've tried to show in numerous places. Topics may be short or long, but the interesting thing is that, once a topic has been opened in a conversation, there's an expectation that it will sooner or later be closed, after which another one can begin. Opening a topic is like creating an open parenthesis that demands eventual closure. Topics are what keep language moving. There's a great deal to be said about this, but here I might just point out here that the ludic analogy is more relevant to topics than to sentences. The length of a tennis game seems especially apt. Leaving aside the prolongation of a game through repeated deuces, how many times can the ball cross the net before a point is scored? Limits on skill and stamina would seem to keep the number within asymptotic bounds, but any arbitrary limit might in theory always be extended by one. Topics are like that. No topic goes on all day, but it's impossible to assign anything but an arbitrary limit to topic size. Sentences are usually properly contained within a topic, but on rare occasions they may expand to be coextensive. In terms of clauses, that can happen in a trivial way through the use of "and" to link every clause. Prosodically it can be done by postponing a falling pitch until the topic is concluded. I've observed this with 10-year-old boys, when they repeat the currently popular question intonation at the end of every phrase before finally letting their pitch fall when I'm about ready to go home. I found it also with a couple of our "pear stories", where the film was described with what sounded like a shopping list of events that didn't end until the narrative was finished. My general plea is that we distinguish novelty from expandability, and that we move beyond the rather special and sometimes puzzling strings of words that have been called "sentences," as if they were all that language had to offer, to a broader concern for the richness of what happens when people actually speak. --Wally Chafe From amnfn at WELL.COM Mon Jun 29 13:50:24 1998 From: amnfn at WELL.COM (A. Katz) Date: Mon, 29 Jun 1998 06:50:24 -0700 Subject: No subject Message-ID: I agree with the general purport and spirit of Wally Chafe's remarks, but I'd like to comment on the context of this specific debate. It is undoubtedly true that the sentence is not the be all and end all of language, and there are many other aspects to explore that are perhaps much more interesting. There are even speakers who make very scant use of this particular linguistic unit. But sentences do have communicative reality, and every once in a while it's a good idea to remind ourselves of this rather basic fact. With all the revisionism of recent years, there are actually new linguists coming into the field who may believe that sentences are a totally arbitrary unit devised by formalists to confound us. The sentence is a classic concept, and along with other ancient artifacts, it may not be very fashionable at the moment. 
I'd like to draw an analogy from poetry. The modernist movement has left behind metrical form and eschews rhyme. Many readers are encouraged to assume that what distinguishes poetry from prose is how the words are arranged on a page -- just as many laymen are led to believe that you know a sentence is over when you get to the period. The fallacy in such a position was brought home to me one day as a child when in the middle of a novel, which was written in prose, I suddenly stumbled onto a poem embedded into a paragraph. There was nothing in the way the thing was typeset or arranged on the page to indicate that it wasn't just another chunk of prose. There were no line breaks, just sentences ending in periods, followed by more sentences ending in periods -- but the thing scanned and rhymed and I was amazed, because it cried out to me: "I'm a poem!" I felt the meter; I could have told you where the line breaks should have gone -- and I suddenly realized that how it looks on the page has nothing to do with whether it's poetry or prose. The overwhelming reality of the sentence as a unit was brought home to me when I started teaching beginning language courses. I saw firsthand how without a basic understanding of the language, students could not recognize sentence boundaries. And without the sentence boundaries, they could not decipher the propositional value of an utterance, even when they recognized all the words. But what happens when people actually speak? When you go out into the field and record what people are really doing, isn't that when the scales fall from your eyes and you realize that sentences are just an illusion? Some speakers slur their speech so badly that even word boundaries are very hard to make out. Some never finish a sentence, but let it trail off, leaving it up to their interlocutor to complete the utterance. There are those with conflicting prosodic and grammatical cues as to where the sentence ends. And yes, some informants when asked to relate a story will give you a laundry list, instead. So what? Whoever said we all had to be equally good at it? Well, maybe Chomsky, with his notions of absolute native speaker competence. But there's no functionalist principle to suggest absolute equality of facility with language. Ngoni Chipere apparently has experimental data to show that native speakers do not necessarily out-perform foreigners in deciphering complex novel sentences. And if language use is related to generalized processing ability, there's no reason to suppose that they should. Fieldworkers know that not every native speaker informant is equally good. And the informants can tell you that themselves. They recognize who is a more eloquent speaker among them or a better storyteller. That, I think, is the ultimate measure of the reality of any linguistic unit: not whether every speaker makes use of it, but whether other speakers find it easier to understand those who do. There are normal, healthy, intelligent people in every community who are nevertheless incapable of completing a sentence. They get along fine, because there's a lot more to human communication than propositional value coded on the sentence level. But other speakers invariably find it much easier to understand those who enunciate clearly, produce complete sentences and use prosodic cues to mark sentence breaks. I fully agree that we don't have to confine our inquiry as linguists to the sentence level and that there are many discourse related issues that are far more interesting. 
I think that we should also agree that it's okay to talk about sentences some of the time. They have as much reality as any other unit. --Aya Katz \\//\\//\\//\\//\\//\\//\\//\\//\\//\\//\\//\\//\\//\\//\\//\\//\\//\\//\\// Dr. Aya Katz, 3918 Oak, Brookfield, Illinois 60513-2019 (708) 387-7596 //\\//\\//\\//\\//\\//\\//\\//\\//\\//\\//\\//\\//\\//\\//\\//\\//\\//\\//\\ From cgenetti at HUMANITAS.UCSB.EDU Mon Jun 29 16:51:17 1998 From: cgenetti at HUMANITAS.UCSB.EDU (Carol Genetti) Date: Mon, 29 Jun 1998 09:51:17 -0700 Subject: sentences Message-ID: Dear funknetters, Pardon me for coming in on this discussion late -- with a six-month-old baby and two students about to leave for their first field trip to Nepal, I've been deleting list mail without reading it. BUT, I'm very interested in the notion of sentence and would like to put in my two cents' worth. I work on Indo-Aryan and Tibeto-Burman languages of the Himalayas. These languages are all verb-final, and have the standard features that correlate with verb-final order, especially "clause chaining" or "converb constructions" (whether or not these are really separate is an interesting question). In the languages I have looked at extensively, Dolakha Newar and Nepali, sentence boundaries are quite clear, marked by the presence of a finite verb, after which there may be a number of discourse particles and/or postposed elements. The structure of these sentences may be quite complex, entailing multiple levels of embedding, as well as chaining structures. It is clear that the sentences are unified wholes and that they are significant syntactic units that speakers attend to and manipulate as they produce spoken discourse. I've written two papers (both in press) which examine different issues of sentencehood. (Both papers examine narrative -- conversational data is very interesting, and units are more likely to be left incomplete -- but I still think that the notion of sentence is relevant there as well.) Both papers give a lot of syntactic argumentation for what a sentence is, particularly with reference to embedded quotation. The two papers have different emphases, goals, and languages. In one, co-authored with Keith Slater, we looked at sentences, clause boundaries and intonation. We propose that there are "prosodic sentences", consisting of a series of prosodic units with non-final intonation, and ending in a unit with final intonation (analogous to clause-chaining structures). Prosodic sentences may be internally complex and involve embedding. Prosodic sentences and syntactic sentences generally co-terminate, but there are a number of other interesting patterns as well. We also use the term "narrative sentence" for what seem to be the units speakers most clearly delineate, but which may not have final marking at either the prosodic or syntactic level. This paper contains a complete narrative (a folk rendition of the beginning of the Mahabharata) intonationally transcribed, glossed, and extensively annotated. If anyone wants a copy, just let me know! A second paper, co-authored with Laura Crain, also concerns the sentence. It looks at issues of preferred argument structure in Nepali, and demonstrates that the amount and type of nominal reference is based not on the clause in this language, but on the sentence.
We found that speakers have a preference to make only one overt mention of each referent in a sentence, regardless of the number of times the referent occurs as a verbal argument in the sentence (and regardless of the discourse prominence of the referent). Thus sentences are key units that speakers are aware of and manipulate. This paper backs up the central claims of PAS theory, namely that grammar and discourse patterns are complementary, and shows that the typological facts of Nepali favor the discourse patterns found. This paper is to appear in Du Bois et al. Again, if anyone is interested in obtaining a copy, just let me know. So, regarding the relevance of the sentence, I think language typology is a significant factor. Languages differ considerably in their syntactic patterns, and the relevance of any syntactic unit, in particular the sentence, will vary with the typology. -- Carol Genetti From chafe at HUMANITAS.UCSB.EDU Tue Jun 30 19:33:26 1998 From: chafe at HUMANITAS.UCSB.EDU (Wallace Chafe) Date: Tue, 30 Jun 1998 12:33:26 -0700 Subject: Sentences Message-ID: I'm afraid I introduced a red herring. I don't want to be remembered as a nonbeliever in sentences, and I didn't intend to elicit messages to the effect that "my language does have sentences." I did want to raise a question regarding novelty, and particularly the notion that novelty can be explained only in terms of infinite recursion within sentence structure. Along the way I suggested that there may be other elements in language that have interesting constraints too, and that those constraints may interact with, and perhaps be different in nature from, constraints on sentences. I suspect that this subject is too ramified to be discussed adequately in e-mail messages. --Wally Chafe
From JMARIN at SR.UNED.ES Tue Jun 2 11:34:08 1998 From: JMARIN at SR.UNED.ES (Juana Marin, UNED, SPAIN) Date: Tue, 2 Jun 1998 06:34:08 -0500 Subject: (Fwd) AELCO-SCOLA E-mail List Message-ID: ------- Forwarded Message Follows ------- Date: Thu, 28 May 1998 14:44:11 -0700 (PDT) From: Joseph Hilferty To: cogling at ucsd.edu Subject: AELCO-SCOLA E-mail List Reply-to: Joseph Hilferty ************************************************************ SORRY IF YOU RECEIVE THIS MESSAGE MORE THAN ONCE ************************************************************ AELCO-SCOLA E-MAIL LIST The Spanish Cognitive Linguistics Association (AELCO-SCOLA) has recently started an e-mail list called LingCog. It is open to anybody who might be interested in keeping informed about cognitive linguistics in Spain. You may subscribe to LingCog from your own e-mail account by using the following information: To: Majordomo at fil.ub.es Subject: (Anything) Body: subscribe lingcog Thank you, Joseph Hilferty From clements at INDIANA.EDU Tue Jun 2 19:32:14 1998 From: clements at INDIANA.EDU (J.
Clancy Clements (Kapil)) Date: Tue, 2 Jun 1998 14:32:14 -0500 Subject: studies on the acquisition of possessive pronouns/adjectives (fwd) Message-ID: I'm looking for studies on the acquisition of possessive pronouns/adjectives in any language (L1 or L2). If there's interest, I'd be happy to make a list of the references I receive for Funknet. Clancy Clements From bralich at HAWAII.EDU Fri Jun 5 01:18:43 1998 From: bralich at HAWAII.EDU (Philip A. Bralich, Ph.D.) Date: Thu, 4 Jun 1998 15:18:43 -1000 Subject: 3-D/NLP Demo Message-ID: To help demonstrate the possibilities and the sheer fun of NLP with 3-D animations we have just produced a demo product using characters from Haptek Technologies. The software package allows you to chat with an alien. The main point is to input statements you want to query, such as "You saw the tall dark stranger in the park. The tall dark stranger was carrying a knife." Then you can ask things like, "What was the stranger doing?" "Where did you see the stranger?" And then get the answer straight from the alien's lips. The graphics and speech generation technology from Haptek are very nice and make this a very pleasurable intro to the future marriage of edutainment and NLP. Using this format of course you could input an entire murder mystery. The guys who write scripts for MUDs and MOOs could probably get a hundred stories for this one character alone. It is available from the download section of our web site at http://www.ergo-ling.com. Great Fun! Phil Bralich Philip A. Bralich, Ph.D. President and CEO Ergo Linguistic Technologies 2800 Woodlawn Drive, Suite 175 Honolulu, HI 96822 Tel: (808)539-3920 Fax: (808)539-3924 From meira at RUF.RICE.EDU Sat Jun 6 04:07:32 1998 From: meira at RUF.RICE.EDU (Sergio Meira S.C.O.) Date: Fri, 5 Jun 1998 23:07:32 -0500 Subject: studies on the acquisition of possessive pronouns/adjectives (fwd) In-Reply-To: Message-ID: Dear Funknetters, I was recently told that 'historical-comparative linguistics tends to reconstruct past order from present disorder', so that we always get a nicer, more orderly picture of grammar in the past than in the present. Would anyone happen to know of any references on the topic of whether more disordered states can be reconstructed (e.g. unpredictable alternations, irregularities in form/distribution of morphemes, semantic unpredictability etc.), especially when the present-day state is more orderly?
Sergio Meira meira at ruf.rice.edu From Ziv at HUM.HUJI.AC.IL Sun Jun 7 01:05:00 1998 From: Ziv at HUM.HUJI.AC.IL (Ziv Yael) Date: Sat, 6 Jun 1998 18:05:00 -0700 Subject: Call for Papers+please publicise Message-ID: ------------------------------------------------------------------------------ FORWARDED FROM: Ziv Yael Return-Path: Date: Fri, 5 Jun 1998 01:06:00 +0300 (IDT) Message-Id: Mime-Version: 1.0 Content-Type: text/plain; charset="us-ascii" To: dascal at spinoza.tau.ac.il, elda at bgumail.bgu.ac.il, hbzs22 at post.tau.ac.il, jonathan at research.haifa.ac.il, manor at ccsg.tau.ac.il, mariel at ccsg.tau.ac.il, mskcusb at pluto.mscc.huji.ac.il, sarfati at ccsg.tau.ac.il, shir at bgumail.bgu.ac.il, tamark at construct.haifa.ac.il, ziv at hum.huji.ac.il From: anatbi at post.tau.ac.il (Anat Biletzki) Subject: Call for Papers Call for Papers PRAGMA99 International Pragmatics Conference on PRAGMATICS AND NEGOTIATION June 13-16, 1999 Tel Aviv University and Hebrew University of Jerusalem Tel Aviv and Jerusalem Israel The main theme of this conference is the pragmatics of negotiation, interpreted in a very broad sense. Interlocutors engage in negotiations about every aspect of their interaction - such as floor access and topic selection, contextual assumptions, conversational goals, and the (mis)interpretation and repair of their messages. Topics such as cross-cultural and cross-gender (mis)communications, conversational procedures in disputes and collaborations, argumentation practices, and effects of assumptions and goals on the negotiating strategies of interlocutors are of special interest for this conference. The conference will be interdisciplinary, bringing together pragmaticists, linguists, philosophers, anthropologists, sociologists and political scientists. We are soliciting papers on all issues relevant to the theme of the conference, as well as papers in other areas of pragmatics and dialogue analysis. The conference will include plenary addresses, regular session lectures, and organized panels around any of the relevant topics. Among the plenary speakers: Elinor Ochs (UCLA), Itamar Rabinovitch (Tel Aviv University), Emanual Schegloff (UCLA), Thomas Schelling (University of Maryland), Deborah Schiffrin (Georgetown University), Deborah Tannen (Georgetown University), Ruth Wodak (University of Vienna). Presentation of regular session lectures is 30 minutes long, with a subsequent discussion of 10 minutes. Panels take the form of a series of closely related lectures on a specific topic, which may or may not be directly related to the special topic of the conference. They may consist of one, two or three units of 120 minutes. Within each panel unit a maximum of four 20-minute presentations are given consecutively, followed by a minimum of 30 minutes of discussion (either devoted entirely to an open discussion, or taken up in part by comments by a discussant or discussants). Panels are composed of contributions attracted by panel organizers, combined with individually submitted papers when judged appropriate by the Program Committee in consultation with the panel organizers. Typically, written versions or extensive outlines of all panel contributions should be available before the conference to facilitate discussion. SUBMISSIONS Abstracts for papers and panels should be submitted in the following format: 1. For papers - five copies of an anonymous abstract (up to 300 words). 2. 
For panels - a preliminary proposal of one page, detailing title, area of interest, name of organizer(s) and invited participants, to be sent by August 1, 1998. Organizers of approved panels will then be invited to submit a full set of abstracts, including: a. a brief description of the topic area, b. a list of participants (with full details, see below), c. abstracts by each of the participants, by November 1, 1998. 3. In all cases, a page stating: a. title, b. audiovisual/computer request, and c. for each author: I. Full name and affiliation; II. Current address; III. E-mail address; IV. Fax number. Deadline for submission of abstracts: Nov. 1, 1998. Abstracts may be sent by hard copy, disk, or e-mail to Pragma99, Faculty of Humanities, Tel Aviv University, Tel Aviv 69978, ISRAEL. E-mail: pragma99 at post.tau.ac.il Date of notification: March 1, 1999. PROGRAM COMMITTEE: Mira Ariel, Hava Bat-Zeev Shyldkrot, Jonathan Berg, Anat Biletzki, Shoshana Blum-Kulka, Marcelo Dascal, Nomi Erteschik-Shir, Tamar Katriel, Ruth Manor, George-Elia Sarfati, Elda Weizman, Yael Ziv. ============================================================ PRAGMA99 REGISTRATION FORM Please send the following information, accompanied by a cheque payable to Tel-Aviv University in the amount of US$75 if paid before November 1, 1998, otherwise US$100, to Pragma99 Faculty of Humanities Tel Aviv University Tel Aviv 69978, ISRAEL Dr./Mr./Mrs./Ms./ Name:__________________________ Address:_______________________________________________ University/Organization:___________________________________ Email:__________________________ Fax:____________________(Home)_______________(Office) Telephone:____________________(Home)_____________(Office) Signature:_____________________ Date:________________ Those wishing to pay by credit card should provide the following information: Type of Credit Card: Mastercard/Visa/American Express Name as it appears on Credit Card: Sum of Payment: US$__________ Card No.________________________ Expiration Date: __________________ Date:_______________ Signature: _____________________ ********** Those wishing to present a paper should follow the instructions above. Hotel information will be provided after registration. The International Association for Dialogue Analysis is co-sponsoring a part of our conference, which will be devoted to "Negotiation as a Dialogic Concept." For further information, contact Edda Weigand (e-mail: weigand at uni-muenster.de). ============================================================ [Forms can also be returned by fax to 972-3-6407839, or by e-mail to pragma99 at post.tau.ac.il . ] From spikeg at OWLNET.RICE.EDU Fri Jun 12 11:24:57 1998 From: spikeg at OWLNET.RICE.EDU (Spike L Gildea) Date: Fri, 12 Jun 1998 06:24:57 -0500 Subject: Symposium on Ideophones (fwd) Message-ID: ---------- Forwarded message ---------- From: "Dr. phil. Christa Kilian-Hatz, M.A." Dear Colleagues, the Institute for African Linguistics, University of Cologne, is organizing a SYMPOSIUM on IDEOPHONES here in Cologne on January 24-28, 1999. While the central theme of the symposium is ideophones in African languages, we are hoping to extend our investigation well beyond the geographical confines of that continent. Anyone interested in the topic is encouraged to contact: F.K.
Erhard Voeltz or Christa Kilian-Hatz at this e-mail address/or Institut fuer Afrikanistik Universitaet zu Koeln D-50923 Koeln Cologne/Germany From BFORD at BLACKWELLPUBLISHERS.CO.UK Tue Jun 16 09:30:14 1998 From: BFORD at BLACKWELLPUBLISHERS.CO.UK (Ford Beck) Date: Tue, 16 Jun 1998 10:30:14 +0100 Subject: SYNTAX - a journal of theoretical, experimental and interdisciplinary research Message-ID: > I am delighted to announce the launch of SYNTAX, a new international > peer-reviewed journal focusing on all areas of syntactic research. > > SYNTAX aims to unite related but often disjointedly represented areas > of syntactic inquiry together in one publication. Within a single > forum SYNTAX will accommodate both the explosive growth and increased > specialization in the field of syntax. > > Free Sample Copy Available > To order your free sample copy, please reply to > egilling at blackwellpublishers.co.uk with 'SYNTAX-SAMPLE COPY REQUEST 7' > in the subject line and your full name and postal address in the > message. > > Special Offer - Electronic Access is included in your institutional > subscription to the print edition. > > Editors: > Samuel D. Epstein, University of Michigan, USA > Suzanne Flynn, Massachusetts Institute of Technology, USA > > Topics covered by SYNTAX include: > * Syntactic theory > * Syntactic interface with morphology, phonology and semantics > * First language acquisition > * Second language acquisition > * Bilingualism > * Learnability theory > * Computational linguistics > * Neurolinguistics > * Philosophy of mind > * Pragmatics, discourse models > * Parsing > * Parsing-syntactic interface > > Articles in the First Volume include: > > * V-Positions in West Flemish, Liliane Haegeman > * Movement and Chains, Norbert Hornstein > * Logical Problem of Language Change, Partha Niyogi & Robert Berwick > * The Typology of WH-Movement: WH Questions in Malay, Peter Cole & > Gaby Hermon > > Other 1998 Contributors: *Noam Chomsky *Janet Fodor *Mark Hale > *Richard Kayne *Barbara Lust *Reiko Mazuka *James McCloskey & Sandy > Chung *Jean-Yves Pollock *Edward Stabler *Hoskuldur Thrainsson & > Jonathan Bobaljik *Esther Torrego > > For further information on SYNTAX, visit: > http://www.blackwellpublishers.co.uk/asp/journal.asp?ref=13680005 > > ISSN: 1368-0005, Published in April, August and December, Volume 1, > 1998 > > Institutional Subscription Rates: £79.00 (UK/Rest of World), $125.00 > (N America) > Personal Subscription Rates: £29.00 (UK/Rest of World), $45.00 (N > America) > > Best wishes, > > Emily Gillingham > Senior Marketing Controller > Blackwell Publishers Ltd > 108 Cowley Road > Oxford, OX4 1JF > UK > > Tel: +44 (0) 1865 382265 > Fax: +44 (0) 1865 381265 > Email: egilling at blackwellpublishers.co.uk > http://www.blackwellpublishers.co.uk > > From spikeg at OWLNET.RICE.EDU Wed Jun 17 11:24:16 1998 From: spikeg at OWLNET.RICE.EDU (Spike L Gildea) Date: Wed, 17 Jun 1998 06:24:16 -0500 Subject: LSA Bulletin available on the web (fwd) Message-ID: Date: Tue, 16 Jun 1998 15:04:31 +0100 From: LSA The June 1998 LSA Bulletin (No. 160) is now available at the Society's website (http://www.lsadc.org). From shelli at BABEL.LING.NWU.EDU Wed Jun 17 20:14:19 1998 From: shelli at BABEL.LING.NWU.EDU (Michele Feist) Date: Wed, 17 Jun 1998 15:14:19 -0500 Subject: animacy and spatial terms Message-ID: Greetings, I'm a graduate student at Northwestern University working on the semantics of spatial prepositions.
I've started to investigate the kinds of factors about a scene that influence the use of these terms, and I've become interested in the effect of animacy. My question for the list: Have you (or anyone you know of) investigated the effect of either the animacy of the object located (Talmy's Figure) or the animacy of the reference object (Talmy's Ground) on the use of spatial relational terms? Thanks in advance for any help, Michele Michele Feist Department of Linguistics Northwestern University 2016 Sheridan Road Evanston, IL 60208 m-feist at nwu.edu From lakoff at COGSCI.BERKELEY.EDU Thu Jun 18 04:40:21 1998 From: lakoff at COGSCI.BERKELEY.EDU (George Lakoff) Date: Wed, 17 Jun 1998 21:40:21 -0700 Subject: animacy and spatial terms Message-ID: There's a huge literature on spatial terms in cognitive linguistics. What kinds of examples were you thinking of? George >Greetings, > >I'm a graduate student at Northwestern University working on the semantics >of spatial prepositions. I've started to investigate the kinds of factors >about a scene that influence the use of these terms, and I've become >interested in the effect of animacy. > >My question for the list: Have you (or anyone you know of) investigated >the effect of either the animacy of the object located (Talmy's Figure) or >the animacy of the reference object (Talmy's Ground) on the use of >spatial relational terms? > >Thanks in advance for any help, > >Michele > >Michele Feist >Department of Linguistics >Northwestern University >2016 Sheridan Road >Evanston, IL 60208 > >m-feist at nwu.edu From delancey at OREGON.UOREGON.EDU Thu Jun 18 15:43:15 1998 From: delancey at OREGON.UOREGON.EDU (Scott Delancey) Date: Thu, 18 Jun 1998 08:43:15 -0700 Subject: animacy and spatial terms Message-ID: > My question for the list: Have you (or anyone you know of) investigated > the effect of either the animacy of the object located (Talmy's Figure) > or the animacy of the reference object (Talmy's Ground) on the use of > spatial relational terms? Not sure exactly what you're looking for here. There are languages (the ones I can think of offhand are Tibeto-Burman, but there are others) where you can't use the same locative construction with a human as with an inanimate Ground. (I'm not sure offhand what happens with non-human animates). I.e. you can say 'The child ran to the door' with a simple locative construction, but to say 'The child ran to his father' you need a relator noun construction, you can't just put the locative marker directly on 'father'. Is this any use to you? Scott DeLancey Department of Linguistics University of Oregon Eugene, OR 97403, USA delancey at darkwing.uoregon.edu http://www.uoregon.edu/~delancey/prohp.html From lakoff at COGSCI.BERKELEY.EDU Mon Jun 22 08:05:27 1998 From: lakoff at COGSCI.BERKELEY.EDU (George Lakoff) Date: Mon, 22 Jun 1998 01:05:27 -0700 Subject: Invariance Principle Message-ID: >Iraide Ibarretxe wrote: >> >> I am working on Lakoff's Invariance Principle, and I would be very >> grateful if you could point me to bibliography dealing with it. > >What comes to mind: > >Brugman, Claudia. 1990. What is the Invariance Hypothesis? > _Cognitive Linguistics_ 1(2): 257-266. > >Lakoff, George. 1990. The Invariance Hypothesis: Is Abstract > Reasoning Based on Image-Schemas? _Cognitive Linguistics_ > 1(1): 39-74. > >_______. 1993. The Contemporary Theory of Metaphor. In > _Metaphor and Thought_, Andrew Ortony (ed.), 202-251. > Cambridge: Cambridge University Press. > >Turner, Mark.
1990. Aspects of the Invariance Hypothesis. > _Cognitive Linguistics_ 1(2): 247-255. > >_______. 1990. Poetry: Metaphor and the Conceptual Context > of Invention. _Poetics Today_ 11(3): 463-482. > >_______. 1991. _Reading Minds: The Study of English in the Age > of Cognitive Science_. Princeton, NJ: Princeton University > Press. > >________. 1992. Language is a Virus. _Poetics Today_ 13(4): > 725-736. > >________. 1996. _The Literary Mind_. Oxford: Oxford University > Press. > >Joe Hilferty >__________________________________________________________ >Home page: http://lingua.fil.ub.es/~hilferty/homepage.html Hi, I recommend taking a look at Srini Narayanan's dissertation, which can be found at the Neural Theory Of Language website, www.icsi.berkeley.edu/NTL. The course website that Jerry Feldman and I used for teaching an introductory course on NTL is www.icsi.berkeley.edu/mbrodsky/Cogsci110. In NTL, the Invariance Principle is unnecessary. Its effects arise automatically from the Neural Theory applied to metaphor. In Narayanan's model, metaphorical mappings are neural connections allowing source domain inferences (activations in his model) to activate target domain structure. The two parts of the Invariance Principle follow automatically. Inferential structure is "preserved" since metaphorical inferences are done in the source domain. And what about apparent "overrides", where inherent target structure takes precedence when there is a possibility of contradiction? In the neural theory, contradiction is neural inhibition. Activations resulting from source domain inferences that are neurally inhibited by target structures simply never occur because they are neurally impossible. No extra "principle" is necessary. What makes the neural theory interesting is that it really uses embodiment. The basic claim is that abstract reason IS sensory-motor inference--done in the sensory-motor centers--with resulting activations projected to other parts of the brain by neural connections. Narayanan demonstrates in his dissertation that aspectual reasoning--reasoning about event structure--has the same inferential structure as motor control. This is about as dramatic a confirmation of the theory as neural modeling studies allow. Johnson and I will be discussing Narayanan's research in our book Philosophy In The Flesh, which will appear from Basic Books in early November. By the way, the original work leading up to the Invariance Principle was in Chapter 4 of More Than Cool Reason (Lakoff and Turner), 1989. Best wishes, George Lakoff From lakoff at COGSCI.BERKELEY.EDU Mon Jun 22 08:36:15 1998 From: lakoff at COGSCI.BERKELEY.EDU (George Lakoff) Date: Mon, 22 Jun 1998 01:36:15 -0700 Subject: animacy and spatial terms Message-ID: >On Wed, 17 Jun 1998, George Lakoff wrote: > >> There's a huge literature on spatial terms in cognitive linguistics. What >> kinds of examples were you thinking of? >> >> George > >I was wondering whether anyone had looked at possible effects on the use >of any spatial term due to either exchanging an inanimate Ground for an >animate Ground, or an inanimate Figure for an animate Figure, in a scene >that is otherwise the same. For example, if I had a picture of a hand >holding a coin, and exchanged a dish for the hand (at the same curvature), >without making any other changes, would that influence the use of "in"?
> >More generally, I'd like to know if you know of any work that examines the >animacy of either the Figure or the Ground as a factor that speakers use >to assign a spatial term to a scene, including references about languages >that don't allow the same term to be used with both animate and inanimate >Figures/Grounds (Scott DeLancey mentioned Tibeto-Burman as one that >doesn't allow the same term to be used for human and inanimate Grounds). > >Thanks for any help, > >Michele > >Michele Feist >Department of Linguistics >Northwestern University >2016 Sheridan Road >Evanston, IL 60208 > >m-feist at nwu.edu Dear Michelle, Is this the kind of thing you have in mind: In English, you can say "I went to the President" but not "*I'm at the President." Compare "I went to the White House" and "I'm at the White House." Suppose Harry is lying on the ground. You can say My jacket is across Harry if it is on top of him stretched across him, but not if it is on the other side of him. Compare with My glass is across the table *My glass is across Harry The latter is out even if Harry is stretched out on the floor and my glass is on the other side of him. By the way, Postal's old live/nonlive distinction does not occur in this case: *My glass is across the corpse. is no good, even if the corse is spread out in front of you on the floor and your glass is on the other side. English is a good language to look at for such phenomena. Incidentally, metaphor matters here. "He's always at his mother" works only in the metaphorical sense of "at." Compare with "He's always at his mother's" and "He always goes to his mother." Other interesting phenomena: I came across Harry. cannot be used if you ran into him on the street. It works fine if Harry is treated as an object: I came across Harry unconscious in a dumpster. Here Postal's live/nonlive distinction does matter. If you came across Harry's dead body, you can't describe it as I came across Harry in the morgue. English is fun. Enjoy! George From SAMG at PUCC.PRINCETON.EDU Mon Jun 22 12:05:24 1998 From: SAMG at PUCC.PRINCETON.EDU (Sam Glucksberg) Date: Mon, 22 Jun 1998 08:05:24 EDT Subject: No subject Message-ID: suspend From shelli at BABEL.LING.NWU.EDU Mon Jun 22 19:54:47 1998 From: shelli at BABEL.LING.NWU.EDU (Michele Feist) Date: Mon, 22 Jun 1998 14:54:47 -0500 Subject: animacy and spatial terms In-Reply-To: Message-ID: Dear George, These are certainly the kinds of examples I'm interested in; thanks for sending them on! I'm not familiar with Postal's live/non-live distinction; could you send me a reference so I could read more about it? Also, do you know of any references for work that's looked at the kinds of contrasts you mentioned? Thanks again for all your help! Michele On Mon, 22 Jun 1998, George Lakoff wrote: > Dear Michelle, > > Is this the kind of thing you have in mind: > > In English, you can say "I went to the President" but not "*I'm at the > President." > Compare "I went to the White House" and "I'm at the White House." > > Suppose Harry is lying on the ground. You can say > My jacket is across Harry > if it is on top of him stretched across him, but not if it is on the other > side of him. Compare with > My glass is across the table > *My glass is across Harry > The latter is out even if Harry is stretched out on the floor and my glass > is on the other side of him. > By the way, Postal's old live/nonlive distinction does not occur in > this case: > *My glass is across the corpse. 
> is no good, even if the corse is spread out in front of you on the floor > and your glass is on the other side. > > English is a good language to look at for such phenomena. > > Incidentally, metaphor matters here. "He's always at his mother" works only > in the metaphorical sense of "at." > Compare with "He's always at his mother's" and "He always goes to his mother." > > Other interesting phenomena: > I came across Harry. > cannot be used if you ran into him on the street. It works fine if Harry is > treated as an object: > I came across Harry unconscious in a dumpster. > Here Postal's live/nonlive distinction does matter. If you came across > Harry's dead body, you can't describe it as > I came across Harry in the morgue. > > English is fun. > > > Enjoy! > > George > > > Michele Feist Department of Linguistics Northwestern University 2016 Sheridan Road Evanston, IL 60208 m-feist at nwu.edu From eitkonen at UTU.FI Tue Jun 23 21:56:34 1998 From: eitkonen at UTU.FI (Esa) Date: Tue, 23 Jun 1998 12:56:34 -0900 Subject: 'totally novel sentence' Message-ID: Dear colleagues I am about to finish a paper on which I have been working some time and, just to make sure that I have got everything right, I have to ask you the following question. In the not-so-distant past it was widely claimed that speaker-hearers constantly encounter and understand (utterances of) 'totally novel sentences'. Did anyone of you understand what was meant by this curious statement? I certainly did not. If I know the language in question, every sentence that I hear has some obvious similarities (or analogies) to sentences that I have heard before. I never hear sentences exemplifying totally novel sentence structures or containing totally novel grammatical morphemes, and I seldom hear sentences containing totally novel lexical units. So, to repeat, did anyone of you ever understand what was meant by this often-repeated slogan? Esa Itkonen From john at RESEARCH.HAIFA.AC.IL Tue Jun 23 11:05:41 1998 From: john at RESEARCH.HAIFA.AC.IL (John Myhill) Date: Tue, 23 Jun 1998 14:05:41 +0300 Subject: novel sentences Message-ID: I think what they meant was sentences which were not 100% identical to sentences which had been used before (or perhaps which the speaker had heard before). For example, if I write `The purple and green fox gave 157 turnips to the piebald cow', this is a sentence which in all likelihood has never been used before in the history of the world, so it is on this understanding a `totally novel sentence,' even if whatever words and structures are used are analogous to similar structures in common sentences. I'm pretty sure that this was the idea. I think they were arguing against an extremely naive position (probably a strawman) which might be attributed to super-behaviorism saying something like people only reproduce sentences which are exact copies of sentences which they've already heard. That's the only sense I could make of such statements-- the whole discussion seems silly and pointless and the kind of argument you would only need to make to an 8-year-old. John Myhill From delancey at OREGON.UOREGON.EDU Tue Jun 23 15:26:57 1998 From: delancey at OREGON.UOREGON.EDU (Scott Delancey) Date: Tue, 23 Jun 1998 08:26:57 -0700 Subject: 'totally novel sentence' In-Reply-To: Message-ID: I always assumed (and, I think, was probably explicitly taught somewhere along the line) that all it meant was a sentence which is not EXACTLY identical to any sentence that you've heard before. 
(Even that, of course, is not as true as a lot of people like to assume). Scott DeLancey From dryer at ACSU.BUFFALO.EDU Tue Jun 23 17:38:40 1998 From: dryer at ACSU.BUFFALO.EDU (Matthew S Dryer) Date: Tue, 23 Jun 1998 13:38:40 -0400 Subject: novel sentences In-Reply-To: Message-ID: With respect to John Myhill's "the whole discussion seems silly and pointless and the kind of argument you would only need to make to an 8-year-old", it is my experience that most lay people, such as undergraduates in intro linguistics classes, are sufficiently naive about language that nearly nothing is obvious to them and that this kind of observation, with elaboration by example, is in fact quite instructive. The fact that so many of the sentences we hear are novel does seem to me an important and fundamental property of language. Furthermore, it represents a fundamental difference between sentences and words (at least for most languages). From that perspective, I see this as hardly "silly and pointless". It is true that this property of language was used in arguments against behaviourism - and not just a strawman position, but versions of behaviourism that were once dominant in psychology - but I would have thought that this was something sufficiently basic to be something that is common ground for nearly all linguists, formalist, functionalist, cognitivist or whatever. I suspect, as Scott Delancey suggests, that the novelty of sentences one hears is probably exaggerated, but the basic point still holds. Matthew Dryer From slobin at COGSCI.BERKELEY.EDU Tue Jun 23 18:11:16 1998 From: slobin at COGSCI.BERKELEY.EDU (Dan I. SLOBIN) Date: Tue, 23 Jun 1998 11:11:16 -0700 Subject: 'totally novel sentence' In-Reply-To: Message-ID: This was never a puzzle to me; rather, I found it a great insight when I first encountered it as an undergraduate in the late fifties, and have been passing it on to my students ever since. Simply put: you almost never encounter the same sentence twice--i.e., syntactic construction plus lexical items. (What is novel is each unique combination of words and morphosyntactic patterns.) The consequence is that, although you can learn your vocabulary by rote, you can't learn your sentences by rote. Ergo, language acquisition must be conceived of as a "generative" or "constructivist" accomplishment. The achievements of recent decades of linguistics do not dim this insight, but merely make it more important: to be sure, the child learns syntactic patterns, constructions, rich lexical entries, and so forth--but each actual utterance, produced or received, calls upon general processing skills (to use another old term, "competence"). -Dan Slobin Psychology, UC Berkeley On Tue, 23 Jun 1998, Esa wrote: > Dear colleagues > I am about to finish a paper on which I have been working some > time and, just to make sure that I have got everything right, I have to > ask you the following question. In the not-so-distant past it was widely > claimed that speaker-hearers constantly encounter and understand > (utterances of) 'totally novel sentences'. Did anyone of you understand > what was meant by this curious statement? I certainly did not. If I know > the language in question, every sentence that I hear has some obvious > similarities (or analogies) to sentences that I have heard before. I > never hear sentences exemplifying totally novel sentence structures or > containing totally novel grammatical morphemes, and I seldom hear > sentences containing totally novel lexical units.
So, to repeat, did > anyone of you ever understand what was meant by this often-repeated slogan? > > Esa Itkonen > From jrubba at POLYMAIL.CPUNIX.CALPOLY.EDU Tue Jun 23 20:09:34 1998 From: jrubba at POLYMAIL.CPUNIX.CALPOLY.EDU (Johanna Rubba) Date: Tue, 23 Jun 1998 13:09:34 -0700 Subject: 'totally novel sentence' Message-ID: I always understood 'totally novel sentence' to mean a totally new _combination_ of previously existing units -- the fundamental insight behind this being that grammar provides us with underspecified patterns into which existing forms can be plugged, enabling the creativity which is one element distinguishing human language from animal communication systems. That this creativity is theoretically infinite was, I have been trained to believe, an important insight of generative linguistics. I find this nontrivial. But this great insight does have a downside -- I believe it helped drive a wedge between semantics and syntax by giving the impression that the patterns were the core of the grammar, and you could plug any old thing into them and get a 'grammatical' sentence, such as the turnip doozie contributed by John Myhill. 'Grammatical, but semantically deviant' has been with us ever since. Why exclude the potential for making sense of a particular concatenation of lexical items from the purview of 'grammaticality'? ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Johanna Rubba Assistant Professor, Linguistics ~ English Department, California Polytechnic State University ~ San Luis Obispo, CA 93407 ~ Tel. (805)-756-2184 Fax: (805)-756-6374 ~ E-mail: jrubba at polymail.calpoly.edu ~ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ From amnfn at WELL.COM Tue Jun 23 20:58:36 1998 From: amnfn at WELL.COM (A. Katz) Date: Tue, 23 Jun 1998 13:58:36 -0700 Subject: No subject Message-ID: Dan Slobin observed as follows: >the child learns syntactic patterns, constructions, rich >lexical entries, and so forth--but each actual utterance, produced or >received, calls upon general processing skills (to use another old term, >"competence"). I think functionalists and generativists agree that general processing skills are required in order to produce or comprehend language. (The big question is whether any of the actual rules are pre-wired, or whether they are learned by exposure using generalized cognitive mechanisms of pattern recognition that apply to many other activities besides language processing.) What's rather interesting is that the observation that hitherto unuttered sentences are comprehensible and have an agreed meaning is true not just of natural language. It works for computer languages as well. You can write a new program in any computer language using a combination of commands that was never before juxtaposed in quite that way, and provided you have not made a syntax error, the program will run and do exactly what you told it to do. (Which may or may not be what you intended.) Likewise, the `objective' meaning of an utterance in a given speech community can be demonstrated by the phenomenon of hearers consistently interpreting a statement one way when the speaker intended it to mean something else. "That may be what you meant," people have been known to pronounce, "but it's certainly not what you said." The cognitive mechanism behind comprehension -- whether it be generative or not -- is not implicated by the fact of relatively original utterances having predetermined meanings.
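A minimal Python sketch of the point Katz makes about computer languages (illustrative only, not part of the original posting; it borrows Myhill's turnip example): a combination of perfectly ordinary constructs that has in all likelihood never been typed before, yet whose behavior is fixed in advance by the rules of the language.

    # Ordinary constructs in an (almost certainly) unprecedented combination:
    # the interpreter has never "heard" this program, yet its meaning is fixed.
    piebald_cows = ["Bess", "Clover", "Petunia"]
    turnips_given = {cow: 157 + i for i, cow in enumerate(piebald_cows)}

    for cow, turnips in sorted(turnips_given.items()):
        print(f"The purple and green fox gave {turnips} turnips to the piebald cow {cow}.")

The program is "novel" in exactly the sense under discussion, and any speaker of Python will assign it the same interpretation, whether or not it matches what the programmer intended.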
--Aya Katz \\//\\//\\//\\//\\//\\//\\//\\//\\//\\//\\//\\//\\//\\//\\//\\//\\//\\//\\// Dr. Aya Katz, 3918 Oak, Brookfield, Illinois 60513-2019 (708) 387-7596 //\\//\\//\\//\\//\\//\\//\\//\\//\\//\\//\\//\\//\\//\\//\\//\\//\\//\\//\\ From ellen at CENTRAL.CIS.UPENN.EDU Tue Jun 23 21:53:38 1998 From: ellen at CENTRAL.CIS.UPENN.EDU (Ellen F. Prince) Date: Tue, 23 Jun 1998 17:53:38 EDT Subject: No subject In-Reply-To: Your message of "Tue, 23 Jun 1998 13:58:36 PDT." <199806232058.NAA26691@well.com> Message-ID: "A. Katz" wrote: >I think functionalists and generativists agree that general processing skills >are required in order to produce or comprehend language. (The big question is >whether any of the actual rules are pre-wired, or they are learned by exposure >using generalized cognitive mechanisms of pattern recognition that apply to >many other acitivities besides language processing.) > >What's rather interesting is that the observation that hitherto unuttered >sentences are comprehensible and have an agreed meaning is true of not just >natural language. It works for computer languages as well. > >You can write a new program in any computer language using a combination of >commands that was never before juxtaposed in quite that way, and provided you >have not made a syntax error, the program will run and do exactly what you >told it to do. (Which may or may not be what you intended.) > >Likewise, the `objective' meaning of an utterance in a given speech community >can be demonstrated by the phenomenon of hearers consistently interpreting a >statement one way when the speaker intended it to mean something else. "That >may be what you meant," people have been known to pronounce, "but it's >certainly not what you said." > >The cognitive mechanism behind comprehension -- whether it be generative or >not -- is not implicated by the fact of relatively original utterances having >predetermined meanings. But what makes the notion of novel sentences interesting for natural language is precisely the issue of acquisition, which you alluded to in your first paragraph but dropped. The issue of acquisition of computer languages is rather different... From Jon.Aske at SALEM.MASS.EDU Tue Jun 23 23:23:36 1998 From: Jon.Aske at SALEM.MASS.EDU (Jon Aske) Date: Tue, 23 Jun 1998 19:23:36 -0400 Subject: 'totally novel sentence' In-Reply-To: Message-ID: I think nobody would disagree with the claim that we all learned in Linguistics 101 that the number of possible sentences in a language is infinite. On the other hand, there is no doubt that a lot of the sentences that are uttered by speakers are not novel and that collocations of all types abound, as Fillmore and many others have rightly emphasized. I wish somebody would finally listen to Bolinger's suggestion (see below) and actually work out some plausible estimates for the degree of novelty that is actually found in the speech, as opposed to the theoretical upper limit. Now, that would be an interesting psycholinguistic finding, I would think. (Perhaps someone has already speculated about this or done actual empirical research on this that I am not aware of). Anyway, here is the quote that I am so fond of: "The distinctive trait of generative grammar is its aim to be an ACTIVE portrait of grammatical processes. It departs from traditional grammar, which consists chiefly in the MAPPING of constructions. 
How much actual invention, on this model, really occurs in speech we shall know only when we have the means to discover how much originality there is in utterance. At present we have no way of telling the extent to which a sentence like I went home is the result of invention, and the extent to which it is a result of repetition, countless speakers before us having already said it and transmitted it to us in toto. Is grammar something where speakers 'produce' (i.e. originate) constructions, or where they 'reach for' them, from a preestablished inventory, when the occasion presents itself? If the latter, then the MATCHING technique of traditional grammar is the better picture--from this point of view, constructions are not produced one from another or from a stock of abstract components, but filed side by side, and their interrelationships are not derivative but mnemonic--it is easier to reach for something that has been stored in an appropriate place. Probably grammar is both of these things, but meanwhile the transformationist cannot afford to slight the spectrum of utterances which are first of all the raw material of his generalizations and last of all the test of their accuracy." (Bolinger 1961:381, Syntactic blends and other matters, A sort of review of R.B. Lees's Multiply Ambiguous Adjectival Constructions in English.) ------------------------------------------------------------- Jon Aske -- Jon.Aske at salem.mass.edu -- aske at earthlink.net Department of Foreign Languages, Salem State College Salem, Massachusetts 01970 - http://home.earthlink.net/~aske/ ------------------------------------------------------------- Too many pieces of music finish too long after the end. --Igor Stravinsky (1882-1971). From Jon.Aske at SALEM.MASS.EDU Tue Jun 23 23:52:10 1998 From: Jon.Aske at SALEM.MASS.EDU (Jon Aske) Date: Tue, 23 Jun 1998 19:52:10 -0400 Subject: 'totally novel sentence' In-Reply-To: Message-ID: After I sent the previous message with Bolinger's quote, I found another beautiful quote from his absolutely wonderful article "Meaning and Memory" (1979), which makes a similar point (cf. the second paragraph). I just can't resist the temptation to forward it to you. I am sorry if you don't find it as inspiring as I do :-) (By the way, "Syntactic blends and other matters" was in Language 37:3:366-381) "For a long time now linguists have been reveling in Theory with a capital T. If you assume that language is a system où tout se tient--where everything hangs together--then it follows that a connecting principle is at work, and the linguist's job is to construct a one-piece model to account for everything. It can be a piece with many parts and subparts, but everything has to mesh. That has been the overriding aim for the past fifteen years. But more and more evidence is turning up that this view of language cannot be maintained without excluding altogether too much of what language is supposed to be about. In place of a monolithic homogeneity, we are finding homogeneity within heterogeneity. Language may be an edifice where everything hangs together, but it has more patching and gluing about it than architectonics. Not every monad carries a microcosm of the universe inside; a brick can crumble here and a termite can nibble there without setting off tremors from cellar to attic. I want to suggest that language is a structure, but in some ways a jerrybuilt structure.
That it can be described not just as homogeneous and tightly organized, but in certain of its aspects as heterogeneous but tightly organized. Specifically what I want to challenge is the prevailing reductionism--the analysis of syntax and phonology into determinate rules, of words into determinate morphemes, and of meanings into determinate features. I want to take an idiomatic rather than an analytic view and argue that analyzability always goes along with its opposite at whatever level, and that our language does not expect us to build everything starting with lumber, nails, and blueprint. Instead it provides us with an incredibly large number of prefabs, which have the magical property of persisting even when we knock some of them apart and put them together in unpredictable ways." (Bolinger 1979:95-96, Meaning and memory, in Haydu, George G., ed., Experience forms: Their cultural and individual place and function, World anthropology, The Hague: Mouton) ------------------------------------------------------------- Jon Aske -- Jon.Aske at salem.mass.edu -- aske at earthlink.net Department of Foreign Languages, Salem State College Salem, Massachusetts 01970 - http://home.earthlink.net/~aske/ ------------------------------------------------------------- If this is coffee, please bring some tea; but if this is tea, please bring me some coffee. --Abraham Lincoln. From amnfn at WELL.COM Wed Jun 24 02:44:14 1998 From: amnfn at WELL.COM (A. Katz) Date: Tue, 23 Jun 1998 19:44:14 -0700 Subject: No subject Message-ID: "Ellen F. Prince" wrote: >But what makes the notion of novel sentences interesting for natural >language is precisely the issue of acquisition, which you alluded to >in your first paragraph but dropped. The issue of acquisition of >computer languages is rather different... My point is that there are no implications for the manner in which language acquisition is achieved from the phenomenon of novel sentences. This is because the phenomenon itself is not limited to human language, but is an inherent part of any sort of `language'. The possibility of `novel sentences' is built into any abstract code that carries information, regardless of how that code came into being, or what devices are used in order to interpret it. DNA code, which presumably came into being by a long and tortuous evolutionary path, also admits of novel sequences. To the extent that we are able to decode DNA sequences, we would be able to predict a resultant mutation from a change in the sequence, before any such change and mutation occurred. That is, novel sentences do have objective meaning. And yet we can hardly suppose that the chemicals on which DNA code operates have been fitted with `a language acquisition device.' (`Translation' of DNA code into the appropriate amino acids takes place through chemical reactions that are not unique to DNA interpretation.) The code is self-executing and there is no centralized control over the process by anything resembling a language acquisition device. On the other hand, the computer example that I mentioned earlier has precisely such a device (the CPU), since it was designed for a particular purpose and manufactured to specification. The basic rules of information coding are universal and transcend the physical mechanisms that make use of the code.
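A small illustration of the self-executing-code point (a sketch only, not part of the original posting; the table below is a hand-picked fragment of the standard genetic code, just enough to run the toy example): a triplet sequence that may never have occurred before still maps onto a determinate product, with no acquisition device anywhere in the picture.

    # A tiny fragment of the standard genetic code (codon -> amino acid).
    # Only enough entries are included to translate the toy sequence below.
    CODON_TABLE = {
        "ATG": "Met", "TTT": "Phe", "AAA": "Lys",
        "GGG": "Gly", "CCC": "Pro", "TGG": "Trp",
        "TAA": "STOP", "TAG": "STOP", "TGA": "STOP",
    }

    def translate(dna):
        """Read the string three bases at a time and look each codon up."""
        protein = []
        for i in range(0, len(dna) - 2, 3):
            residue = CODON_TABLE.get(dna[i:i + 3], "???")
            if residue == "STOP":
                break
            protein.append(residue)
        return protein

    # A "novel sentence" in the code: this exact string need never have
    # existed for its interpretation to be fully determined by the table.
    print(translate("ATGTTTGGGCCCTGGTAA"))   # ['Met', 'Phe', 'Gly', 'Pro', 'Trp']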
Assuming that we know nothing else about human beings besides the rules of the languages they speak, (including the fact that novel sentences have more or less predetermined meanings within a language community), this would tell us nothing about whether the sentences are interpreted -- or the language is acquired -- through a structure in the brain that is dedicated to language acquisition, or through a more flexible priming mechanism involving pattern recognition after repeated exposure. --Aya Katz \\//\\//\\//\\//\\//\\//\\//\\//\\//\\//\\//\\//\\//\\//\\//\\//\\//\\//\\// Dr. Aya Katz, 3918 Oak, Brookfield, Illinois 60513-2019 (708) 387-7596 //\\//\\//\\//\\//\\//\\//\\//\\//\\//\\//\\//\\//\\//\\//\\//\\//\\//\\//\\ From john at RESEARCH.HAIFA.AC.IL Wed Jun 24 05:59:59 1998 From: john at RESEARCH.HAIFA.AC.IL (John Myhill) Date: Wed, 24 Jun 1998 08:59:59 +0300 Subject: novel sentences Message-ID: My mention of `8-year-old' may perhaps have been hyperbola, but in fact I cannot remember ever having heard an undergraduate student, however naive, express a view which suggested that they were under the impression that human beings could only repeat sentences which they had literally heard or read, word for word. I would certainly agree that lay people are naive about many things regarding language, but in my experience this does not seem to be one of them. However, this may be a consequence of the time I was educated (grad school late 70's and early 80's) and the time I have been teaching. Some of the statements I have read about language in the 1950's sound so bizarre from a contemporary standpoint that I can only assume that given certain assumptions specific to certain cultures and times, what is intuitively obvious or trivial at one point in time might be a deep insight at another. John Myhill >With respect to John Myhill's "the whole discussion seems silly and >pointless and the kind of argument you would only need to make to an >8-year-old", it is my experience that most lay people, such as >undergraduates in intro linguistics classes, are sufficiently naive about >language that nearly nothing is obvious to them and that this kind of >observation,with elaboration by example, is in fact quite instructive. >The fact that so many of the sentences we hear are novel does seem to me >an important and fundamental property of language. Furthermore, it >represents a fundamental difference between sentences and words (at least >for most languages). From that perspective, I see this as hardly "silly >and pointless". > >It is true that this property of language was used in arguments against >behaviourism - and not just a strawman position, but versions of >behaviourism that were once dominant in psychology - but I would have >thought that this was something sufficiently basic to be something that is >common ground for nearly all linguists, formalist, functionalist, >cognitivist or whatever. > >I suspect, as Scott Delancey suggests, that the novelty of sentences one >hears is probably exaggerated, but the basic point still holds. > >Matthew Dryer From eitkonen at UTU.FI Wed Jun 24 20:45:53 1998 From: eitkonen at UTU.FI (Esa) Date: Wed, 24 Jun 1998 11:45:53 -0900 Subject: novelty Message-ID: Dear colleagues Thank you for the responses (some of which I got privately); they were all useful. The consensus seems to be that in this particular context these two expressions are synonymous: 'A is completely novel with respect to B' = 'A is not exactly identical with B'. 
In any other context, of course, they are not synonymous, so they should not be treated as synonymous here either. From this notion of 'complete novelty' it follows, for instance, that a grammar as simple as the one consisting of rules 'S -> Sa' and 'S -> a' generates an infinite number of completely novel sentences. Therefore my sympathy is with Fred Householder, who - in a review in 1969 - commented upon the claim of complete novelty as follows: "[This is] a claim so obviously false that [those who make it] must mean something else, though I cannot for the life of me figure out what." Incidentally, the fact that most sentences that we hear are new, i.e. not repetitions of what we have heard before, was duly noticed by linguists like Hermann Paul and Bloomfield. It was also a common-place in the grammatical traditions of India and Arabia. Esa Itkonen From dick at LINGUISTICS.UCL.AC.UK Wed Jun 24 09:06:47 1998 From: dick at LINGUISTICS.UCL.AC.UK (Dick Hudson) Date: Wed, 24 Jun 1998 10:06:47 +0100 Subject: novel sentences Message-ID: Matthew Dryer and Dan Slobin both think it is worth pointing out that every sentence is novel. Does anyone have any evidence that anyone ever thought otherwise? The only evidence I can think of is Noam Chomsky's odd definition of a language as a set of sentences. Do lay people think that when they take a course in (say) German they're going to learn a list of sentences? I'd have thought that lay people were much more likely to think of a language as a set of words. Maybe I'm focussing on the wrong question. Are we really asking whether lay people are aware that there are rules controlling the ways in which words are combined? If so that's a very different question, because it's possible to define all the possible combinations of words without mentioning sentences at all. (That's how it's done in dependency grammars.) ============================================================================== Richard (=Dick) Hudson Department of Phonetics and Linguistics, University College London, Gower Street, London WC1E 6BT work phone: +171 419 3152; work fax: +171 383 4108 email: dick at ling.ucl.ac.uk web-sites: home page = http://www.phon.ucl.ac.uk/home/dick/home.htm unpublished papers available by ftp = ....uk/home/dick/papers.htm From macw at CMU.EDU Wed Jun 24 17:24:26 1998 From: macw at CMU.EDU (Brian MacWhinney) Date: Wed, 24 Jun 1998 11:24:26 -0600 Subject: novelty, Heraclitus, and children Message-ID: I think that Aya Katz is correct in noting that "The possibility of `novel sentences' is built into any abstract code that carries information, regardless of how that code came into being, or what devices are used in order to interpret it." The problem is even more general than that. As Heraclitus reminds us, "You could not step twice into the same rivers, for other waters are ever flowing on to you." Does this lead us to attribute creativity and novelty to rivers? Surely something is missing in any such analysis. I would agree with Householder that the claim of novelty is so obvious that it is surprising that it is even made. After all, even a finite state machine can produce many "novel" strings and no one would jump up and down about a language instinct on the basis of knowing that language is a finite state automaton. At the same time, Ellen Prince and Dan Slobin are correct in pointing out the importance of something close to this issue for child language acquisition.
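Itkonen's toy grammar is easy to make concrete. The following is only a minimal sketch in Python (the function name and the cut-off at ten strings are illustrative choices, not anything from the posts above): it enumerates strings licensed by the two rules 'S -> Sa' and 'S -> a' and checks that each one is "completely novel" in the stated sense of not being exactly identical to any string produced before -- the same cheap novelty a finite state machine delivers.

    def generate(n):
        # Enumerate the first n strings of the grammar S -> Sa, S -> a:
        # applying the recursive rule k times before terminating yields 'a' * (k + 1).
        return ["a" * (k + 1) for k in range(n)]

    seen = set()
    for s in generate(10):
        assert s not in seen   # 'novel': not identical to anything generated before
        seen.add(s)
        print(s)

Every output passes the novelty test, which is exactly why the test by itself tells us so little about grammar or acquisition.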
I would argue that the type of novelty that is really interesting for child language researchers is something very different from what has been mentioned in this discussion so far. It is the fact that children often produce sound strings, words, and utterances that diverge in revealing ways from the norms of the adult community. What is crucial is not novelty, but the failure to fully internalize or obey social norms. These errors demonstrate that the child is making his or her independent contribution to the language learning process. In this sense, "errors" such as "I poured my glass empty" reveal creative aspects of language learning. For each creative error there are probably eight creative productions that just happen to match the social norms. Bolinger's view of language as built up from disparate pieces helps us out here. The child has one piece called "go" and another piece called "past tense = ed" and just doesn't remember that the socially sanctioned way of saying this is "went". It is this creativity that demonstrates a lack of full internalization of the social norms and which also gives us some of our best evidence regarding how the child learns and uses language. --Brian MacWhinney From dhargreave at FACULTYPO.CSUCHICO.EDU Wed Jun 24 17:31:00 1998 From: dhargreave at FACULTYPO.CSUCHICO.EDU (David Hargreaves) Date: Wed, 24 Jun 1998 10:31:00 -0700 Subject: novel sentences/folk psychology Message-ID: It seems to me useful to distinguish two issues here. First, there are the empirical questions regarding the biological/cognitive status of generative rules vs constructions and so on. Second, there are the questions regarding the "folk psychology" of language rules. Let me address the latter: In teaching thousands of undergraduates over the last six years at CSU, Chico, a small state school in California that requires an Intro to Linguistics and Intro to Second Language Acquisition for all (k-12) teaching credential candidates, I have learned never to underestimate the depth t o which a naive behaviorism is part of the folk psychology of not only undergraduates, but also faculty in Education, Social Sciences, and the Humanities. The naive, but deeply held, intuitions that parents "teach" their children language, that language structure is "conditioned" by culture, that grammatical systematicity and material/intellectual culture are coextensive, and that language learning is mostly "memorizing phrases" are still widely shared across the social and intellectual landscapes. In this sense, giving undergraduates a close look at the evidence and arguments about "novel sentences," especially L1 and L2 errors, the "poverty of the stimulus," "colorless green ideas," and other mainstays of the linguistics introduction still function as powerfully persuasive tools for real intellectual and attitudinal growth by the socially important population of K-12 teachers, not to mention faculty in Ed, Soc.Sci, and Humanities. And even though funknetters have had much to say about the shortcomings of Pinker's "Language Instinct" and the Human Language Video series, both have worked for me in opening the eyes of many an undergraduate as well as faculty. 
The "Standard Social Science Model" to which Pinker refers is alive and well in various incarnations of postmodernism, cultural studies, multiculturalism, and other common themes in contemporary undergraduate programs in the US, especially teacher training programs, in which language as an information processing and embodied cognitive system plays second fiddle to the focus on socioeconomic and cultural determination of language form and content. I've had some success with a bait and switch routine: the old arguments still work to undermine the naive behaviorism which then sets the stage for bringing in cognitive/functionalist questions. It seems to work, at least some of the time. -david hargreaves From kaitire at UNICAMP.BR Wed Jun 24 23:17:44 1998 From: kaitire at UNICAMP.BR (Andres Pablo Salanova) Date: Wed, 24 Jun 1998 20:17:44 -0300 Subject: New mailing list for South American indigenous languages Message-ID: ** Instrucciones sobre como obtener una descripcion de la lista en castellano y portugues al final de este mensaje. ** Instrucoes para obter uma descricao da lista em espanhol e portugues, no final desta mensagem. Our apologies if you receive this message more than once. ==== LING-AMERINDIA ==== DISCUSSION LIST FOR SOUTH AMERICAN INDIGENOUS LANGUAGES The LING-AMERINDIA list was proposed at the Indigenous langages workgroup at the XIII National Congress of the Brazilian Association of Graduate Programs in linguistics. It is intended for open discussion of problems in the description and analysis of syntax, morphology, phonology and lexicon of South American indigenous languages. Postings should preferably be in Spanish or Portuguese. All postings will be archived and will shortly be accessible through anonymous FTP and WWW. To subscribe, send an e-mail message with SUBSCRIBE in the first line of the body to LING-AMERINDIA-request at unicamp.br. Postings should be sent to LING-AMERINDIA at unicamp.br. ---------------------------------------------------------------------- \__ LING-AMERINDIA / --, Informaciones: envie un mensaje con HELP LING-AMERINDIA | ] en la primera linea a la direccion "Comandos" dada abajo. \ | Informacoes: enviem uma mensagem com HELP LING-AMERINDIA | / na primeira linha ao endereco "Comandos" dado embaixo. \ | \| . Comandos: LING-AMERINDIA-request at unicamp.br . Supervisor: LING-AMERINDIA-owner at unicamp.br ---------------------------------------------------------------------- From amnfn at WELL.COM Thu Jun 25 16:13:51 1998 From: amnfn at WELL.COM (A. Katz) Date: Thu, 25 Jun 1998 09:13:51 -0700 Subject: No subject Message-ID: Jon Aske on Tue, 23 Jun 1998 19:23:36 -0400 wrote: >I think nobody would disagree with the claim that we all learned in >Linguistics 101 that the number of possible sentences in a language >is infinite. I didn't respond immediately, because I wanted to see if anyone else would disagree or have any comments on this point. The number of possible sentences in a language is infinite, only if we assume the following: a) that there is no upper bound on the length of a possible sentence and b) that there isn't a rate of historical change associated with repeated use that would eventually lead to the evolution of a form of the language that is not intelligible to the speakers of the earlier sentences. The second issue is very complicated and would require too lengthy a discussion. But the first issue is pretty simple. 
Assuming that we are not dealing with a mathematical construct, but are talking about language as it is used by human beings, there are physical limitations to our processing abilities in real time. Give a speaker too long and complicated a sentence, and he will not be able to understand it. While the exact limit may vary from individual to individual, I think that we could establish a factual upper bound that would hold true for the species as a whole. Writing allows for longer sentences, because it permits us more time in which to process and requires less of our storage capacity. But even in writing, there is an upper bound past which no one -- not even a well-educated German :-> -- is able to retain in short term memory the variables at the start of a sentence in order to properly appreciate their logical effect on input toward the end of the sentence. So long as there is an upper bound to the length of a possible sentence, then the number of possible sentences in a language is not infinite. (It may be very large, allowing for an immense number of novel sentences to be uttered in one lifetime, but -- even given an immortal speaker -- generating an infinite number of sentences in an unchanging language would eventually lead to repetition.) I'm pretty sure that I am not the first to have thought of this. Can anybody provide me with citations to existing texts in which this argument is made? I would be most grateful. --Aya Katz \\//\\//\\//\\//\\//\\//\\//\\//\\//\\//\\//\\//\\//\\//\\//\\//\\//\\//\\// Dr. Aya Katz, 3918 Oak, Brookfield, Illinois 60513-2019 (708) 387-7596 //\\//\\//\\//\\//\\//\\//\\//\\//\\//\\//\\//\\//\\//\\//\\//\\//\\//\\//\\ From r.woolley at ZAZ.SERVICOM.ES Thu Jun 25 18:10:40 1998 From: r.woolley at ZAZ.SERVICOM.ES (Reuben Woolley) Date: Thu, 25 Jun 1998 20:10:40 +0200 Subject: Novelty Message-ID: A. Katz wrote: > > Jon Aske on Tue, 23 Jun 1998 19:23:36 -0400 > wrote: > > >I think nobody would disagree with the claim that we all learned in > >Linguistics 101 that the number of possible sentences in a language > >is infinite. > > I didn't respond immediately, because I wanted to see if anyone else > would disagree or have any comments on this point. > > The number of possible sentences in a language is infinite, only if > we assume the following: > > a) that there is no upper bound on the length of a possible > sentence > > and > > b) that there isn't a rate of historical change associated with > repeated use that would eventually lead to the evolution of a form of > the language that is not intelligible to the speakers of the earlier > sentences. > Even if we fix an upper bound to the length of a possible sentence based on comprehensibility and varying from one individual to another, I would still suggest that the number of possible sentences in a *living* language is infinite. The discussion so far only seems to be concerned with grammatical generation and has not taken lexis into account at all. What certainly is elementary knowledge is that new words are introduced continually and old words are given new meanings. Therefore, to have a limited upper bound to the number of possible sentences would mean that, as well as trying to measure the limit of intelligiblity, we would have to fix the language at some moment in time. I can't see that there is any interest in doing that. Reuben Woolley c/ Almagro, 5 50004 Zaragoza Spain From meira at RUF.RICE.EDU Fri Jun 26 06:29:13 1998 From: meira at RUF.RICE.EDU (Sergio Meira S.C.O.) 
Date: Fri, 26 Jun 1998 01:29:13 -0500 Subject: your mail In-Reply-To: <199806251613.JAA12049@well.com> Message-ID: Aya Katz wrote: > >I think nobody would disagree with the claim that we all learned in > >Linguistics 101 that the number of possible sentences in a language > >is infinite. > > I didn't respond immediately, because I wanted to see if anyone else > would disagree or have any comments on this point. > > The number of possible sentences in a language is infinite, only if > we assume the following: > > a) that there is no upper bound on the length of a possible > sentence > > and > > b) that there isn't a rate of historical change associated with > repeated use that would eventually lead to the evolution of a form of > the language that is not intelligible to the speakers of the earlier > sentences. Someone once made the following comparison: saying the number of possible sentences in a language is infinite (which I also interpret as implying the possibility of sentences of infinite length) is like saying that a baseball or volleyball match could last forever. And, in both cases, it is true that they actually don't-- neither do sentences of infinite length occur (just imagine the philosophical/pratical problems involved!-- wouldn't fit in this universe, etc.), nor endless baseball/volleyball matches. Yet there is a difference between volleyball (which I, being from Brazil, know better than this arcane game called baseball) and soccer or basketball, where there is a real time limit that must be respected. Sentences are never infinite-- but they don't seem to be bounded either. You can never point to a certain length and say, that's the boundary-- shorter than that is OK, longer than that is impossible. I take this to be the 'grain of truth' behind the entire infiniteness-of-language discussion in formalist circles (i.e. how real-world contingencies force all sentences to end, but as a performance phenomenon rather than as a competence one). When functionalists say, 'sentences aren't infinite!', formalists think they mean that volleyball is like basketball. And when formalists say, 'sentences are theoretically infinite', functionalists think they mean real-life volleyball could go on forever. Is it possible that both sides are exaggerating something? Just a thought... Sergio Meira meira at ruf.rice.edu From jhudson at CUP.CAM.AC.UK Fri Jun 26 09:55:08 1998 From: jhudson at CUP.CAM.AC.UK (Jean Hudson) Date: Fri, 26 Jun 1998 10:55:08 +0100 Subject: novel sentences Message-ID: At 09:13 25/06/98 -0700, Aya Katz wrote: >The number of possible sentences in a language is infinite, only if >we assume the following: > > a) that there is no upper bound on the length of a possible > sentence > > and > > b) that there isn't a rate of historical change associated with >repeated use that would eventually lead to the evolution of a form of >the language that is not intelligible to the speakers of the earlier >sentences. [...} >So long as there is an upper bound to the length of a possible >sentence, then the number of possible sentences in a language is not >infinite. (It may be very large, allowing for an immense number of >novel sentences to be uttered in one lifetime, but -- even given an >immortal speaker -- generating an infinite number of sentences in an >unchanging language would eventually lead to repetition.) 
I agree that the first issue is relatively simple, but either we ARE dealing with a mathematical construct, in which case it's fair to hypothesize an upper limit to the number of possible sentences immortal speakers of an unchanging language could produce. Or we're talking about language as it is used - in changing ways - by mere mortals. Maybe there's sth I've misinterpreted here, but Aya seems to be claiming the latter while arguing that productivity is finite 'given an immortal speaker and an unchanging language'. I'd say the dichotomy is non-existent: it rests upon differences in the focus of interest of the interlocuters in the debate. Logicians and theorists might argue that the number of possible sentences in a language is finite; functional, applied, and descriptive linguists might argue that it is infinite. Both are right, of course. They are using the same make of camera with different lenses. This kind of discussion tends to be irritating. The second issue is much more interesting. 'Repetition' is too often confused with 'lack of originality', but it is only through repetition that language change and revitalization can come about (cf the literature on grammaticalization and, in particular, Haiman 1994 on ritualization). So, there is novelty in the production of sentences never before uttered and, phoenix-like, there is novelty and language renewal in the frequent repetition of sentences (or syntagms). Surely this is evidence in support of the 'infinite' in language production? Jean Hudson ---------------------- Jean Hudson Research Editor Cambridge University Press The Edinburgh Building Cambridge CB2 2RU email: jhudson at cup.cam.ac.uk phone: +44-1223-325123 fax: +44-1223-325984 (http://www.cup.cam.ac.uk/) mail address: Cambridge University Press Publishing Division The Edinburgh Building Shaftesbury Road Cambridge CB2 2RU UK From ph1u+ at ANDREW.CMU.EDU Fri Jun 26 11:35:55 1998 From: ph1u+ at ANDREW.CMU.EDU (Paul J Hopper) Date: Fri, 26 Jun 1998 07:35:55 -0400 Subject: novel sentences In-Reply-To: Message-ID: A footnote to Jean Hudson's remarks on novelty in language: Jean didn't mention her own important recent book 'Perspectives on Fixedness: Applied and Theoretical' Lund University Press, 1998. - Paul From nc206 at HERMES.CAM.AC.UK Fri Jun 26 16:36:55 1998 From: nc206 at HERMES.CAM.AC.UK (N. Chipere) Date: Fri, 26 Jun 1998 17:36:55 +0100 Subject: Novelty Message-ID: The issue of linguistic novelty is a key theme in my current research and I would like to share my thoughts on the issue as well as some of my experimental findings. I hope I will be forgiven for the somewhat long message, but I need to get some feedback. On the face of it, the statement that language users can understand novel sentences appears obvious and redundant. However, the statement serves an important function of constraining theories about the nature of linguistic knowledge and ultimately, about the nature of the human mind. The basic argument (Fodor & Pylyshyn, 1988) is as follows: If knowledge of language is considered to be a list of sentences, then there is no way to account for the ability of native speakers to produce and understand sentences they have never heard before. On the other hand, this ability can be explained if knowledge of language is seen as an infinitely generative set of grammatical rules. And if it is accepted that native users of a language possess generative grammars, then certain important constraints on theories of cognitive architecture must be observed. 
Without going into the details, observing such constraints leads to the hypothesis that the mind has the general architecture of a digital computer. So when it is said that native speakers of a language can understand novel sentences, a deeper statement is being made, it seems to me, about the nature of linguistic knowledge and about the nature of the human mind. However, it doesn't follow from the fact that native speakers can understand novel sentences that knowledge of language takes the form of a generative grammar. Every day, human beings do things they have never done before, but this ability does not lead to the conclusion that their actions are the product of generative rule systems. It's quite reasonable to suppose that the ability to deal with novel situations depends on previous experience and that novel situations will become more difficult to deal with the more they stray from the range of an individual's experience. This line of thinking forms the basis of an experiment which I carried out to test the connection between novelty and generativity. According to the line of thinking outlined in Fodor & Pylyshyn (1988), all the sentences of a language belong to a generated set, and they should be equally comprehensible to native speakers. That is to say, since what a native speaker knows about his or her language is a set of rules capable of interpreting and producing any sentence in the language, a native speaker should be able to understand all possible sentences in his or her language equally well, provided that performance factors are taken into account. On the other hand, if linguistic knowledge, like other kinds of knowledge, depends on experience, then native speakers of a language should find familiar sentence types easier to understand than unfamiliar ones. I compared the ability of three groups of subjects to understand grammatically unusual sentences under conditions in which memory load was eliminated. Group 1 consisted of graduate native speakers of English, Group 2 consisted of graduate non-native speakers of English and Group 3 consisted of non-graduate native speakers of English. The subjects were asked to answer comprehension questions about sentences with highly unusual syntactic structures, such as: 1. The doctor knows that the fact that taking good care of himself is essential surprises Peter. example question: What does the doctor know? 2. The bank manager will be difficult to get the convict to give a loan to. example question: Who will find it difficult to do something? 3. The lady who Peter saw after overhearing the servant proposing to dismiss had lunch in a cafe. example question: Who might be dismissed? (These sentences may strike many as ungrammatical, but in fact they are simply unfamiliar combinations of familiar sentence types (adapted from Dabrowska, 1997)). I obtained both comprehension and reading time data from the experiment, but I will mention only the comprehension data. The key results were that the non-native graduates obtained the highest scores, followed by the native graduates, with the lowest scores coming from the native non-graduates. The native non-graduates were also most affected by plausibility and often ignored syntactic constraints whereas the non-native graduates were least affected by plausibility and showed the greatest mastery of syntax. The non-native graduates, by the way, had learned English largely through formal instruction.
Most of them were also speakers of East European languages, which, I have been told is one possible explanation for their facility with complex syntax. However, all groups performed equally well on control sentences, which were formed out of familar sentence types. These results give empirical support to the logical argument that there is no necessary connection between novelty and generativity. It is quite possible to generate a very large number of novel sentences out of a small number of familiar sentence types. The fact that most native speakers can understand such sentences does not entail that they can readily understand all possible sentences in the language. In other words, being a native speaker of a language and being able to understand novel sentences in that language does not entail possession of a generative grammar of the language. More details about the experimental design, materials, procedure and results are documented in an experimental report. The report also reviews previous psycholinguistic findings which indicate that native speakers of English often lack full grammatical productivity, and that education appears to be an important variable in grammatical skill. The main argument developed in the report is that linguistic ability shares many of the key traits of skilled performance and can be accounted for without recourse to an infinitely generative set of grammatical rules. I am keen to have feedback on the report, which I can make available to anyone who is interested. Ngoni Chipere Darwin College University of Cambridge From amnfn at WELL.COM Fri Jun 26 17:39:32 1998 From: amnfn at WELL.COM (A. Katz) Date: Fri, 26 Jun 1998 10:39:32 -0700 Subject: No subject Message-ID: I'm glad to see that my last posting generated some interesting responses. By and large, I do think that one point may have been misunderstood. I was not arguing that the length of a sentence in natural language is limited by the mortality of its speakers or the amount of time in the day that they can devote to speaking. While that point is valid, I would tend to agree with Jean Hudson that it is not particularly interesting. My focus was on the length limitation imposed on a sentence by the processing abilities of speakers and hearers. Sergio Meira observes: "Sentences are never infinite-- but they don't seem to be bounded either." My point is that if we are talking about natural language processing in real time -- they ARE bounded. (Although it is unclear precisely where the boundary lies, and it would require a considerable quantity of experimental data to pinpoint.) Using Sergio's analogy of baseball and volleyball matches, I would point out that the unit more comparable to a sentence in these events is probably the amount of time the ball can be kept in the air without touching the ground, not the amount of time that the game can go on. While our lives are finite and the amount of time we spend talking is limited, that is not the real limitation on the length of a sentence. The President of the United States could give a two hour speech, and people would take time out from their schedules to listen to it, but what is the likelihood that the whole speech would consist of a single sentence? Or take the Mark Twain joke about a multi-volume opus in German in which all the verbs appear in the last volume. That would never happen in real life, (not even in German), because such a work, while it might be logically meaningful, could not be processed by any human being. 
We may occasionally listen to a long speech or read a very long book, but in order for it to be comprehensible, it must be broken down into smaller, self-contained parts. That is a limitation placed on language by human cognition. The reason this upper bound is very interesting to me is that it has profound implications for language change and grammaticalization. I would be most grateful to any of you who could provide citations to works on this subject or who might have experimental data that would shed light on the issue. --Aya Katz \\//\\//\\//\\//\\//\\//\\//\\//\\//\\//\\//\\//\\//\\//\\//\\//\\//\\//\\// Dr. Aya Katz, 3918 Oak, Brookfield, Illinois 60513-2019 (708) 387-7596 //\\//\\//\\//\\//\\//\\//\\//\\//\\//\\//\\//\\//\\//\\//\\//\\//\\//\\//\\ From meira at RUF.RICE.EDU Fri Jun 26 19:09:02 1998 From: meira at RUF.RICE.EDU (Sergio Meira S.C.O.) Date: Fri, 26 Jun 1998 14:09:02 -0500 Subject: your mail In-Reply-To: <199806261739.KAA22051@well.com> Message-ID: > > My focus was on the length limitation imposed on a sentence by the processing > abilities of speakers and hearers. Sergio Meira observes: "Sentences are > never infinite-- but they don't seem to be bounded either." My point is that > if we are talking about natural language processing in real time -- they ARE > bounded. (Although it is unclear precisely where the boundary lies, and it > would require a considerable quantity of experimental data to pinpoint.) How different is it to say that sentence length is bounded by the processing abilities of speakers and hearers from the old claim that this is a 'performance phenomenon'? And, to me, it seems fascinating that there should be a limit, but that it is hard to pinpoint... I expect there to be individual differences there, too. And the differences might also depend on sentence structure: certain kinds of clauses will probably be more difficult to lengthen infinitely (though they should be, theoretically, just as 'lengthenable' as the others). Sergio Meira meira at ruf.rice.edu From dsoliver at EARTHLINK.NET Fri Jun 26 19:50:53 1998 From: dsoliver at EARTHLINK.NET (Douglas S. Oliver) Date: Fri, 26 Jun 1998 12:50:53 -0700 Subject: Novel Sentences Message-ID: Dear Funknetters, I have been following this discussion with some interest but not with the concerns expressed so far (unless I have missed something). Over the last two + decades, many functionalists have devoted a good amount of time questioning the wisdom of giving so much weight to the sentence as a primary unit of analysis. Wally Chafe and others have done a good job of demonstrating the value of using language segments based on tone/prosody, which do have very real constraints. I would like to ask why this focus on the sentence has been renewed. This has brought us back to philosophical discussions that often only work to remove us from functional concerns. I would like to ask how we might bring cognitive, biological, social, cultural, etc. concerns into the discussion, using real discourse examples. Please don't misunderstand me, I find the discussion so far fun and generally interesting; I just wonder where the functional perspective has gone. -- Douglas -- |~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~| |Douglas S. Oliver | |Department of Anthropology | |University of California | |Riverside, CA 92521 | |e-mail: dsoliver at earthlink.net | | or: douglaso at citrus.ucr.edu | ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
From alphonce at CS.UBC.CA Fri Jun 26 20:59:55 1998 From: alphonce at CS.UBC.CA (Carl Alphonce) Date: Fri, 26 Jun 1998 13:59:55 -0700 Subject: Novel Sentences Message-ID: I have been lurking in the background, reading the discussion with interest. I am afraid I don't quite see what the real issue here is. If I recall correctly, the discussion started with a question regarding the claim that people can produce novel sentences. Since then it seems to have moved on to a discussion of whether the set of sentences in a language is finite or infinite. I don't think it can be disputed that the number of sentences uttered by a person, or uttered by all persons in all of history is finite. The number of speakers is finite. Each speaker has a finite lifespan. No sentence is of infinite length. And so on. At any point in time it is (theoretically) possible to construct the (finite) set of all natural language sentences ever uttered by anyone in any language. The question of the nonfiniteness of such a set comes into play when you want to construct a theory to account for those sentences. By assuming that the set is in principle infinite, you can use recursive rules to capture significant generalizations about the structure of the sentences in the set. If you insist that the set is finite, then recursive rules cannot be permitted (without ad-hoc constraints on their applicability). Because people do not in fact use or comprehend sentences of arbitrary length, it is not enough to have only a theory which permits arbitrarily large sentences. But this is what the competence/performance distinction is all about. A competence theory is about our idealized capacity for language, while a performance theory can be viewed as constraints of a non-grammatical nature which limit what we are able to produce and comprehend. These are abstractions that we use when investigating language. These abstractions happen to be very useful, but they are not themselves fact. Other abstractions yield theories with different properties, empirical coverage, and predictive power. Perhaps I am just being obtuse, but what is the real issue that people are discussing? Carl -- Carl Alphonce / email: alphonce at cs.ubc.ca Department of Computer Science / phone: (604) 822-8572 University of British Columbia / FAX: (604) 822-5485 Vancouver, BC, CANADA, V6T 1Z4 / http://www.cs.ubc.ca/spider/alphonce/home From amnfn at WELL.COM Fri Jun 26 23:15:37 1998 From: amnfn at WELL.COM (A. Katz) Date: Fri, 26 Jun 1998 16:15:37 -0700 Subject: No subject Message-ID: 1. ON COMPETENCE/PERFORMANCE Sergio Meira and Carl Alphonce are correct in noting that limits on sentence length due to processing difficulty are often ascribed to performance errors. I do not find this a satisfactory solution for the following reason: Generativists make a big fuss over the essentially and uniquely `human' language instinct and capacity. Carl Alphonce echoes this sentiment when he says: >A competence theory is about >our idealized capacity for language, while a performance theory can be >viewed as constraints of a non-grammatical nature which limit what we >are able to produce and comprehend. OUR idealized capacity? But what is described is an abstract construct of language, totally divorced from the limitations of human potential. `Competence theory' when used in this way is about the flexibility built into any abstract code of information, (DNA code, computer code, etc.) regardless of whether humans have any special inborn capacity for decoding it.
It's about the universal rules of information theory, not about a human language instinct. As such, the `competence/performance' dichotomy is a misnomer, and not a trivial one. If I misspeak and accidentally utter a sentence where the verb does not agree with the subject, although on a good day I have no difficulty with that task, then this is a performance error. But if I am unable to comprehend a sentence in my native language due to its complexity, even on the best of days -- and if all humans consistently manifest the same disability -- that's a competence problem, by any normal definition of competence. If we buy into that other, specialized meaning of competence, we give up the question of innateness before we've even begun. 2. WHY DOES IT MATTER HOW LONG A SENTENCE CAN BE? Douglas S. Oliver wrote: "I would like to ask how we might bring cognitive, biological, social, cultural, etc. concerns into the discussion, using real discourse examples." The application of the upper bound on unit length (whether you view a sentence as a logical proposition or use prosodic evidence of sentence boundaries) is important to grammaticalization theory because the drive to maintain optimal length is one of the factors responsible for the reduction and fusion of formerly independent elements into new grammatical patterns. Ultimately this limitation on the human capacity to process shapes acceptable grammatical configurations -- and patterns of grammatical change. Show me any instance of grammaticalization -- and chances are that limits on length had something to do with it. That's why I'm looking for both theoretical and experimental work on the subject of the upper bound on sentence length. --Aya Katz \\//\\//\\//\\//\\//\\//\\//\\//\\//\\//\\//\\//\\//\\//\\//\\//\\//\\//\\// Dr. Aya Katz, 3918 Oak, Brookfield, Illinois 60513-2019 (708) 387-7596 //\\//\\//\\//\\//\\//\\//\\//\\//\\//\\//\\//\\//\\//\\//\\//\\//\\//\\//\\ From nrude at UCINET.COM Sat Jun 27 00:19:55 1998 From: nrude at UCINET.COM (Noel Rude) Date: Fri, 26 Jun 1998 16:19:55 -0800 Subject: Infinity Message-ID: Fellow funknetters, Thought provoking--especially Aya Katz' notion that open-endedness is inherent in any informational system. Perhaps the main distinction that unites us functionalists is the communicative theory of language: language codes and communicates information, complex information. That's why it exists, and that's why it's open-ended. Now when the structuralists focus on the clause or sentence ("the minimal unit of information"), those of us into texts should know that every clause has a unique context, and every text is unique (except, of course, where us old folks get into these "scripts"). The structuralists emphasize an infinity of sentences so they can argue for syntactic structure. We need open-endedness too because language communicates, and therefore we also need structure. One found this jabber "irritating", no doubt because of the nit-picking over infinity. Remember, infinity can never be traversed. There will never be an infinity of time. You can never pile up an infinity of words, clauses, sentences, texts. The point isn't that you can never arrive there. It's that we're headed in that direction. A large amount of human language IS novel--that should be the point.
Noel From nrude at UCINET.COM Sat Jun 27 00:54:52 1998 From: nrude at UCINET.COM (Noel Rude) Date: Fri, 26 Jun 1998 16:54:52 -0800 Subject: Upper Bounds Message-ID: Howdy again, So you're looking for theoretical and experimental work on the upper bound on sentence length. Well, I don't do that kind of stuff, but you might start with things like valence theory which, for example, suggests an upper limit of three arguments for any verb. Then there's stacking up modifiers, and there's subordination. Are there languages with built in structural limits here? If so, does this corelate with cog-sci tests? Suppose you are able to describe pretty well these upper limits. Then I'd like to know why. Would it all be neural capacity? Or might there even be--as you intimate--limits imposed by information theory? Might we sometimes be too worried about the hardware and not worried enough about the software? The limit on valence probably relates to the perception of events, to the notions of volition (agent), consciousness (dative goal), and participants lacking either (patient). It also seems to corelate with three levels of topicality. What I'd like to know is whether all this derives from the limited capacity of this idiosyncratic machine (our brain), or whether it is how the world really is. And information theory--could it be any other way? What really is the relationship between the world, information, and the machine (the brain)? Noel From meira at RUF.RICE.EDU Sat Jun 27 08:26:16 1998 From: meira at RUF.RICE.EDU (Sergio Meira S.C.O.) Date: Sat, 27 Jun 1998 03:26:16 -0500 Subject: Upper Bounds In-Reply-To: <359442C4.559C@ucinet.com> Message-ID: Restrictions on 'event complexity' can bear on sentence length-- I recall from discussions on psychology that there is some limit to the number of independent entities that we can keep active at the same time in our minds (seven is the number I remember), which may imply an upper limit to the number of independent participants that can co-exist in a sentence. So, if we want to talk about more than seven independent participants, it would seem that we'd have to use more than one sentence; we'd be forced to stop a sentence even before its length in sheer number of words became unbearable. (Colin Harrison at Rice probably has the original references, I believe, in case you don't). This is, however, 'event complexity' rather than sheer sentence length. I agree with Aya and Noel that any real coding system is subject to length constraints-- e.g. DNA sequences cannot be infinite. But this reflects, as Noel pointed out, a limit of our universe. The only things that can be truly inifinite exist in the realm of abstractions par excellence -- mathematics. It is true that the series of natural numbers is infinite; but this is a consequence of the fact that Peano's axioms for natural numbers have 'adding one more' as an operation that can be repeated forever. There is nothing in the universe, not even subatomic particles, that you could keep adding to a pile forever. So, Peano's axioms are an abstraction-- mathematicians assume a world in which 'adding one more forever' is thinkable. (Incidentally, Quantum Mechanics with wave-particle dualities and indeterminacies challenges the possibility of 'adding one more' forever at the subatomic level from another viewpoint-- but this is a different story). This shows us what is going on when formalists want to see infinite-length sentences as a theoretical possibility. 
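The 'adding one more forever' remark can be put in runnable form. A minimal sketch (the generator name naturals is a hypothetical label, not anything from the thread): a Peano-style generator of natural numbers has no stopping point in its own definition, and any actual run has to impose a cut-off from outside -- which is just the relationship described above between the idealized system and real-world limits.

    from itertools import islice

    def naturals():
        # Peano-style construction: start from zero and keep applying the successor step.
        # Nothing in the definition itself ever halts the process.
        n = 0
        while True:
            yield n
            n = n + 1   # 'add one more', repeatable without limit in the abstraction

    # Any concrete run must truncate from outside the definition:
    print(list(islice(naturals(), 10)))   # [0, 1, 2, ..., 9]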
It would seem that there are certain aspects of language that are the way they are because the world is the way it is-- i.e. limits set by physics, chemistry, biology, etc., rather than by communication alone. Formalists isolate these aspects of the real world; they want to see 'language standing alone', so they put it in a separate world, where only linguistic factors count. The relationship is not unlike that between mathematics and the real world... You gain the notion of infinity, for whatever theoretical advantages it might buy you, but you have to admit that everything else is less important. Formalists have to say that all of reality, in its entirety, with all its physical, chemical, biological etc. restrictions is 'contingent', 'less important', 'non-linguistic'-- that 'reality' is 'performance'... Isn't that an interesting world... Sergio Meira P.S. I wondered if anyone knows whether real-world limitations for other coding systems also have consequences for their functioning and their structures-- i.e. any consequences of length constraints on DNA sequences for genetics? From amnfn at WELL.COM Sat Jun 27 18:43:20 1998 From: amnfn at WELL.COM (A. Katz) Date: Sat, 27 Jun 1998 11:43:20 -0700 Subject: No subject Message-ID: 1. LIMIT ON PARTICIPANTS Noel Rude wrote: > So you're looking for theoretical and experimental work on the upper >bound on sentence length. Well, I don't do that kind of stuff, but you >might start with things like valence theory which, for example, >suggests an upper limit of three arguments for any verb. Then there's >stacking up modifiers, and there's subordination. Are there languages >with built in structural limits here? If so, does this corelate with >cog-sci tests? I don't normally do that sort of thing myself, which is why I'm asking. I remember from my field work in Pangasinan that we tried to cram as many participants as we could into a sentence, and there was definitely a limit on the number you could get per clause. (1) si Anita impakan tomai mangga ed posa ed ketsara 'Anita fed the mango to the cat with a spoon' Example (1) was a possible sentence using a single clause, but I believe the informant didn't feel as comfortable with it as with (2). (2) si Anita impakan toma i mangga ed posa ya inosaran to i ketsara 'Anita fed the mango to the cat with a spoon' In (2), the `with a spoon' part is in a separate clause: `ya inosaran to i ketsara' meaning roughly `a spoon was used'. It could be that one of the reasons (1) was awkward was that you had to use `ed', an oblique marker, twice. I suppose that having a limited number of separate case or focus marking devices in a language might be one reason for the limit on participants per clause. But the fact that the number of these markers is limited may be directly related to cognitive constraints. I'd be happy to hear from people who have done research on the subject of grammatical limits on clause length and their relation to processing requirements. 2. UNIVERSAL LIMITS ON CODING VS. HUMAN LIMITS NOEL RUDE also wrote: >What I'd like to know is whether all this derives from the limited >capacity of this idiosyncratic machine (our brain), or whether it is >how the world really is. I think there are limits on communication that are universal to all abstract codes of information. I also believe that human cognitive capacity is not anywhere near those limits.
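The earlier claim that an upper bound on sentence length makes the set of possible sentences finite, yet far too large ever to exhaust, can be given a back-of-the-envelope form. The figures below are made-up round numbers (a 50,000-word vocabulary and a 100-word ceiling on sentence length), not estimates drawn from this discussion; only the shape of the result matters.

    # Upper bound on the number of word strings of length at most L over a vocabulary of V items.
    V = 50_000   # assumed vocabulary size
    L = 100      # assumed ceiling on sentence length, in words
    upper_bound = sum(V ** k for k in range(1, L + 1))
    print(len(str(upper_bound)), "digits")   # roughly a 470-digit number: finite, but astronomically large

A finite set of that size behaves, for every practical purpose, like an inexhaustible one, which is why novel sentences keep coming even if the count is bounded.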
It's very important to distinguish the two issues, lest we be tempted to make far flung claims about the nature of our biological language processing apparatus that are actually based on much more general principles. SERGIO MEIRA wrote: >I wondered if anyone knows whether real-world limitations for other >coding systems also have consequences for their functioning and their >structures-- i.e. any consequences of length constraints on DNA sequences >for genetics? I don't know the answer to that, but I suspect that it's `yes'. Interesting aside about novel sequences and DNA code. With the exception of mono-zygotic twins, every human being has a unique karyotype. (I'm talking about genetically normal specimens, not possessing any sort of mutation.) The number of possible `grammatical' combinations of genes that would produce a normal human being is vast -- but finite! That number is so large that, given the limits on population growth placed on human beings by the resources of the earth, chances are the sun would die out before we had a random repetition of a human being's genetic make-up. By the same token, the finite number of available sentences in an unchanging language is no practical bar to occurrence of novel sentences. But I do think that population size has something to do with the rate of change in the structure of a language. Small isolated communities tend to be linguistically conservative. Languages spoken by vast populations are more given to change. Is there a direct relation between the number of sentences uttered in a language and its rate of change? If so, is this related to the upper bound on sentence length, coupled with the desire for originality? Or is it just a matter of the fabric of the language wearing down and transmuting with use? Are there other possible explanations? --Aya Katz \\//\\//\\//\\//\\//\\//\\//\\//\\//\\//\\//\\//\\//\\//\\//\\//\\//\\//\\// Dr. Aya Katz, 3918 Oak, Brookfield, Illinois 60513-2019 (708) 387-7596 //\\//\\//\\//\\//\\//\\//\\//\\//\\//\\//\\//\\//\\//\\//\\//\\//\\//\\//\\ From ebatchel at EMAIL.GC.CUNY.EDU Sun Jun 28 01:47:26 1998 From: ebatchel at EMAIL.GC.CUNY.EDU (Eleanor Olds Batchelder) Date: Sat, 27 Jun 1998 21:47:26 -0400 Subject: Length of Sentences Message-ID: Isn't there a problem with deciding what is and isn't a sentence, even before we discuss their lengths? In writing, sentences are generally denoted by a period (full stop), but in speaking, it is less clear. If an utterance trails off with a "and then..." and then silence, is this the end of a "sentence" or just the end of an utterance? If an entire spoken narrative, lasting perhaps 15 minutes, is linked with "and then," "and uh," "and so," "so then," etc. with each clause (information chunk) linked to the next in a loose way, where are the sentence boundaries? Can we say that the entire story is a single sentence? Am I overlooking something obvious? In what sense is the length of "sentences" a decidable issue (let alone a cognitively relevant one)? Eleanor Olds Batchelder From amnfn at WELL.COM Sun Jun 28 03:40:17 1998 From: amnfn at WELL.COM (A. Katz) Date: Sat, 27 Jun 1998 20:40:17 -0700 Subject: No subject Message-ID: Eleanor Olds Batchelder wrote: >Isn't there a problem with deciding what is and isn't a >sentence, even before we discuss their lengths? In writing, >sentences are generally denoted by a period (full stop), but in >speaking, it is less clear. If an utterance trails off with a >"and then..." 
and then silence, is this the end of a "sentence" >or just the end of an utterance? If an entire spoken narrative, >lasting perhaps 15 minutes, is linked with "and then," "and uh," >"and so," "so then," etc. with each clause (information chunk) >linked to the next in a loose way, where are the sentence >boundaries? Can we say that the entire story is a single >sentence? Language has been around longer than writing, but writing has been around considerably longer than punctuation. And a basic unit (often thought of as a sentence) is a functionally relevant factor in comprehension and processing of ancient texts, as well as modern day spoken utterances. The Old Testament in the original Hebrew is not punctuated. It is broken into verses, but the verses are not necessarily coterminus with sentence boundaries. Sometimes a sentence ends in the middle of a verse. Sometimes a sentence continues into the next verse. If the reality of sentences were only a question of arbitrarily marked punctuation as a formal literary device, then we could dispense with identifying sentence boundaries. But in fact, where the sentence boundary is has implications for comprehension. As a speaker of Hebrew, I instinctively identify where the sentence breaks are. When I read aloud, I pause there. Other prosodic cues are also involved, such as sentence intonation. When I taught Biblical Hebrew, sometimes beginning students who were reading a verse had difficulty identifying the sentence break in the middle, and I had to point it out to them. After I had done so, they were able to understand the verse. Before it was pointed out, they had trouble parsing out the grammatical roles. (Despite the fact that Hebrew is a highly inflected language.) How do native speakers identify sentence breaks in unpunctuated written texts? Through grammatical marking, contextual cues and repeating stylistic patterning. How do we identify sentence breaks in spoken language? Through intonational patterning, pauses, extralinguistic cues --- plus all of the above. Do people ever get confused about where the sentence break might be? Sure. But the confusion merely highlights the cognitive signifcance of the sentence break as an information bearing variable. As in the old standby: "No don't stop" interpreted variously as "No! Don't! Stop!" or "No, don't stop!" Of course, you have a point that merely adding a conjunction to the beginning of every sentence does not create a one sentence text. (The Old Testament could be interpreted that way in many places, but we all know that those aren't real conjunctions. They're discourse markers and temporal inverters.) The real measure of a sentence from a cognitive perspective is not determined on strictly formal grounds. Speakers give ample clues of what their real processing units are by body language and prosody -- and those clues do not always agree with what a formal grammarian might have told us about where the sentence begins and ends. Punctuation in modern writing is an idealization of a cognitive phenomenon. But from a communicative perspective, the sentence as a basic unit is very real. --Aya Katz \\//\\//\\//\\//\\//\\//\\//\\//\\//\\//\\//\\//\\//\\//\\//\\//\\//\\//\\// Dr. Aya Katz, 3918 Oak, Brookfield, Illinois 60513-2019 (708) 387-7596 //\\//\\//\\//\\//\\//\\//\\//\\//\\//\\//\\//\\//\\//\\//\\//\\//\\//\\//\\ From chafe at HUMANITAS.UCSB.EDU Sun Jun 28 23:30:17 1998 From: chafe at HUMANITAS.UCSB.EDU (Wallace Chafe) Date: Sun, 28 Jun 1998 16:30:17 -0700 Subject: Novelty vs. 
Expandability (of What?) Message-ID: I'd like to express some support for Doug Oliver and Eleanor Batchelder. Sorry this has to be long. It's a good idea at the beginning, I think, to separate the issues of (1) novelty, (2) expandability (how long or complex "sentences" or whatever can be), and (3) the question of what kinds of units are involved anyway (should we always be talking about sentences?). So far as novelty is concerned, in the middle of May I heard someone say "Frank Sinatra died." I don't believe I had ever heard that sentence before, and yet I knew what it meant (up to a point). And I suppose that the person who uttered it had never said it before (at least before that day), and yet she was able to produce it without difficulty (or at least without grammatical difficulty). It was novel, but hardly very long or complex. There doesn't seem to be any marvelous new insight here. Why would anyone find anything surprising about this kind of novelty? If I understand what Esa Itkonen was getting at with his original question, it was that the idea expressed as "Frank Sinatra", the idea of dying, and the way those ideas were put together to say "Frank Sinatra died" were all totally familiar to speakers of English. The only thing that was new was their combination. There's one more thing that should be said regarding novelty. There's an important sense in which just about everything people say is novel, simply because every human experience is unique. If I say, for example, "I saw a deer this morning," I may have used exactly that sentence (as a sequence of words etc.) before, but on each occasion the experience communicated was different: a different mental image, a different deer, perhaps a different emotion, probably a different location, and certainly a different time. We have to plug each unique experience into our limited linguistic resources in order to talk, and to some extent in order to think (and, yes, we do that differently in different languages). The point is that the experiences underlying language are always in some respects novel, even though the form of the language may be the same. Such considerations aside, if examples like "Frank Sinatra died" were the whole story, one could think of language as providing a number of patterns and a (very large) number of lexical items that could be inserted into those patterns. It's not that one learns language just by memorizing sentences, although I'm sure that happens too. People don't get enough credit for their huge memories, which certainly extend beyond individual words. But of course one also learns patterns and lexical items and is able to combine them. Although those patterns and lexical items can change and be augmented over time, at any one moment in an individual's life the number of possible combinations is vast, but finite. Plenty of "novel sentences" are easily available without much fancy footwork. What Chomsky came up with in the 1950s was the idea that the patterns could be expanded without limit, and not diachronically but synchronically. At that point there was already a certain gap between theory and observation. While the kind of novelty illustrated by "Frank Sinatra died" is intuitively obvious and can in fact be observed all around us, the kind of novelty provided by infinite recursion is by definition impossible to observe. What do we find when we examine how people actually speak? 
We find for one thing that sentences are something of a problem, if nothing else because prosody and syntax don't always coincide. Ignoring that problem, we do find that sentences, whatever they may be, vary from very short to very long. My own finding (and I've looked at a lot of ordinary speech with this in mind) has been that people insert sentence boundaries whenever they decide (on the fly, often for some passing reason) that some kind of closure has been reached in the flow of ideas. It's an on-line decision and, judging from repetitions of the same content by the same person on different occasions (a very worthwhile kind of data to examine), sentences don't seem to, or need not, reflect units of mental storage. On-line decisions about closure are interesting, but there's more to language structure than that. Sentences are intermediate in length between smaller prosodic phrases (expressing foci of active consciousness) and expressions of larger discourse topics (with material in peripheral consciousness), both of which are subject to interesting cognitive constraints that don't apply to sentences per se. Prosodic phrases ("intonation units") are subject to what I've called the one-new-idea constraint, which keeps them from getting very big. I think it has a much more important effect on the shape of language than George Miller's 7 +/- 2 constraint, as I've tried to show in numerous places. Topics may be short or long, but the interesting thing is that, once a topic has been opened in a conversation, there's an expectation that it will sooner or later be closed, after which another one can begin. Opening a topic is like creating an open parenthesis that demands eventual closure. Topics are what keep language moving. There's a great deal to be said about this, but here I might just point out that the ludic analogy is more relevant to topics than to sentences. The length of a tennis game seems especially apt. Leaving aside the prolongation of a game through repeated deuces, how many times can the ball cross the net before a point is scored? Limits on skill and stamina would seem to keep the number within asymptotic bounds, but any arbitrary limit might in theory always be extended by one. Topics are like that. No topic goes on all day, but it's impossible to assign anything but an arbitrary limit to topic size. Sentences are usually properly contained within a topic, but on rare occasions they may expand to be coextensive. In terms of clauses, that can happen in a trivial way through the use of "and" to link every clause. Prosodically it can be done by postponing a falling pitch until the topic is concluded. I've observed this with 10-year-old boys, when they repeat the currently popular question intonation at the end of every phrase before finally letting their pitch fall when I'm about ready to go home. I found it also with a couple of our "pear stories", where the film was described with what sounded like a shopping list of events that didn't end until the narrative was finished. My general plea is that we distinguish novelty from expandability, and that we move beyond the rather special and sometimes puzzling strings of words that have been called "sentences," as if they were all that language had to offer, to a broader concern for the richness of what happens when people actually speak. --Wally Chafe From amnfn at WELL.COM Mon Jun 29 13:50:24 1998 From: amnfn at WELL.COM (A.
Katz) Date: Mon, 29 Jun 1998 06:50:24 -0700 Subject: No subject Message-ID: I agree with the general purport and spirit of Wally Chafe's remarks, but I'd like to comment on the context of this specific debate. It is undoubtedly true that the sentence is not the be all and end all of language, and there are many other aspects to explore that are perhaps much more interesting. There are even speakers who make very scant use of this particular linguistic unit. But sentences do have communicative reality, and every once in a while it's a good idea to remind ourselves of this rather basic fact. With all the revisionism of recent years, there are actually new linguists coming into the field who may believe that sentences are a totally arbitrary unit devised by formalists to confound us. The sentence is a classic concept, and along with other ancient artifacts, it may not be very fashionable at the moment. I'd like to draw an analogy from poetry. The modernist movement has left behind metrical form and eschews rhyme. Many readers are encouraged to assume that what distinguishes poetry from prose is how the words are arranged on a page -- just as many laymen are led to believe that you know a sentence is over when you get to the period. The fallacy in such a position was brought home to me one day as a child when in the middle of a novel, which was written in prose, I suddenly stumbled onto a poem embedded into a paragraph. There was nothing in the way the thing was typeset or arranged on the page to indicate that it wasn't just another chunk of prose. There were no line breaks, just sentences ending in periods, followed by more sentences ending in periods -- but the thing scanned and rhymed and I was amazed, because it cried out to me: "I'm a poem!" I felt the meter; I could have told you where the line breaks should have gone -- and I suddenly realized that how it looks on the page has nothing to do with whether it's poetry or prose. The overwhelming reality of the sentence as a unit was brought home to me when I started teaching beginning language courses. I saw firsthand how without a basic understanding of the language, students could not recognize sentence boundaries. And without the sentence boundaries, they could not decipher the propositional value of an utterance, even when they recognized all the words. But what happens when people actually speak? When you go out into the field and record what people are really doing, isn't that when the scales fall from your eyes and you realize that sentences are just an illusion? Some speakers slur their speech so badly that even word boundaries are very hard to make out. Some never finish a sentence, but let it trail off, leaving it up to their interlocutor to complete the utterance. There are those with conflicting prosodic and grammatical cues as to where the sentence ends. And yes, some informants when asked to relate a story will give you a laundry list, instead. So what? Whoever said we all had to be equally good at it? Well, maybe Chomsky, with his notions of absolute native speaker competence. But there's no functionalist principle to suggest absolute equality of facility with language. Ngoni Chipere apparently has experimental data to show that native speakers do not necessarily out-perform foreigners in deciphering complex novel sentences. And if language use is related to generalized processing ability, there's no reason to suppose that they should. Fieldworkers know that not every native speaker informant is equally good. 
And the informants can tell you that themselves. They recognize who is a more eloquent speaker among them or a better storyteller. That, I think, is the ultimate measure of the reality of any linguistic unit: not whether every speaker makes use of it, but whether other speakers find it easier to understand those who do. There are normal, healthy, intelligent people in every community who are nevertheless incapable of completing a sentence. They get along fine, because there's a lot more to human communication than propositional value coded on the sentence level. But other speakers invariably find it much easier to understand those who enunciate clearly, produce complete sentences, and use prosodic cues to mark sentence breaks. I fully agree that we don't have to confine our inquiry as linguists to the sentence level and that there are many discourse-related issues that are far more interesting. I think that we should also agree that it's okay to talk about sentences some of the time. They have as much reality as any other unit. --Aya Katz \\//\\//\\//\\//\\//\\//\\//\\//\\//\\//\\//\\//\\//\\//\\//\\//\\//\\//\\// Dr. Aya Katz, 3918 Oak, Brookfield, Illinois 60513-2019 (708) 387-7596 //\\//\\//\\//\\//\\//\\//\\//\\//\\//\\//\\//\\//\\//\\//\\//\\//\\//\\//\\ From cgenetti at HUMANITAS.UCSB.EDU Mon Jun 29 16:51:17 1998 From: cgenetti at HUMANITAS.UCSB.EDU (Carol Genetti) Date: Mon, 29 Jun 1998 09:51:17 -0700 Subject: sentences Message-ID: Dear funknetters, Pardon me for coming in on this discussion late -- with a six-month-old baby and two students about to leave for their first field trip to Nepal, I've been deleting list mail without reading it. BUT, I'm very interested in the notion of sentence and would like to put in my two cents' worth. I work on Indo-Aryan and Tibeto-Burman languages of the Himalayas. These languages are all verb-final, and have the standard features that correlate with verb-final order, especially "clause chaining" or "converb constructions" (whether or not these are really separate is an interesting question). In the languages I have looked at extensively, Dolakha Newar and Nepali, sentence boundaries are quite clear, marked by the presence of a finite verb, after which there may be a number of discourse particles and/or postposed elements. The structure of these sentences may be quite complex, entailing multiple levels of embedding, as well as chaining structures. It is clear that the sentences are unified wholes and that they are significant syntactic units that speakers attend to and manipulate as they produce spoken discourse. I've written two papers (both in press) which examine different issues of sentencehood. (Both papers examine narrative -- conversational data is very interesting, and units are more likely to be left incomplete -- but I still think that the notion of sentence is relevant there as well.) Both papers give a lot of syntactic argumentation for what a sentence is, particularly with reference to embedded quotation. The two papers have different emphases, goals, and languages. In one, co-authored with Keith Slater, we looked at sentences, clause boundaries and intonation. We propose that there are "prosodic sentences", consisting of a series of prosodic units with non-final intonation, and ending in a unit with final intonation (analogous to clause-chaining structures). Prosodic sentences may be internally complex and involve embedding.
Prosodic sentences and syntactic sentences generally co-terminate, but there are a number of other interesting patterns as well. We also use the term "narrative sentence" for what seem to be the units speakers most clearly delineate, but which may not have final marking at either the prosodic or the syntactic level. This paper contains a complete narrative (a folk rendition of the beginning of the Mahabharata) intonationally transcribed, glossed, and extensively annotated. If anyone wants a copy, just let me know! A second paper that concerns the sentence was co-authored with Laura Crain. It looks at issues of preferred argument structure (PAS) in Nepali, and demonstrates that the amount and type of nominal reference in this language is based not on the clause, but on the sentence. We found that speakers prefer to make just one overt mention of each referent per sentence, regardless of the number of times the referent occurs as a verbal argument in that sentence (and regardless of the discourse prominence of the referent). Thus sentences are key units that speakers are aware of and manipulate. This paper backs up the central claims of PAS theory, namely that grammar and discourse patterns are complementary, and shows that the typological facts of Nepali favor the discourse patterns found. This paper is to appear in Du Bois et al. Again, if anyone is interested in obtaining a copy, just let me know. So, regarding the relevance of the sentence, I think language typology is a significant factor. Languages differ considerably in their syntactic patterns, and the relevance of any syntactic unit, in particular the sentence, will vary with the typology. -- Carol Genetti From chafe at HUMANITAS.UCSB.EDU Tue Jun 30 19:33:26 1998 From: chafe at HUMANITAS.UCSB.EDU (Wallace Chafe) Date: Tue, 30 Jun 1998 12:33:26 -0700 Subject: Sentences Message-ID: I'm afraid I introduced a red herring. I don't want to be remembered as a nonbeliever in sentences, and I didn't intend to elicit messages to the effect that "my language does have sentences." I did want to raise a question regarding novelty, and particularly the notion that novelty can be explained only in terms of infinite recursion within sentence structure. Along the way I suggested that there may be other elements in language that have interesting constraints too, and that those constraints may interact with, and perhaps be different in nature from, constraints on sentences. I suspect that this subject is too ramified to be discussed adequately in e-mail messages. --Wally Chafe