From lists at chaoticlanguage.com Tue Jul 1 00:21:48 2008 From: lists at chaoticlanguage.com (Rob Freeman) Date: Tue, 1 Jul 2008 08:21:48 +0800 Subject: Rules vs. lists In-Reply-To: <4868D61D.5060608@sil.org> Message-ID: Any number at all? How do you justify that, David? How are you generalizing your examples to abstract a rule? -Rob On Mon, Jun 30, 2008 at 8:48 PM, David Tuggy wrote: > Any number that language users come up with, in either case. More-general > and less-general rules, and whether or not there is one, or a set of, global > generalizations to be made. > > --David Tuggy > > Rob Freeman wrote: >> >> Dear All, >> >> I was glad to find the recent rule vs. list discussion. I like to see >> fundamental issues debated. >> >> In line with my own particular interests I hope I can offer a new >> perspective. >> >> Can anyone tell me, if the utterances of natural language are thought >> of primarily as a list, how many ways might the elements of that list >> be generalized to abstract one or other rule? >> >> That is to say, how many rules might a list of elements define in >> principle? Also how many partial rules, if the requirement of global >> generalizability is dropped? >> >> -Rob Freeman
For each new example we store, we can get an even greater number of rules/generalizations for the system. (Each new example can be related to multiple others, to get multiple new "rules".) For N examples, we can have >> N rules. I'm not sure if your paper is arguing for this. In the light of it lists seem a very powerful way to specify grammar to me. Not to mention explaining certain idiosyncratic and inconsistent aspects of grammar. In practice we have not used lists in this way. Any idea why not? Beyond that I don't entirely understand the importance of Cognitive Grammar for your analysis. Why is it necessary for generalizations to become entrenched before they can be thought of as being "part of the grammar"? Couldn't any meaningful generalization which might be abstracted from a set of examples already be considered to be "part of the grammar", independently of whether it later becomes entrenched? -Rob On Tue, Jul 1, 2008 at 11:09 AM, David Tuggy wrote: > If language users list/learn a generalization, why should I deny it them? > > As to how I personally do it: > http://www.sil.org/~tuggyd/Scarecrow/SCARECRO.htm is over twenty years old, > but I haven't seen any reason to shift from the basic position it sets > forth. See especially Fig 3, and Fig 5; note in Fig 5 how the V+O=S rule > (which Fig 3 shows to be itself a generalization over generalizations) is a > subcase of at least 6 higher-level generalizations. The same sort of thing > works with syntax or phonology as well. > > (A more recent version, contrasting the English set of rules/generalizations > with comparable Spanish ones, is available at > http://www.um.es/ijes/vol3n2/03-Tuggy.pdf ; there's a Spanish version and > powerpoint available too at www.sil.org/~tuggyd.) > > Do I know that all English speakers have abstracted all these rules? I > don't, in any absolute sense. But if they (any/most/a fortiori all of them) > have, I want them in my grammar. 
> > Note that there is a globally-general rule: Optionally add something to > something else to make a word. But the interesting stuff is the not-totally > general rules (like X+Y=Y "structure with the rightmost element as head"), > and the specific learned forms that prompt them. > > --David T From iadimly at usc.es Tue Jul 1 20:09:22 2008 From: iadimly at usc.es (María Ángeles Gómez) Date: Tue, 1 Jul 2008 22:09:22 +0200 Subject: NEW BOOK: LANGUAGES AND CULTURES IN CONTRAST AND COMPARISON, Pragmatics & Beyond New Series 175 Message-ID: New book: Title: Languages and Cultures in Contrast and Comparison Publication Year: 2008 Publisher: John Benjamins, Pragmatics & Beyond New Series, 175 Book URL: http://www.benjamins.com/cgi-bin/t_bookview.cgi?bookid=P%26bns%20175 Editors: María de los Ángeles Gómez González, J. Lachlan Mackenzie & Elsa M. González Álvarez Hardbound: ISBN: 978 90 272 5419 1 Pages: xxii, 364 pp. Price: EUR 105.00 / USD 158.00 Abstract: This volume explores various hitherto under-researched relationships between languages and their discourse-cultural settings. The first two sections analyze the complex interplay between lexico-grammatical organization and communicative contexts. Part I focuses on structural options in syntax, deepening the analysis of information-packaging strategies. Part II turns to lexical studies, covering such matters as human perception and emotion, the psychological understanding of 'home' and 'abroad', the development of children's emotional life and the relation between lexical choice and sexual orientation. The final chapters consider how new techniques of contrastive linguistics and pragmatics are contributing to the primary field of application for contrastive analysis, language teaching and learning. The book will be of special interest to scholars and students of linguistics, discourse analysis and cultural studies and to those entrusted with teaching European languages and cultures.
The major languages covered are Akan, Dutch, English, Finnish, French, German, Italian, Norwegian, Spanish and Swedish. ******************************************* María de los Ángeles Gómez González Full Professor of English Language and Linguistics Academic Secretary of Department English Department University of Santiago de Compostela Avda. Castelao s/n E-15704 Santiago de Compostela. Spain Tel: +34 981 563100 Ext. 11856 Fax: +34 981 574646 research team website: http://www.usc.es/scimitar/inicio.html. At present we are working on the update of this web page. From hdls at unm.edu Tue Jul 1 22:33:41 2008 From: hdls at unm.edu (High Desert Linguistics Society) Date: Tue, 1 Jul 2008 16:33:41 -0600 Subject: HDLS-8 2nd Call for Papers Message-ID: Hello everyone! Below you will find the second call for papers for the Eighth High Desert Linguistics Society Conference. I have also attached the call for papers in .doc format. Please pass this along to anyone who may be interested. Thank you! _____________________________________________________________________ The Eighth High Desert Linguistics Society Conference (HDLS-8) will be held at the University of New Mexico, Albuquerque, November 6-8, 2008. Keynote speakers Sherman Wilcox (University of New Mexico) Marianne Mithun (University of California, Santa Barbara) Gilles Fauconnier (University of California, San Diego) We invite you to submit proposals for 20-minute talks with 5-minute discussion sessions in any area of linguistics – especially those from a cognitive / functional linguistics perspective. This year we will include a poster session.
Papers and posters in the following areas are particularly welcome: * Evolution of Language, Grammaticization, Metaphor and Metonymy, Typology, Discourse Analysis, Computational Linguistics, Language Change and Variation * Native American Languages, Spanish and Languages of the American Southwest, Language Revitalization and Maintenance * Sociolinguistics, Bilingualism, Signed Languages, First Language Acquisition, Second Language Acquisition, Sociocultural Theory The deadline for submitting abstracts is Friday, August 22nd, 2008. Abstracts should be sent via email, as an attachment, to hdls at unm.edu. Please include the title ''HDLS-8 abstract'' in the subject line. Include the title ''HDLS-8 Poster Session'' in the subject line for abstracts submitted for the poster session. MS-Word format is preferred; RTF and PDF formats are accepted. You may also send hard copies of abstracts (three copies) to the HDLS address listed at the bottom of the page. The e-mail and attached abstract must include the following information: 1. Author's name(s) 2. Author's affiliation(s) 3. Title of the paper or poster 4. E-mail address of the primary author 5. A list of the equipment you will need 6. Whether you will require an official letter of acceptance The abstract should be no more than one page in no smaller than 11-point font. A second page is permitted for references and data. Only two submissions (for presentations) per author will be accepted and we will only consider submissions that conform to the above guidelines. If your abstract has special fonts or characters, please send your abstract as a PDF. Please be advised that shortly after the conference a call for proceedings will be announced. Poster Session - Participants will be given a space approximately 6' by 4' to display their work. Notification of acceptance will be sent out by September 2nd, 2008.
If you have any questions or need further information, please contact us at hdls at unm.edu with ''HDLS-8 Conference'' in the subject line. You may also call Grandon Goertz, 505-277-6764 or Evan Ashworth, 505-228-4751. The HDLS mailing address is: HDLS, Department of Linguistics, MSC03 2130, 1 University of New Mexico, Albuquerque, NM 87131-0001 USA From lists at chaoticlanguage.com Wed Jul 2 02:23:03 2008 From: lists at chaoticlanguage.com (Rob Freeman) Date: Wed, 2 Jul 2008 10:23:03 +0800 Subject: Rules vs. Lists In-Reply-To: <486A440E.8060205@sil.org> Message-ID: Hi David, I agree you can see the now extensive "usage-based" literature as a historical use of "lists" to specify grammar. That is where this thread came from after all. But there is surely something missing. As you say, your own papers go back 20 or more years now. Why is there still argument? And the "rules" folks are right. A description of language which ignores generalizations is clearly incomplete. We don't say only what has been said before. And yet if you can abstract all the important information from lists of examples using rules, why keep the lists? So the argument goes round and round. What I think is missing is an understanding of the power lists can give you... to represent rules. That's why I asked how many rules you could abstract from a given list. While we believe abstracting rules will always be an economy, there will be people who argue lists are unnecessary. And quite rightly, if it were possible to abstract everything about a list of examples using a smaller set of rules, there would be no need to keep the examples. The point I'm trying to make here is that when placed in contrast with rules, what is important about lists might not be that you do not want to generalize them (as rules), but that a list may be a more compact representation for all the generalizations which can be made over it, than the generalizations (rules) are themselves.
You express it perfectly: "...if you have examples A, B, C and D and extract a schema or rule capturing what is common to each pair, you have 6 potential rules, (AB, AC, AD, BC, BD, and CD), so sure, in theory you could have more rules than subcases." It is this power which I am wondering has never been used to specify grammar. You say you wouldn't "expect" to find this true in practice: "I wouldn't expect to find more rules than examples". But has anyone looked into it? It is possible in theory. Has anyone demonstrated it is not the case for real language data? Consider for a minute it might be true. What would that mean for the way we need to model language? I'll leave aside for a moment the other point about the importance of the concept of entrenchment from Cognitive Grammar. I think the raw point about the complexity, power, or number, of rules which can be generalized from a list of examples is the more important for now. I'd like to see arguments against this. -Rob On Tue, Jul 1, 2008 at 10:49 PM, David Tuggy wrote: > Thanks for the kind words, Rob. > > Right—I am not saying the number is totally unconstrained by anything, > though I agree it will be very, very large. What constrains it is whether > people learn it or not. > > Counting these things is actually pretty iffy, because (1) it implies a > discreteness or clear distinction of one from another that doesn't match the > reality, (2) it depends on the level of delicacy with which one examines the > phenomena, and (3) these things undoubtedly vary from one user to another > and even for the same user from time to time. The convention of representing > the generalization in a separate box from the subcase(s) is misleading in > certain ways: any schema (generalization or rule) is immanent to all its > subcases —i.e. all its specifications are also specifications of the > subcases—, so a specific case cannot be activated without activating all the > specifications of all the schemas above it. 
The relationship is as close to > an identity relationship as you can get without full identity. (It is a, if > not the, major meaning of the verb "is" in English: a dog *is* a mammal, > running *is* bipedal locomotion, etc.) > > Langacker (2007: 433) says that "counting the senses of a lexical item is > analogous to counting the peaks in a mountain range: how many there are > depends on how salient they have to be before we count them; they appear > discrete only if we ignore how they grade into one another at lower > altitudes. The uncertainty often experienced in determining which particular > sense an expression instantiates on a given occasion is thus to be expected. > …" > > If you do a topographical map at altitude intervals of one inch you will have > an awful lot of peaks. Perhaps even more rules than the number of examples > they're abstracted from. But normally, no, I wouldn't expect to find more > rules than examples, rather, fewer. It generally takes at least two examples > to clue us (more importantly as users than as linguists, but in either case) > in to the need for a rule, and the supposition that our interlocutors will > realize the need for that rule as well, and establish it (entrench it) in > their minds. Of course, as you point out, if you have examples A, B, C and D > and extract a schema or rule capturing what is common to each pair, you have > 6 potential rules, (AB, AC, AD, BC, BD, and CD), so sure, in theory you > could have more rules than subcases. Add in levels of schemas (rules > capturing what's common to AB-CD, AB-AC, ...) and you can get plenty of > rules. > > You wrote: In the light of [the possibility of more rules than > subcases] lists seem a very powerful way to specify grammar to me. Not to > mention explaining certain idiosyncratic and inconsistent aspects of > grammar. In practice we have not used lists in this way. Any idea why not? > > I'm not sure what you are saying here.
If you're saying that listing > specific cases and ignoring or omitting rules is enough, I disagree. If you're > saying that trying to specify grammar while ignoring specific cases won't > work, I agree strongly. Listing specific cases is very important, as you > say, for explaining idiosyncratic and inconsistent aspects of the grammar > (as well as for other things, I would maintain.) I and many others have in > practice used lists in this way. (Read any of the Artes of Nahuatl or other > indigenous languages of Mexico from the XVI-XVII centuries: they have lots > of lists used in this way.) So I'm confused by what you're saying. > > The reason that generalizations must be entrenched is that (the grammar of) > a language consists of what has been entrenched in (learned by) the minds of > its users. If a linguist thinks of a rule, it has some place in his > cognition, but unless it corresponds to something in the minds of the > language's users, that is a relatively irrelevant fact. Cognitive Grammar > was important in that it affirmed this fact and in other ways provided a > framework in which the analysis was natural. > > --David Tuggy From amnfn at well.com Wed Jul 2 13:18:50 2008 From: amnfn at well.com (A. Katz) Date: Wed, 2 Jul 2008 06:18:50 -0700 Subject: Rules vs. Lists In-Reply-To: <7616afbc0807011923u7573b978lc51e66a4d0f4bc48@mail.gmail.com> Message-ID: Rob, Here is where the concept of "functional equivalence" is very helpful. If two ways of describing a phenomenon give the same results, then they are functionally equivalent. That means that in essence, they are the same -- at least as far as results of calculation are concerned. (Considerations of processing limitations might show that one works better for a given hardware configuration than another, but that is a somewhat different issue.) Rules and lists are functionally equivalent. Logically speaking, they are the same.
When there are more rules than examples of their application, we call it a list-based system. When there are many more examples of the application of a rule than different rules, then we call it a rule-based system. That's just about different methods of arriving at the same result, and is strictly a processing issue. In terms of describing the language, rather than the speakers, however, there is no difference. It's all the same. In order to appreciate this, we have to be able to distinguish the structure of the language from the structure of the speaker. Best, --Aya On Wed, 2 Jul 2008, Rob Freeman wrote: > Hi David, > > I agree you can see the now extensive "usage-based" literature as a > historical use of "lists" to specify grammar. That is where this > thread came from after all. > > But there is surely something missing. As you say, your own papers go > back 20 or more years now. Why is there still argument? > > And the "rules" folks are right. A description of language which > ignores generalizations is clearly incomplete. We don't say only what > has been said before. > > And yet if you can abstract all the important information from lists > of examples using rules, why keep the lists? > > So the argument goes round and round. > > What I think is missing is an understanding of the power lists can > give you... to represent rules. That's why I asked how many rules you > could abstract from a given list. > > While we believe abstracting rules will always be an economy, there > will be people who argue lists are unnecessary. And quite rightly, if > it were possible to abstract everything about a list of examples using > a smaller set of rules, there would be no need to keep the examples. 
> The point I'm trying to make here is that when placed in contrast with > rules, what is important about lists might not be that you do not want > to generalize them (as rules), but that a list may be a more compact > representation for all the generalizations which can be made over it, > than the generalizations (rules) are themselves. > > You express it perfectly: > > "...if you have examples A, B, C and D and extract a schema or rule > capturing what is common to each pair, you have 6 potential rules, > (AB, AC, AD, BC, BD, and CD), so sure, in theory you could have more > rules than subcases." > > It is this power which I am wondering has never been used to specify grammar. > > You say you wouldn't "expect" to find this true in practice: "I > wouldn't expect to find more rules than examples". But has anyone > looked into it? It is possible in theory. Has anyone demonstrated it > is not the case for real language data? > > Consider for a minute it might be true. What would that mean for the > way we need to model language? > > I'll leave aside for a moment the other point about the importance of > the concept of entrenchment from Cognitive Grammar. I think the raw > point about the complexity, power, or number, of rules which can be > generalized from a list of examples is the more important for now. > > I'd like to see arguments against this. > > -Rob > > On Tue, Jul 1, 2008 at 10:49 PM, David Tuggy wrote: > > Thanks for the kind words, Rob. > > > > Right—I am not saying the number is totally unconstrained by anything, > > though I agree it will be very, very large. What constrains it is whether > > people learn it or not.
> > > > Counting these things is actually pretty iffy, because (1) it implies a > > discreteness or clear distinction of one from another that doesn't match the > > reality, (2) it depends on the level of delicacy with which one examines the > > phenomena, and (3) these things undoubtedly vary from one user to another > > and even for the same user from time to time. The convention of representing > > the generalization in a separate box from the subcase(s) is misleading in > > certain ways: any schema (generalization or rule) is immanent to all its > > subcases —i.e. all its specifications are also specifications of the > > subcases—, so a specific case cannot be activated without activating all the > > specifications of all the schemas above it. The relationship is as close to > > an identity relationship as you can get without full identity. (It is a, if > > not the, major meaning of the verb "is" in English: a dog *is* a mammal, > > running *is* bipedal locomotion, etc.) > > > > Langacker (2007: 433) says that "counting the senses of a lexical item is > > analogous to counting the peaks in a mountain range: how many there are > > depends on how salient they have to be before we count them; they appear > > discrete only if we ignore how they grade into one another at lower > > altitudes. The uncertainty often experienced in determining which particular > > sense an expression instantiates on a given occasion is thus to be expected. > > …" > > > > If you do a topographical map at altitude intervals of one inch you will have > > an awful lot of peaks. Perhaps even more rules than the number of examples > > they're abstracted from. But normally, no, I wouldn't expect to find more > > rules than examples, rather, fewer.
It generally takes at least two examples > > to clue us (more importantly as users than as linguists, but in either case) > > in to the need for a rule, and the supposition that our interlocutors will > > realize the need for that rule as well, and establish it (entrench it) in > > their minds. Of course, as you point out, if you have examples A, B, C and D > > and extract a schema or rule capturing what is common to each pair, you have > > 6 potential rules, (AB, AC, AD, BC, BD, and CD), so sure, in theory you > > could have more rules than subcases. Add in levels of schemas (rules > > capturing what's common to AB-CD, AB-AC, ...) and you can get plenty of > > rules. > > > > You wrote: In the light of [the possibility of more rules than > > subcases] lists seem a very powerful way to specify grammar to me. Not to > > mention explaining certain idiosyncratic and inconsistent aspects of > > grammar. In practice we have not used lists in this way. Any idea why not? > > > > I'm not sure what you are saying here. If you're saying that listing > > specific cases and ignoring or omitting rules is enough, I disagree. If you're > > saying that trying to specify grammar while ignoring specific cases won't > > work, I agree strongly. Listing specific cases is very important, as you > > say, for explaining idiosyncratic and inconsistent aspects of the grammar > > (as well as for other things, I would maintain.) I and many others have in > > practice used lists in this way. (Read any of the Artes of Nahuatl or other > > indigenous languages of Mexico from the XVI-XVII centuries: they have lots > > of lists used in this way.) So I'm confused by what you're saying. > > > > The reason that generalizations must be entrenched is that (the grammar of) > > a language consists of what has been entrenched in (learned by) the minds of > > its users.
If a linguist thinks of a rule, it has some place in his > > cognition, but unless it corresponds to something in the minds of the > > language's users, that is a relatively irrelevant fact. Cognitive Grammar > > was important in that it affirmed this fact and in other ways provided a > > framework in which the analysis was natural. > > > > --David Tuggy > > From david_tuggy at sil.org Thu Jul 3 00:11:21 2008 From: david_tuggy at sil.org (David Tuggy) Date: Wed, 2 Jul 2008 19:11:21 -0500 Subject: Rules vs. Lists In-Reply-To: <7616afbc0807011923u7573b978lc51e66a4d0f4bc48@mail.gmail.com> Message-ID: I'm afraid I'm not following you, Rob. Why is there still argument? Well, no reason *my* papers should have settled it, particularly, but from my viewpoint practically all the arguments I've seen have come because people still hanker after a simple choice of either rules or lists, but don't want to accept both. Why keep the lists? Because people (language users) do. And keep rules for the same reason. I don't know what you mean by "the power lists can give you … to represent rules". I don't think that "abstracting rules will always be an economy." I don't think we do it to be economical, but because we like to see what things that we know have in common, and (secondarily in some logical sense) we like to make other things like them. I don't believe we (or at least I) extract all the rules our data might support. I've had, and I expect all of us have had, repeatedly, the experience of having someone point out a generality and immediately sensing either (a) Of course, I already half knew that, or (b) Whoa! Really?? I would *never* have seen that! (But sure enough, it’s there in the data.) The (a) response fits for me with the idea that I do in fact have some rules in my head and can recognize at least some of them, even when they are less than fully conscious.
The (b) experience fits with the idea that not all possible generalizations are of that type: already there and only needing to be wakened into consciousness. If it should turn out that there are, entrenched in the minds of users of a language, more rules than pieces of data by some metric applied at some level, it wouldn't shake me up very badly. (By my lights the "data" themselves are schemas: *everything* that constitutes a language is a pattern, i.e. is schematic, is a generalization over specifics, is a rule.) If you're trying to argue that the rules are generated anew (all of them) whenever needed, I don't see any reason to think that is true, and several reasons for not thinking it. I don't see why "the complexity, power, or number of rules which can be generalized" is the only important point: to me the complexity, power or number of rules that actually are generalized, and entrenched as conventional in users' minds, is at least as important. It is only those rules, not the potential ones, that constitute the languages they speak. But as I say, I'm not sure I'm understanding you. --David Tuggy Rob Freeman wrote: > Hi David, > > I agree you can see the now extensive "usage-based" literature as a > historical use of "lists" to specify grammar. That is where this > thread came from after all. > > But there is surely something missing. As you say, your own papers go > back 20 or more years now. Why is there still argument? > > And the "rules" folks are right. A description of language which > ignores generalizations is clearly incomplete. We don't say only what > has been said before. > > And yet if you can abstract all the important information from lists > of examples using rules, why keep the lists? > > So the argument goes round and round. > > What I think is missing is an understanding of the power lists can > give you... to represent rules. That's why I asked how many rules you > could abstract from a given list. 
> > While we believe abstracting rules will always be an economy, there > will be people who argue lists are unnecessary. And quite rightly, if > it were possible to abstract everything about a list of examples using > a smaller set of rules, there would be no need to keep the examples. > > The point I'm trying to make here is that when placed in contrast with > rules, what is important about lists might not be that you do not want > to generalize them (as rules), but that a list may be a more compact > representation for all the generalizations which can be made over it, > than the generalizations (rules) are themselves. > > You express it perfectly: > > "...if you have examples A, B, C and D and extract a schema or rule > capturing what is common to each pair, you have 6 potential rules, > (AB, AC, AD, BC, BD, and CD), so sure, in theory you could have more > rules than subcases." > > It is this power which I am wondering has never been used to specify grammar. > > You say you wouldn't "expect" to find this true in practice: "I > wouldn't expect to find more rules than examples". But has anyone > looked into it? It is possible in theory. Has anyone demonstrated it > is not the case for real language data? > > Consider for a minute it might be true. What would that mean for the > way we need to model language? > > I'll leave aside for a moment the other point about the importance of > the concept of entrenchment from Cognitive Grammar. I think the raw > point about the complexity, power, or number, of rules which can be > generalized from a list of examples is the more important for now. > > I'd like to see arguments against this. > > -Rob > > From wilcox at unm.edu Thu Jul 3 00:20:27 2008 From: wilcox at unm.edu (Sherman Wilcox) Date: Wed, 2 Jul 2008 18:20:27 -0600 Subject: Rules vs. 
Lists In-Reply-To: <486C1929.60504@sil.org> Message-ID: On Jul 2, 2008, at 6:11 PM, David Tuggy wrote: > (By my lights the "data" themselves are schemas: *everything* that > constitutes a language is a pattern, i.e. is schematic, is a > generalization over specifics, is a rule.) Ah, I love this. -- Sherman Wilcox From lists at chaoticlanguage.com Thu Jul 3 00:29:31 2008 From: lists at chaoticlanguage.com (Rob Freeman) Date: Thu, 3 Jul 2008 08:29:31 +0800 Subject: Rules vs. Lists In-Reply-To: Message-ID: Aya, You seem to be implying there is already a large body of literature addressing this. Do you have any references for what you describe as "list-based" systems ("more rules than examples of their application"), in particular with reference to language? For the system to be non-trivial the rules should be implicit in the examples. I particularly want to think about what such a system would look like from the point of view of the examples (e.g. surely it would mean each example would be subject to interpretation in more than one way, a given interpretation dependent on context, etc.) -Rob On Wed, Jul 2, 2008 at 9:18 PM, A. Katz wrote: > Rob, > > Here is where the concept of "functional equivalence" is very helpful. If > two ways of describing a phenomenon give the same results, then they are > functionally equivalent. That means that in essence, they are the same -- > at least as far as results of calculation are concerned. (Considerations > of processing limitations might show that one works better for a given > hardware configuration than another, but that is a somewhat different > issue.) > > Rules and lists are functionally equivalent. Logically speaking, they are > the same. > > When there are more rules than examples of their application, we call it a > list-based system. When there are many more examples of the application of > a rule than different rules, then we call it a rule-based system. 
> > That's just about different methods of arriving at the same result, and is > strictly a processing issue. In terms of describing the language, rather > than the speakers, however, there is no difference. It's all the same. > > In order to appreciate this, we have to be able to distinguish the > structure of the language from the structure of the speaker. > > Best, > > > --Aya From dharv at mail.optusnet.com.au Thu Jul 3 01:57:34 2008 From: dharv at mail.optusnet.com.au (dharv at mail.optusnet.com.au) Date: Thu, 3 Jul 2008 11:57:34 +1000 Subject: Rules vs. Lists In-Reply-To: <486C1929.60504@sil.org> Message-ID: At 7:11 PM -0500 2/7/08, David Tuggy wrote: >If it should turn out that there are, entrenched in the minds of >users of a language, Do rules exist in the minds of language users or the minds of linguists? -- David Harvey 60 Gipps Street Drummoyne NSW 2047 Australia Tel: 61-2-9719-9170 From lists at chaoticlanguage.com Thu Jul 3 02:08:40 2008 From: lists at chaoticlanguage.com (Rob Freeman) Date: Thu, 3 Jul 2008 10:08:40 +0800 Subject: Rules vs. Lists In-Reply-To: <486C1929.60504@sil.org> Message-ID: Thanks for playing the good part in this, David. Very few people will even listen to a new (unentrenched? :-) argument. We are really in very broad agreement. It is just that I think there is something extra. On Thu, Jul 3, 2008 at 8:11 AM, David Tuggy wrote: > I'm afraid I'm not following you, Rob. Language is fallible, on that we can agree! > ...I don't see why "the complexity, power, or > number of rules which can be generalized" is the only important point: to me > the complexity, power or number of rules that actually are generalized, and > entrenched as conventional in users' minds, is at least as important. It is > only those rules, not the potential ones, that constitute the languages they > speak. I don't think "the complexity, power, or number of rules which can be generalized" is the only important point.
I am only focusing on it because I think it is an important point we have been missing. But since you are not contesting this core complexity point, perhaps I should look at the importance you attach to entrenchment. I don't really want to attack the importance of entrenchment. Undoubtedly it is an important mechanism. I just don't think it is the only one. As you say "I expect all of us have had, repeatedly, the experience of having someone point out a generality and immediately sensing either (a) Of course, I already half knew that, or (b) Whoa! Really?? I would *never* have seen that! (But sure enough, it's there in the data.)" It is these experiences I am talking about. I agree that once a generality becomes entrenched through repeated observation, especially when it assumes a "negotiated" symbolic value in a community, then people can communicate using it. It is just that I also think people can communicate by pointing out generalities which are not yet entrenched, might never have been observed at all, and which you might never have suspected were in the data. Yet as soon as such a generality is pointed out, it is immediately "meaningful" to you ("I already half knew that"). In short, I think entrenchment is an important mechanism, but we need to pay attention to all the unentrenched generalities implicit in a language also. Grant a vast stock of generalities implicit in the examples of a language (more than there are examples), with those generalities immediately meaningful should we happen to observe them (though of course observing them all is impossible in practice), and we have what I am suggesting is missing from our current models of language. (Especially models which contrast rules vs. lists.) -Rob From david_tuggy at sil.org Thu Jul 3 02:30:23 2008 From: david_tuggy at sil.org (David Tuggy) Date: Wed, 2 Jul 2008 21:30:23 -0500 Subject: Rules vs. Lists In-Reply-To: Message-ID: In the minds of language users, including linguists.
Hopefully what linguists consciously analyze and posit as rules will be such as to parallel what is in users' minds. Otherwise they are less interesting objects, at least for those who had hopes of their illuminating what language is. Certainly one cannot assume that every rule a linguist has posited has an analogue in anyone else’s mind, much less that such an analogue is used in actual language processing. But many, I reckon, do and are. --David Tuggy dharv at mail.optusnet.com.au wrote: > At 7:11 PM -0500 2/7/08, David Tuggy wrote: > >> If it should turn out that there are, entrenched in the minds of >> users of a language, > > Do rules exist in the minds of language users or the minds of linguists? From lists at chaoticlanguage.com Thu Jul 3 06:54:51 2008 From: lists at chaoticlanguage.com (Rob Freeman) Date: Thu, 3 Jul 2008 14:54:51 +0800 Subject: Rules vs. Lists In-Reply-To: <486C3DE0.4080409@sil.org> Message-ID: On Thu, Jul 3, 2008 at 10:48 AM, David Tuggy wrote: > > **Yes. Such "tumbling to" moments are actually, I am sure, quite common > during the time when kids are learning 10 new words a day or however many it > is that they absorb. I certainly can attest to them when learning a > second/third/etc. language. But (a) They were not part of my language until > I had them, and (b) once I'd had them they are on the way to being > entrenched as conventional. When I encounter the same, or similar data again > I will recognize it. I think these "tumbling to" (ah-ha?) moments happen, on some level, every time we say something new. Indeed I think they are a model for how we say new things (to answer your question "Why?") Once something new has been said, it is on its way to being conventionalized. Eventually the original "tumbling to" meaning may become ossified and even replaced. I agree this conventionalization aspect has been well modeled by CG. It is also important, but is already being done well. 
I won't question what CG tells us about the social, conventionalized character of language. I'm only suggesting people consider this "tumbling to" aspect of language. If it occurs, we should ask how many such new generalizations might be made given a certain corpus of language examples, and so on. What it seeks to model are things which can be said. Whether something which can be said only becomes "part of my language" once I have said it, is surely only a matter of definition. Just to rewind and recap a little. The question at issue here is how many generalizations/rules can be made about a list of examples. In particular whether there can be more, many more than there are examples. And the implications this might have for what can be said in a language. -Rob From amnfn at well.com Thu Jul 3 13:32:03 2008 From: amnfn at well.com (A. Katz) Date: Thu, 3 Jul 2008 06:32:03 -0700 Subject: Rules vs. Lists In-Reply-To: <7616afbc0807021729o3ac722b7n8113af0a0dbfe557@mail.gmail.com> Message-ID: Rob, No, I am not implying there is a vast body of literature on this topic. My assurance comes from logic. A list is a set of rules with a single example of the application of each rule. When we speak of a rule-based system, we mean one in which each rule has many examples of its application. When we speak of a list-based system, we speak of a system where there are more rules than instances where they are applied. For instance, the multiplication table can be described either way. We can memorize each entry and describe it as a list. Or we can give a single rule, x times y is x plus x, y times. They are functionally equivalent. You can get the right answer either way. But when we make a separate rule for each instance, we call that listing. When we allow a single rule to cover many instances, we call that rule-based. It doesn't take any previous literature to determine this is so. It is so by definition. It's a tautology.
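Aya's multiplication-table contrast can be sketched in code. A minimal illustration of the functional equivalence (not from the original thread; the function names are mine):

```python
# List-based: a separate "rule" (table entry) for each instance.
times_table = {(x, y): x * y for x in range(1, 13) for y in range(1, 13)}

def multiply_by_list(x, y):
    # One lookup per memorized entry.
    return times_table[(x, y)]

# Rule-based: a single rule covering every instance
# ("x times y is x plus x, y times").
def multiply_by_rule(x, y):
    total = 0
    for _ in range(y):
        total += x
    return total

# Functional equivalence: both give the same answers; they differ only
# in processing profile (memory for the list, computation for the rule).
assert all(multiply_by_list(x, y) == multiply_by_rule(x, y)
           for x in range(1, 13) for y in range(1, 13))
```

The trade-off Aya mentions shows up directly: the list answers in one lookup but stores 144 entries, while the rule stores nothing but loops y times.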
Best, --Aya On Thu, 3 Jul 2008, Rob Freeman wrote: > Aya, > > You seem to be implying there is already a large body of literature > addressing this. > > Do you have any references for what you describe as "list-based" > systems ("more rules than examples of their application"), in > particular with reference to language? > > For the system to be non-trivial the rules should be implicit in the examples. > > I particularly want to think about what such a system would look like > from the point of view of the examples (e.g. surely it would mean each > example would be subject to interpretation in more than one way, a > given interpretation dependent on context, etc.) > > -Rob > > On Wed, Jul 2, 2008 at 9:18 PM, A. Katz wrote: > > Rob, > > > > Here is where the concept of "functional equivalence" is very helpful. If > > two ways of describing a phenomenon give the same results, then they are > > functionally equivalent. That means that in essence, they are the same -- > > at least as far as results of calculation are concerned. (Considerations > > of processing limitations might show that one works better for a given > > hardware configuration than another, but that is a somewhat different > > issue.) > > > > Rules and lists are functionally equivalent. Logically speaking, they are > > the same. > > > > When there are more rules than examples of their application, we call it a > > list-based system. When there are many more examples of the application of > > a rule than different rules, then we call it a rule-based system. > > > > That's just about different methods of arriving at the same result, and is > > strictly a processing issue. In terms of describing the language, rather > > than the speakers, however, there is no difference. It's all the same. > > > > In order to appreciate this, we have to be able to distinguish the > > structure of the language from the structure of the speaker.
> > > > Best, > > > > > > --Aya > > From amnfn at well.com Thu Jul 3 13:59:30 2008 From: amnfn at well.com (A. Katz) Date: Thu, 3 Jul 2008 06:59:30 -0700 Subject: Rules vs. Lists In-Reply-To: <7616afbc0807022354t192b0ab5y3d47c4a3a2e1bf3d@mail.gmail.com> Message-ID: Concerning memory and entrenchment, I think that the ability to memorize a list is related to the ability to derive the list in the first place. They are not as separate psychologically as some discussions among linguists seem to assume. It is easier to memorize what you understand, because memory isn't completely passive. People who are talented at subjects such as math, music and languages are often complimented on their good memories, because they are able to come up with the individual items on a list faster. (The list could be a series of numbers as in the multiplication table, or a series of notes, as in a musical composition, or a series of words -- such as the words to a poem). People who are less talented at these tasks attempt passive memory work and fail. Then they attribute great memory ability to those who surpass them at these tasks. But in fact, those who do well are the ones who are able to re-derive any item they may have forgotten in a split second. To know the multiplication table well does involve memory, but is helped by the ability to instantly re-derive any entry one may have forgotten. Great musicians do memorize the notes to a composition, but they are greatly aided by their ability to anticipate what comes next. They can instantly recompose any phrase they may have forgotten, because they understand the regularity behind the composition. When we memorize a poem written by someone else, we often rely on metrical rules and rhyme schemes to recompose any lines we may have forgotten. 
Even in ordinary conversation, when people employ idioms, cliches and set phrases, those who can rederive them, who understand how they are put together, are ultimately more successful at employing them to greater effect. Talking about greater and lesser abilities in language use by native speakers has become taboo among linguists -- but not among people who teach literature and foreign languages. My observations here come from my own experiences with language use and literature and from experiences as a teacher. I suspect that they are echoed by the experiences of others, but it's not likely that you will find articles written about this by linguists. Best, --Aya On Thu, 3 Jul 2008, Rob Freeman wrote: > On Thu, Jul 3, 2008 at 10:48 AM, David Tuggy wrote: > > > > **Yes. Such "tumbling to" moments are actually, I am sure, quite common > > during the time when kids are learning 10 new words a day or however many it > > is that they absorb. I certainly can attest to them when learning a > > second/third/etc. language. But (a) They were not part of my language until > > I had them, and (b) once I'd had them they are on the way to being > > entrenched as conventional. When I encounter the same, or similar data again > > I will recognize it. > > I think these "tumbling to" (ah-ha?) moments happen, on some level, > every time we say something new. > > Indeed I think they are a model for how we say new things (to answer > your question "Why?") > > Once something new has been said, it is on its way to being > conventionalized. Eventually the original "tumbling to" meaning may > become ossified and even replaced. I agree this conventionalization > aspect has been well modeled by CG. It is also important, but is > already being done well. > > I won't question what CG tells us about the social, conventionalized > character of language. I'm only suggesting people consider this > "tumbling to" aspect to language. 
If it occurs, how many such new > generalizations might be made given a certain corpus of language > examples etc. > > What it seeks to model are things which can be said. Whether something > which can be said, only becomes "part of my language" once I have said > it, is surely only a matter of definition. > > Just to rewind and recap a little. The question at issue here is how > many generalizations/rules can be made about a list of examples. In > particular whether there can be more, many more than there are > examples. And the implications this might have for what can be said in > a language. > > -Rob > > From lists at chaoticlanguage.com Thu Jul 3 22:31:23 2008 From: lists at chaoticlanguage.com (Rob Freeman) Date: Fri, 4 Jul 2008 06:31:23 +0800 Subject: Rules vs. Lists In-Reply-To: Message-ID: On Thu, Jul 3, 2008 at 9:32 PM, A. Katz wrote: > ...When > we speak of list-based system, we speak of a system where there are more > rules than instances where they are applied. Can you give me even one example of such a system, Aya? For the system to be non-trivial the rules should be implicit in the examples. -Rob From dlevere at ilstu.edu Thu Jul 3 23:53:46 2008 From: dlevere at ilstu.edu (Daniel Everett) Date: Thu, 3 Jul 2008 18:53:46 -0500 Subject: Rules vs. Lists In-Reply-To: <7616afbc0807031531u5fa022bcic84a1648e888bcc5@mail.gmail.com> Message-ID: I just 'published' a short bit on this as part of a debate on EDGE, responding to work by Chris Anderson. The discussion as a whole, not so much my reply, might interest FUNKNET readers. -- Dan http://www.edge.org/discourse/the_end_of_theory.html -------------------------------------------------------------- This message was sent using Illinois State University Webmail. From amnfn at well.com Fri Jul 4 02:42:25 2008 From: amnfn at well.com (A. Katz) Date: Thu, 3 Jul 2008 19:42:25 -0700 Subject: Rules vs. 
Lists In-Reply-To: <7616afbc0807031531u5fa022bcic84a1648e888bcc5@mail.gmail.com> Message-ID: On Fri, 4 Jul 2008, Rob Freeman wrote: > On Thu, Jul 3, 2008 at 9:32 PM, A. Katz wrote: > > ...When > > we speak of list-based system, we speak of a system where there are more > > rules than instances where they are applied. > > Can you give me even one example of such a system, Aya? I think I already mentioned the multiplication tables. A computer program (or a human mind) that handles the multiplication tables by listing the answers in a table is a list-based system. A computer program that uses a subroutine to solve the problems with variables for x and y (where x*y is being calculated) is a rule-based system -- and the same goes for a human mind that does this. Both systems are functionally equivalent and can give correct results. Each serves different processing constraints -- memory versus speed of calculating. If you want to be shown situations unlike the multiplication table where the data being processed tends to require one or the other type of system, think about the rules for spelling English versus the rules for spelling Spanish. The Spanish spelling system lends itself to rules, as it is highly regular. The English spelling system lends itself to lists, as it is highly irregular. It's not that English spelling has no rules -- it's just that there are so darned many of them, that for the most frequently used words it's almost as if there is a different rule for every word. Not quite, but almost. Best, --Aya > > For the system to be non-trivial the rules should be implicit in the examples. > > -Rob > > From fgk at ling.helsinki.fi Fri Jul 4 07:32:31 2008 From: fgk at ling.helsinki.fi (fgk) Date: Fri, 4 Jul 2008 10:32:31 +0300 Subject: Rules vs. Lists In-Reply-To: Message-ID: As for the purported lack of linguistic research on greater and lesser abilities in language use by native speakers, I recommend consulting the book "Understanding Complex Sentences.
Native Speaker Variation in Syntactic Competence" by Ngoni Chipere (Palgrave Macmillan, New York 2003). Best, Fred Karlsson From ab.stenstrom at telia.com Fri Jul 4 09:01:25 2008 From: ab.stenstrom at telia.com (Anna-Brita Stenström) Date: Fri, 4 Jul 2008 11:01:25 +0200 Subject: unsubscribe Message-ID: Hello, I've been trying in vain lots of times to unsubscribe. Could you help me please? Best, Anna-Brita Stenström ab.stenstrom at telia.com From lists at chaoticlanguage.com Fri Jul 4 10:30:10 2008 From: lists at chaoticlanguage.com (Rob Freeman) Date: Fri, 4 Jul 2008 18:30:10 +0800 Subject: Rules vs. Lists In-Reply-To: <20080703185346.eueqyffmog80cwog@isuwebmail.ilstu.edu> Message-ID: That's actually a pretty good reference Dan. Thanks. For my taste there's a little too much emphasis on the practical efficacy of the approach, and not enough on why it might be so, but the general idea is along the same lines. Do you know of any discussions of why this might be the case: why for some systems we might need to eschew theory and work directly with examples? My own take is that for some systems there may be more rules implicit in the examples than there are examples themselves. So, not so much "The End of Theory" as the birth of the theory that there can be lots more theories buried in a set of data than we've ever imagined we needed to look for before. But people really don't like this kind of meta-theory, so I'm trying to keep it as concrete as possible. That's why I'm focusing on the practical problem of counting the number of rules you can abstract from a given set of examples. If it turns out there are more rules than examples, then that is something concrete we can deal with.
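Rob's counting problem can be made concrete with a small calculation. A sketch (the C(4,2) = 6 figure matches David Tuggy's A/B/C/D example quoted later in the thread; the rest is illustrative):

```python
from itertools import combinations

# With examples A, B, C and D, a schema abstracted from each pair
# (AB, AC, AD, BC, BD, CD) already yields 6 rules from 4 examples.
examples = ["A", "B", "C", "D"]
pair_rules = list(combinations(examples, 2))
assert len(pair_rules) == 6  # more rules than examples

# In general there are N*(N-1)/2 pairwise schemas, which outgrows N
# as soon as N > 3 -- and that is before counting higher-level schemas
# over schemas (AB-CD, AB-AC, ...).
for n in (2, 3, 4, 5, 10, 100):
    pairs = n * (n - 1) // 2
    print(n, pairs, pairs > n)
```

Whether language users actually abstract anything like this many rules is of course the empirical question the thread leaves open; the count only shows the ceiling grows quadratically while the examples grow linearly.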
-Rob On Fri, Jul 4, 2008 at 7:53 AM, wrote: > > > I just 'published' a short bit on this as part of a debate on EDGE, > responding to work by Chris Anderson. > > The discussion as a whole, not so much my reply, might interest FUNKNET > readers. > > -- Dan > > http://www.edge.org/discourse/the_end_of_theory.html > > > -------------------------------------------------------------- > This message was sent using Illinois State University Webmail. From amnfn at well.com Fri Jul 4 12:49:01 2008 From: amnfn at well.com (A. Katz) Date: Fri, 4 Jul 2008 05:49:01 -0700 Subject: Rules vs. Lists In-Reply-To: <486DD20F.3000802@ling.helsinki.fi> Message-ID: Yes, thank you for pointing that out. I did in fact know that Ngoni Chipere had done some research on that in the late 90's, but I lost touch, and I did not know there was a book out. I will definitely get the book. Best, --Aya Katz On Fri, 4 Jul 2008, fgk wrote: > As for the purported lack of linguistic research on greater and lesser > abilities in > language use by native speakers, I recommend consulting the book > "Understanding Complex Sentences. Native Speaker Variation in > Syntactic Competence" by Ngoni Chipere (Palgrave Macmillan, New York 2003). > Best, > Fred Karlsson > > From lists at chaoticlanguage.com Fri Jul 4 23:52:37 2008 From: lists at chaoticlanguage.com (Rob Freeman) Date: Sat, 5 Jul 2008 07:52:37 +0800 Subject: Rules vs. Lists In-Reply-To: Message-ID: On Fri, Jul 4, 2008 at 10:42 AM, A. Katz wrote: > > On Fri, 4 Jul 2008, Rob Freeman wrote: > >> On Thu, Jul 3, 2008 at 9:32 PM, A. Katz wrote: >> > ...When >> > we speak of list-based system, we speak of a system where there are more >> > rules than instances where they are applied. >> >> Can you give me even one example of such a system, Aya? > > ... English spelling ... it's almost as if there is a different rule for every word. Not > quite, but almost. 
I'm grateful you are thinking about this Aya, and it is indeed what I am suggesting that natural language is of this form, but English spelling, as you say, is probably only almost this way, but not quite. I'm still not sure you see what I mean by a system which has more rules implicit in the examples than there are examples themselves. Can you show me a system where there are more rules implicit in the examples than there are examples themselves, and explain why it must be so? -Rob From amnfn at well.com Sat Jul 5 13:13:52 2008 From: amnfn at well.com (A. Katz) Date: Sat, 5 Jul 2008 06:13:52 -0700 Subject: Rules vs. Lists In-Reply-To: <7616afbc0807041652x7f27364fv4ea7e15280cbf66d@mail.gmail.com> Message-ID: Rob, MORE rules implicit than examples? That's a stretch, as a list of unitary items has at most as many rules as examples. However, I suppose the English derivational system might provide something like that. If English lexemes are listed one at a time, and most English speakers are unaware that they have subparts (and I've done research on this -- monolingual English speakers are amazingly imperceptive about derivations that are obvious to the rest of us), and if you then add the derivational rules that might account for some of these words, then you have a system where each item is a rule in itself, and also some rules for deriving the items, so there are more rules than examples. But... this is only so if you try to conflate the derivational insensitivity of the average English speaker with the patterns implicit in the words. So, in fact, this is not different from the system where a mathematically innocent child memorizes a multiplication table of whose derivation he is completely unaware. Each item listed in the table is a rule, and as he grows older and wiser, he may discover that there are other rules whereby the table could be derived. 
It's kind of a cheat, because we are listing from the point of view of more than one speaker, so that two systems overlap. But because the knowledge of speakers can evolve over time, such an overlap is not psychologically improbable. Best, --Aya On Sat, 5 Jul 2008, Rob Freeman wrote: > On Fri, Jul 4, 2008 at 10:42 AM, A. Katz wrote: > > > > On Fri, 4 Jul 2008, Rob Freeman wrote: > > > >> On Thu, Jul 3, 2008 at 9:32 PM, A. Katz wrote: > >> > ...When > >> > we speak of list-based system, we speak of a system where there are more > >> > rules than instances where they are applied. > >> > >> Can you give me even one example of such a system, Aya? > > > > ... English spelling ... it's almost as if there is a different rule for every word. Not > > quite, but almost. > > I'm grateful you are thinking about this Aya, and it is indeed what I > am suggesting that natural language is of this form, but English > spelling, as you say, is probably only almost this way, but not quite. > > I'm still not sure you see what I mean by a system which has more > rules implicit in the examples than there are examples themselves. > > Can you show me a system where there are more rules implicit in the > examples than there are examples themselves, and explain why it must > be so? > > -Rob > > From vch468d at tninet.se Sat Jul 5 14:29:30 2008 From: vch468d at tninet.se (Jouni Maho) Date: Sat, 5 Jul 2008 16:29:30 +0200 Subject: Rules vs. Lists Message-ID: May a lurker butt in with a little thought experiment? Re Rob Freeman's: > > Can you show me a system where there are more rules implicit in the > examples than there are examples themselves, and explain why it must > be so? Assume the following (complete) lexicon of a hypothetical language: berama bilama butaba metama tilaba And the rules: C > b r m l t V > e a u i Root > CVCV V2 > a R+ba > agent R+ma > causative That is, 5 list items, 6 rules. This assumes, of course, that the types of rules can be of any "kind", i.e. 
morphological, phonological, etc. Or does the question suppose that there should be restrictions on type of rules (only morphological, only phonological, etc.)? I'm not sure how easy this would be if the lexicon's size was considerably larger, but at least it's possible to devise a less-items-more-rules system as a thought experiment. I have no idea why it should be so, but it's certainly possible. By the way, would a vowel-consonant inventory (list) with its accompanying rules (phonotax, assimilation, etc.) count as a valid less-items-more-rules system? --- jouni maho From amnfn at well.com Sat Jul 5 15:06:48 2008 From: amnfn at well.com (A. Katz) Date: Sat, 5 Jul 2008 08:06:48 -0700 Subject: Rules vs. Lists In-Reply-To: <486F854A.8050805@tninet.se> Message-ID: I assume that "the system" under consideration would be all inclusive of every item and every level, so this seems fair, although it's Rob who is leading this discussion on more rules than examples. Jouni Maho, you are implying there are roots, so as well as the lexicon, there would be a list of roots, presumably, and these would add to the number of rules. If there are roots, then presumably each root could appear with each suffix (unless there's an additional rule that says that they can't) and there should be more lexemes than you listed. The question that seems more interesting to me is: could there ever be a human language with only five lexemes? If there could, why haven't we found one like that? Language is an information-bearing code. The number of contrasts helps determine the amount of information transmitted. If there are fewer phonemes, then words have to be longer.
Languages of the world deploy the same basic phonological inventory inherent in our physiology in different ways in order to transmit about the same amount of information per time unit. Every language codes for a certain amount of redundancy in order to deal with noise in the signal. Redundancy could be viewed as adding extra rules that don't directly help with transmission of information. Is that what you are getting at, Rob? Best, --Aya Katz On Sat, 5 Jul 2008, Jouni Maho wrote: > May a lurker butt in with a little thought experiment? > > Re Rob Freeman's: > > > > Can you show me a system where there are more rules implicit in the > > examples than there are examples themselves, and explain why it must > > be so? > > Assume the following (complete) lexicon of a hypothetical language: > > berama > bilama > butaba > metama > tilaba > > And the rules: > > C > b r m l t > V > e a u i > Root > CVCV > V2 > a > R+ba > agent > R+ma > causative > > That is, 5 list items, 6 rules. This assumes, of course, that the types > of rules can be of any "kind", i.e. morphological, phonological, etc. Or > does the question suppose that there should be restrictions on type of > rules (only morphological, only phonological, etc.)? > > I'm not sure how easy this would be if the lexicon's size was > considerably larger, but at least it's possible to devise a > less-items-more-rules system as a thought experiment. I have no idea why > it should be so, but it's certainly pissoble. > > By the way, would a vowel-consonant inventory (list) with it's > accompanying rules (phonotax, assimilation, etc.) count as a valid > less-items-more-rules system? > > --- > jouni maho > > From lists at chaoticlanguage.com Sun Jul 6 06:16:39 2008 From: lists at chaoticlanguage.com (Rob Freeman) Date: Sun, 6 Jul 2008 14:16:39 +0800 Subject: Rules vs. Lists In-Reply-To: Message-ID: On Sat, Jul 5, 2008 at 9:13 PM, A. Katz wrote: > > MORE rules implicit than examples? 
That's a stretch, as a list of unitary > items has at most as many rules as examples. Thanks. I wanted you to see that. It is something different from what we have considered before. Not impossible, just not something which has been considered, to my knowledge. > However, I suppose the English derivational system might provide something > like that. If English lexemes are listed one at a time, and most English > speakers are unaware that they have subparts (and I've done research on > this -- monolingual English speakers are amazingly imperceptive about > derivations that are obvious to the rest of us), and if you then add the > derivational rules that might account for some of these words, then you > have a system where each item is a rule in itself, and also some rules for > deriving the items, so there are more rules than examples. But... this is > only so if you try to conflate the derivational insensitivity of the > average English speaker with the patterns implicit in the words. > > ... > > It's kind of a cheat, because we are listing from the point of view of > more than one speaker, so that two systems overlap. But because the > knowledge of speakers can evolve over time, such an overlap is not > psychologically improbable. Yes, if you regard each example as a rule in itself, and yet have productive rules over them, then almost trivially you will have more rules than examples. I don't think we need conflate speakers to do this. Most of us will accept that there is something unique about almost every utterance, while finding productive regularities over them. But it is not hard to find an argument that even the number of productive rules might be greater than the number of examples. As David pointed out in an earlier message: "...if you have examples A, B, C and D and extract a schema or rule capturing what is common to each pair, you have 6 potential rules, (AB, AC, AD, BC, BD, and CD), so sure, in theory you could have more rules than subcases. 
Add in levels of schemas (rules capturing what's common to AB-CD, AB-AC, ...) and you can get plenty of rules." The question is do we? -Rob From lists at chaoticlanguage.com Sun Jul 6 06:18:03 2008 From: lists at chaoticlanguage.com (Rob Freeman) Date: Sun, 6 Jul 2008 14:18:03 +0800 Subject: Rules vs. Lists In-Reply-To: <486F854A.8050805@tninet.se> Message-ID: Right Jouni. It is somewhat laborious to factor them out, but I think we can find lots of rules once we start to look for them, even if we restrict ourselves to one kind: morphological, phonological etc. Whether a vowel-consonant inventory would count as this kind of system is another question. Phonemes are not really examples. They are classes. The right question would be to compare the number of phonemes and phonotactic rules with the number of utterances. In this context I refer you to suggestions such as Syd Lamb's that we need to relax the "linearity requirement" for phonemes in combination -- an issue which goes right back to the core of the dispute between structural and functional schools in linguistics. As to why it should be so, why we should want to have more rules than examples, it is perhaps not immediately obvious. But actually such a system gives us lots of power. Power we have simply been throwing away because we have assumed fewer rules than examples. If we can constantly draw new rules out of the examples we can use all those extra rules to parametrize our system. All we need to do is look for them. -Rob On Sat, Jul 5, 2008 at 10:29 PM, Jouni Maho wrote: > May a lurker butt in with a little thought experiment? > > Re Rob Freeman's: >> >> Can you show me a system where there are more rules implicit in the >> examples than there are examples themselves, and explain why it must >> be so?
> > Assume the following (complete) lexicon of a hypothetical language: > > berama > bilama > butaba > metama > tilaba > > And the rules: > > C > b r m l t > V > e a u i > Root > CVCV > V2 > a > R+ba > agent > R+ma > causative > > That is, 5 list items, 6 rules. This assumes, of course, that the types of > rules can be of any "kind", i.e. morphological, phonological, etc. Or does > the question suppose that there should be restrictions on type of rules > (only morphological, only phonological, etc.)? > > I'm not sure how easy this would be if the lexicon's size was considerably > larger, but at least it's possible to devise a less-items-more-rules system > as a thought experiment. I have no idea why it should be so, but it's > certainly pissoble. > > By the way, would a vowel-consonant inventory (list) with it's accompanying > rules (phonotax, assimilation, etc.) count as a valid less-items-more-rules > system? > > --- > jouni maho From lists at chaoticlanguage.com Sun Jul 6 06:20:44 2008 From: lists at chaoticlanguage.com (Rob Freeman) Date: Sun, 6 Jul 2008 14:20:44 +0800 Subject: Rules vs. Lists In-Reply-To: Message-ID: Aya, We have to be careful what we regard as "examples". As I said to Jouni phonemes should be thought of as classes not examples. Similarly "roots", "lexemes", "morphemes" etc. When speculating there are more rules than examples the real question is not how many ways can you combine X no. of lexemes, but how many lexemes can you abstract from Y utterances. And before you do that you have to define what you mean by "lexeme". What we have at root are a number of utterances with a certain amount of variation between them. You need that variation to carry a signal, as you say. But the number of lexemes you allocate will depend on where you slice that variation. Which slice of variation you allocate to lexemes, which to phonemes etc. To an extent it will be arbitrary. 
The distinction between a phoneme and a lexeme is not so clear in, for instance, tone languages. That said, if we decide the slice of variation we allocate to lexemes corresponds broadly to conventionalized meanings, it seems reasonable to me that there will be a fairly consistent number across cultures (perhaps tending a bit higher in highly conservative cultures.) You could certainly get by with only five. Computers use only two. But I doubt there will ever be a culture sufficiently innovative that it will want to think of new things to say quite that often! So the question of how many lexemes is largely one of how we choose to label the regularities we find. What I am suggesting is more basic than that. I'm suggesting that maybe when we break down utterances we have more regularities than we have thought to look for before, however we choose to label them. I don't think it is a question of redundancy, though all that extra information could be used to make the signal more robust. -Rob On Sat, Jul 5, 2008 at 11:06 PM, A. Katz wrote: > I assume that "the system" under consideration would be all inclusive of > every item and every level, so this seems fair, although it's Rob that is > leading this discussion on more rules than examples. > > Jouni Maho, you are implying there are roots, so as well as the lexicon, > there would be a list of roots, presumably, and these would add to the number > of rules. > > If there are roots, then presumably each root could appear with each > suffix, (unless there's an additional rule that says that they can't) and > there should be more lexemes than you listed. > > > The question that seems more interesting to me is: could there ever be a > human language with only five lexemes? If there could, why haven't we > found one like that? > > Language is an information bearing code. The number of contrasts helps > determine the amount of information transmitted. If there are fewer > phonemes, then words have to be longer. 
If there are more phonemes, the > same information can be transmitted in shorter words. More > grammatical syntax allows for the same information to be coded in > shorter sentences, in terms of word count. Less grammatical > morphology requires more words per sentence. It all evens out, based > on a very simple calculation. Languages of the world deploy the same > basic phonological inventory inherent in our physiology in different > ways in order to transmit about the same amount of information per > time unit. Every language codes for a certain amount of redundancy in order to > deal with noise in the signal. > > Redundancy could be viewed as adding extra rules that don't directly help > with transmission of information. Is that what you are getting at, Rob? > > Best, > > --Aya Katz From vch468d at tninet.se Sun Jul 6 13:04:34 2008 From: vch468d at tninet.se (Jouni Maho) Date: Sun, 6 Jul 2008 15:04:34 +0200 Subject: Rules vs. Lists Message-ID: Rob Freeman wrote: > > We have to be careful what we regard as "examples". As I > said to Jouni phonemes should be thought of as classes not > examples. Similarly "roots", "lexemes", "morphemes" etc. Well, you have to convince me why example-class is an important distinction to make here. I'm sorry if I seem to be running off on a tangent, but I understood the more-rules-less-examples thing as being about lists of items and the rules that apply to them, but perhaps you're actually talking about something else. Still, let me try to retract a bit, just to try to clarify to (for?) myself. When a language user extracts rules (generalisations) from a series of utterances, that assumes that the rule-extractor has analysed the utterances into an abstract list, so that each uttered "Hi!" is analysed as belonging to a set. Each generalisation (phones to a phoneme, many uttered "Hi!" to one abstract 'Hi!') is a rule, of course, but the abstract entities /a/ and "Hi!" themselves become units of a list on which other rules can apply.
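[Editorial aside, not part of the original thread: the "very simple calculation" Aya appeals to in her message above can be made explicit. Assume, purely for illustration, a uniform fixed-length code with no phonotactic restrictions: to keep V words distinct with P phonemes you need the smallest word length L such that P to the power L is at least V. The 40,000-word vocabulary below is an arbitrary illustrative figure.]

```python
def min_word_length(vocabulary_size, inventory_size):
    """Smallest word length L such that inventory_size ** L can keep
    vocabulary_size words distinct (uniform code, no phonotactics)."""
    length = 1
    while inventory_size ** length < vocabulary_size:
        length += 1
    return length

# A 40,000-word vocabulary under phoneme inventories of various sizes:
for phonemes in (2, 5, 12, 30, 60):
    print(phonemes, "phonemes ->", min_word_length(40_000, phonemes), "segments")
```

On this toy model two phonemes force 16-segment words while sixty phonemes allow 3-segment words, so smaller inventories mean longer words, exactly the trade-off described, before any redundancy is layered on top.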
Hence also the rules themselves become members of lists. (Perhaps my earlier hypothetical example was not 5 items plus 6 rules, but rather 11 items including 6 rules.) Anyway, is "example" equal to the member of an abstract list ("Hi!" counts as one) or each uttered word ("Hi!" counts as many)? As a language user I make generalisations on various levels of abstraction. I can establish lexemes and phonemes from utterances, but I can also generalise syntactic and morphological rules that apply to only certain classes of words or phonemes (which requires that I have made the example>class analysis first). So, does the distinction example-class really matter here? --- jouni maho From amnfn at well.com Sun Jul 6 16:20:29 2008 From: amnfn at well.com (A. Katz) Date: Sun, 6 Jul 2008 09:20:29 -0700 Subject: Rules vs. Lists In-Reply-To: <7616afbc0807052320s49d5772bsf85d07f0a3624f6d@mail.gmail.com> Message-ID: Rob Freeman wrote: >What we have at root are a number of utterances with a certain amount >of variation between them. You need that variation to carry a signal, >as you say. But the number of lexemes you allocate will depend on >where you slice that variation. Which slice of variation you allocate >to lexemes, which to phonemes etc. To an extent it will be arbitrary. >The distinction between a phoneme and a lexeme is not so clear in, for >instance, tone languages. Why do you think the distinction between a phoneme and a lexeme is not so clear in tone languages? Isn't tone just one attribute out of many that a vowel can have? >That said, if we decide the slice of variation we allocate to lexemes >corresponds broadly to conventionalized meanings, it seems reasonable >to me that there will be a fairly consistent number across cultures >(perhaps tending a bit higher in highly conservative cultures.) You >could certainly get by with only five. Computers use only two.
But I >doubt there will ever be a culture sufficiently innovative that it >will want to think of new things to say quite that often! The fact that we can productively encode the information available in any utterance of any language using a binary code as in a computer does not mean that there are any human languages that actually employ a binary code of contrasts. The fact that we favor the decimal system over binary in our numerical calculations has something to do with the limitations of our working memory. For the same reason, there are no languages with only two phonemes, (much less just two morphemes or two lexemes or two clauses). Human language doesn't work that way in real time due to processing constraints. Best, --Aya From lists at chaoticlanguage.com Mon Jul 7 04:55:05 2008 From: lists at chaoticlanguage.com (Rob Freeman) Date: Mon, 7 Jul 2008 12:55:05 +0800 Subject: Rules vs. Lists In-Reply-To: Message-ID: Aya, You seem to have taken too seriously my little joke that no society would be sufficiently innovative to want only two conventionalized forms of speech. I'm sure there are all kinds of cognitive restraints which favor shorter sequences of more symbols. I always remember the Japanese colleague who said despairingly of English "The letters are easy, but there are just so many of them all together" :-) It would be a fun conversation to talk about what cognitive restraint fixed our common arithmetic base at exactly the most common number of fingers. Equally I would like to see how you allocate tone to a vowel in Chinese without first knowing the word. But I fear that all such argument about one systematization or another might take us away from the point I am trying to make here. The point I want to focus on is that, whatever your classification of elements, it may be possible to find more rules over combinations than there are combinations in the first place. -Rob On Mon, Jul 7, 2008 at 12:20 AM, A. 
Katz wrote: > Rob Freeman wrote: > >>What we have at root are a number of utterances with a certain amount >>of variation between them. You need that variation to carry a signal, >>as you say. But the number of lexemes you allocate will depend on >>where you slice that variation. Which slice of variation you allocate >>to lexemes, which to phonemes etc. To an extent it will be arbitrary. >>The distinction between a phoneme and a lexeme is not so clear in, for >>instance, tone languages. > > Why do you think the distinction between a phoneme and a lexeme is not so > clear in tone languages? Isn't tone just one attribute out many that a > vowel can have? > > >>That said, if we decide the slice of variation we allocate to lexemes >>corresponds broadly to conventionalized meanings, it seems reasonable >>to me that there will be a fairly consistent number across cultures >>(perhaps tending a bit higher in highly conservative cultures.) You >>could certainly get by with only five. Computers use only two. But I >>doubt there will ever be a culture sufficiently innovative that it >>will want to think of new things to say quite that often! > > The fact that we can productively encode the information available in any > utterance of any language using a binary code as in a computer does not mean > that there are any human languages that actually employ a binary code of > contrasts. > > The fact that we favor the decimal system over binary in our numerical > calculations has something to do with the limitations of our working > memory. For the same reason, there are no languages with only two > phonemes, (much less just two morphemes or two lexemes or two clauses). > > Human language doesn't work that way in real time due to processing > constraints. > > Best, > > --Aya > From lists at chaoticlanguage.com Mon Jul 7 04:58:30 2008 From: lists at chaoticlanguage.com (Rob Freeman) Date: Mon, 7 Jul 2008 12:58:30 +0800 Subject: Rules vs. 
Lists In-Reply-To: <4870C2E2.20803@tninet.se> Message-ID: Maybe you are right Jouni. Maybe the distinction example-class does not matter. Perhaps a better way to make my point is to say we need to focus on ways of breaking things down, rather than ways of putting things together. We can take it to another level and accept that "utterances" too need to be thought of as classes if you like. Accept we need to segment them from some global speech act. Or for convenience we can assume a level of phonemic or lexical abstraction and think only about how rules can be abstracted from sequences of those. What is important for my point is that we think about ways combinations of elements can be abstracted from wholes at any level. I wasn't sure if what you wanted to do was define X phonemes and then argue there can be >> X rules governing their combinations. Trivially that is so. The problem is we can't find any actual set of rules which completely explain all the combinations of phonemes we get. I want to turn that around. The interesting question for me is given Y combinations of phonemes: abcd.., aabc.., acdb.... how many generalizations/rules can we find between those combinations? More than Y? -Rob On Sun, Jul 6, 2008 at 9:04 PM, Jouni Maho wrote: > Rob Freeman wrote: >> >> We have to be careful what we regard as "examples". As I >> said to Jouni phonemes should be thought of as classes not >> examples. Similarly "roots", "lexemes", "morphemes" etc. > > Well, you have to convince me why example-class is an important distinction > to make here. > > I'm sorry if I seem to be running off on a tangent, but I understood the > more-rules-less-examples thing as being about lists of items and the rules > that apply to them, but perhaps you're actually talking about something > else. > > Still, let me try to retract a bit, just to try to clarify to (for?) myself. 
> > When a language user extracts rules (generalisations) from a series of > utterances, that assumes that the rule-extractor has analysed the utterances > into an abstract list, so that each uttered "Hi!" is analysed as belonging > to a set. > > Each generalisation (phones to a phoneme, many uttered "Hi!" to one abtract > 'Hi!') is a rule, of course, but the abstract entities /a/ and "Hi!" > themselves become units of a list on which other rules can apply. Hence also > the rules themselves become members of lists. (Perhaps my earlier > hypothetical example was not 5 items plus 6 rules, but rather 11 items > including 6 rules.) > > Anyway, is "example" equal to the member of an abstract list ("Hi!" counts > as one) or each uttered word ("Hi!" counts as many)? As a language user I > make generalisations on various levels of abstraction. I can establish > lexemes and phonemes from utterances, but I can also generalise syntactic ad > morphological rules that apply to only certain classes of words or phonemes > (which requires that I have made the example>class analysis first). So, does > the distinction example-class really matter here? > > --- > jouni maho From Vyv.Evans at brighton.ac.uk Mon Jul 7 09:42:17 2008 From: Vyv.Evans at brighton.ac.uk (Vyvyan Evans) Date: Mon, 7 Jul 2008 10:42:17 +0100 Subject: 'Language & Cognition': New journal website now live Message-ID: Dear Colleagues. We are delighted to announce that the website for the new journal: 'Language & Cognition' is now live. Please check out the website for full details on the journal: www.languageandcognition.net/journal/ The journal is provided to all members of the UK-Cognitive Linguistics Association. Membership of the Association is free for 2009 and available at a 50% reduction for 2010. Membership application details will be available soon on the journal website. All are welcome to join the Association regardless of nationality or geographical location. 
The table of contents for 2009 and 2010 is detailed below: Volume 1 (2009) Issue 1 How infants build a semantic system. Kim Plunkett (University of Oxford) The cognitive poetics of literary resonance. Peter Stockwell (University of Nottingham) Action in cognition: The case of language. Lawrence J. Taylor and Rolf A. Zwaan (Erasmus University of Rotterdam) Prototype constructions in early language development. Paul Ibbotson (University of Manchester) and Michael Tomasello (MPI for Evolutionary Anthropology, Leipzig) The Enactment of Language: 20 Years of Interactions Between Linguistic and Motor Processes. Michael Spivey (University of California, Merced) and Sarah Anderson (Cornell University) Episodic affordances contribute to language comprehension. Arthur M. Glenberg (Arizona State University), Raymond Becker (Wilfrid Laurier University), Susann Klötzer, Lidia Kolanko, Silvana Müller (Dresden University of Technology), and Mike Rinck (Radboud University Nijmegen) Reviews: Daniel D. Hutto. 2008. Folk Psychological Narratives: The Sociocultural Basis of Understanding Reasons (MIT Press). Reviewed by Chris Sinha Aniruddh Patel. 2008. Music, Language, and the Brain (Oxford University Press). Reviewed by Daniel Casasanto Issue 2 Pronunciation reflects syntactic probabilities: Evidence from spontaneous speech. Harry Tily (Stanford University), Susanne Gahl (University of California, Berkeley), Inbal Arnon, Anubha Kothari, Neal Snider and Joan Bresnan (Stanford University) Causal agents in English, Korean and Chinese: The role of internal and external causation. Phillip Wolff, Ga-hyun Jeon, and Yu Li (Emory University) Ontology as correlations: How language and perception interact to create knowledge. Linda Smith (Indiana University) and Eliana Colunga (University of Colorado at Boulder) Toward a theory of word meaning. Gabriella Vigliocco, Lotte Meteyard and Mark Andrews (University College London) Spatial language in the brain.
Mikkel Wallentin (University of Aarhus) The neural basis of semantic memory: Insights from neuroimaging. Uta Noppeney (MPI for Biological Cybernetics, Tuebingen) Reviews: Ronald Langacker. 2008. Cognitive Grammar: A basic introduction. (Oxford University Press). Reviewed by Vyvyan Evans Giacomo Rizzolatti and Corrado Sinigaglia. Mirrors in the brain: How our minds share actions and emotions. 2008. (Oxford University Press). Reviewed by David Kemmerer. Volume 2 (2010) Issue 1 Adaptive cognition without massive modularity: The context-sensitivity of language use. Raymond W. Gibbs (University of California, Santa Cruz) and Guy Van Orden (University of Cincinnati) Spatial foundations of the conceptual system. Jean Mandler (University of California, San Diego and University College London) Metaphor: Old words, new concepts, imagined worlds. Robyn Carston (University College London) Language Development and Linguistic Relativity. John A. Lucy (University of Chicago) Construction Learning. Adele Goldberg (Princeton University) Space and Language: some neural considerations. Anjan Chatterjee (University of Pennsylvania) Issue 2 What can language tell us about psychotic thought? Gina Kuperberg (Tufts University) Abstract motion is no longer abstract. Teenie Matlock (University of California, Merced) When gesture does and doesn't promote learning. Susan Goldin-Meadow (University of Chicago) Discourse Space Theory. Paul Chilton (Lancaster University) Relational language supports relational cognition. Dedre Gentner (Northwestern University) Talking about quantities in space. Kenny Coventry (Northumbria University). Sincerely, Vyv Evans President, UK-CLA --------------------------------------------------- Vyv Evans Professor of Cognitive Linguistics www.vyvevans.net From amnfn at well.com Mon Jul 7 13:05:42 2008 From: amnfn at well.com (A. Katz) Date: Mon, 7 Jul 2008 06:05:42 -0700 Subject: Rules vs.
Lists In-Reply-To: <7616afbc0807062155h26a2aa66lb0317ae2698409e4@mail.gmail.com> Message-ID: Okay. Your point is that the linguistic pie can be sliced many, many different ways. I don't disagree, but I have another point that I have been trying to make: there is no difference between one method of slicing it or another, when we are studying how a language works. If it all adds up correctly, all the different ways are equivalent, and there's not any reason to prefer one method over another, unless we have adopted a particular constraint, such as economy of rules or mathematical elegance. Now, a particular speaker may adopt one way, and another speaker may adopt a second. A third speaker may adopt a third. There may be as many different ways of parsing a language as speakers, although that is doubtful and perfectly open to scientific investigation. It's okay to study the details of how speakers process language. It is also okay to find ways to describe language apart from speakers. What is not okay is to confuse what any given speaker does with how the language works. Best, --Aya On Mon, 7 Jul 2008, Rob Freeman wrote: > Aya, > > You seem to have taken too seriously my little joke that no society > would be sufficiently innovative to want only two conventionalized > forms of speech. I'm sure there are all kinds of cognitive restraints > which favor shorter sequences of more symbols. I always remember the > Japanese colleague who said despairingly of English "The letters are > easy, but there are just so many of them all together" :-) > > It would be a fun conversation to talk about what cognitive restraint > fixed our common arithmetic base at exactly the most common number of > fingers. Equally I would like to see how you allocate tone to a vowel > in Chinese without first knowing the word. But I fear that all such > argument about one systematization or another might take us away from > the point I am trying to make here. 
The point I want to focus on is > that, whatever your classification of elements, it may be possible to > find more rules over combinations than there are combinations in the > first place. > > -Rob > > On Mon, Jul 7, 2008 at 12:20 AM, A. Katz wrote: > > Rob Freeman wrote: > > > >>What we have at root are a number of utterances with a certain amount > >>of variation between them. You need that variation to carry a signal, > >>as you say. But the number of lexemes you allocate will depend on > >>where you slice that variation. Which slice of variation you allocate > >>to lexemes, which to phonemes etc. To an extent it will be arbitrary. > >>The distinction between a phoneme and a lexeme is not so clear in, for > >>instance, tone languages. > > > > Why do you think the distinction between a phoneme and a lexeme is not so > > clear in tone languages? Isn't tone just one attribute out many that a > > vowel can have? > > > > > >>That said, if we decide the slice of variation we allocate to lexemes > >>corresponds broadly to conventionalized meanings, it seems reasonable > >>to me that there will be a fairly consistent number across cultures > >>(perhaps tending a bit higher in highly conservative cultures.) You > >>could certainly get by with only five. Computers use only two. But I > >>doubt there will ever be a culture sufficiently innovative that it > >>will want to think of new things to say quite that often! > > > > The fact that we can productively encode the information available in any > > utterance of any language using a binary code as in a computer does not mean > > that there are any human languages that actually employ a binary code of > > contrasts. > > > > The fact that we favor the decimal system over binary in our numerical > > calculations has something to do with the limitations of our working > > memory. For the same reason, there are no languages with only two > > phonemes, (much less just two morphemes or two lexemes or two clauses). 
> > > > Human language doesn't work that way in real time due to processing > > constraints. > > > > Best, > > > > --Aya > > > > From amnfn at well.com Mon Jul 7 13:19:20 2008 From: amnfn at well.com (A. Katz) Date: Mon, 7 Jul 2008 06:19:20 -0700 Subject: Rules vs. Lists In-Reply-To: <7616afbc0807062155h26a2aa66lb0317ae2698409e4@mail.gmail.com> Message-ID: Rob Freeman wrote: >It would be a fun conversation to talk about what cognitive restraint >fixed our common arithmetic base at exactly the most common number of >fingers. It may be that we could do arithmetic with base nine or base eleven just as well, and we chose the exact number of our fingers to make up the decimal system. But the fact that we didn't choose base two isn't on account of the number of fingers we have. We have two hands, after all, and we could have used them to count in base 2. >Equally I would like to see how you allocate tone to a vowel >in Chinese without first knowing the word. But I fear that all such The fact that the tone of a word in Chinese is part of its lexicon entry does not in any way take away from the phonemic status of tone. You might as well say that you can't allocate consonants to the onset of a syllable in English without knowing which word it is. Of course, you can't. Monomorphemic words are made up of a list of phonemes. (Or, if you like, morphemes are made of phonemes.) The list is different for each monomorphemic word. That doesn't take away the phonemic status of the units in the list. Right? --Aya From JVanness at iie.org Mon Jul 7 17:40:11 2008 From: JVanness at iie.org (Vanness, Justin) Date: Mon, 7 Jul 2008 13:40:11 -0400 Subject: Fulbright Awards in TEFL 2009-10 Message-ID: Good news - the Fulbright Scholar Program is featuring Teaching English as a Foreign Language (TEFL) awards in nearly every world region for the 2009-10 competition that is currently underway. Consider a Fulbright grant for lecturing, researching, or both. 
In Latin America, grants are available in Panama, Venezuela, Mexico, Nicaragua, Guatemala, Honduras, and Chile. In Africa, there are grants in Cote d'Ivoire and Mauritius. And in Asia, there are grants in Indonesia, Kyrgyz Republic, Turkmenistan, Uzbekistan, Taiwan, and Mongolia. While each grant is different, a brief sampling of topics of interest includes methodology, communications techniques, textbook analysis, language learning software, and English for professional purposes. While language and teaching experience preferences vary, English is sufficient in most cases. Applications for 2009-2010 are due by August 1, 2008. US citizenship and a Ph.D. or its equivalent terminal degree are required. Apply online at: http://www.cies.org/us_scholars/us_awards/ Contact Joseph Graff (jgraff at cies.iie.org; 202.686.6239) or Carol Robles (crobles at cies.iie.org; 202.686.6238) regarding Latin American awards; Debra Egan (degan at cies.iie.org; 202.686.6230) regarding African awards; and Michael Zdanovich (mzdanovich at cies.iie.org; 202.686.7873) regarding Asian awards. From lists at chaoticlanguage.com Tue Jul 8 00:36:00 2008 From: lists at chaoticlanguage.com (Rob Freeman) Date: Tue, 8 Jul 2008 08:36:00 +0800 Subject: Rules vs. Lists In-Reply-To: Message-ID: My point is partially that "the linguistic pie can be sliced many ways". Thanks for acknowledging that. But there is more. Something happens when there can be more ways of slicing than there are things to slice. There is another aspect. The idea of more rules than examples seems surprising only at first, but it is important. The thing is, if there can be more rules than examples, you can never be done slicing. That's because for every list of "slices" you make there can be another, longer, list to be made. Each list of rules you make either constitutes, or produces, an even longer list, which implies a longer list etc. Such a system operates on itself to constantly produce complexity.
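[Editorial aside, not part of the original thread: Rob's counting claim can be given a toy demonstration. Suppose, purely as an illustrative assumption, that every unordered pair of stored examples licenses one analogical rule of the form "in this shared prefix/suffix context, these two middles alternate". Then N examples yield N(N-1)/2 candidate rules, which outgrows N as soon as N exceeds 3, and each newly stored example relates to all the previous ones at once, so the rule list keeps outrunning the example list. A sketch, reusing the five-item lexicon from Jouni's earlier thought experiment:]

```python
from itertools import combinations

def analogical_rules(examples):
    """Each unordered pair of stored forms licenses one proportional
    rule: in the shared prefix/suffix context, two middles alternate."""
    rules = set()
    for a, b in combinations(examples, 2):
        p = 0                          # length of the longest shared prefix
        while p < min(len(a), len(b)) and a[p] == b[p]:
            p += 1
        s = 0                          # longest shared suffix after that prefix
        while s < min(len(a), len(b)) - p and a[-1 - s] == b[-1 - s]:
            s += 1
        middles = frozenset({a[p:len(a) - s], b[p:len(b) - s]})
        rules.add((a[:p], middles, a[len(a) - s:]))
    return rules

lexicon = ["berama", "bilama", "butaba", "metama", "tilaba"]
print(len(lexicon), "examples,", len(analogical_rules(lexicon)), "rules")
# 5 examples, 10 rules; storing a sixth form would add 5 more
```

Because each rule here reconstructs exactly the pair that licensed it, the rule count is always the full C(N,2), so storing an extra example buys more than one extra generalization, which is the point at issue.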
It's just a quirk of the system, but very nice, because it predicts change, drift, etc, and also gives us considerable scope for complexity, "new ideas", even "free will" if you like. (The system is less specified than one which can be abstracted with a smaller number of rules, it is unstable, even random at one level, liable to go off at tangents and develop in completely different ways, produce different languages etc.) So it is not quite that "there is no difference between one method of slicing it or another." Because no set of slices is complete. Each set, list, of "slices" always implies another, larger, set. More than one larger set actually, should we choose to look for them. I think this is right. It seems to be the case. Worth investigating, anyway. Whether there is no end to the list of grammars which linguists can derive may be questioned by some, but it seems sure there is no end to things that can be said. I'm not sure if the premise of a set without end is open to scientific investigation. It is hard to falsify. Fortunately it is easy to go to the other end of the problem and explore whether you can find more and more rules from a given set of examples. It seems quite a small thing, that there might be more rules than examples, but it has consequences that imply a qualitative difference in how language works which go beyond what any given speaker does. I think we should look at the possibility carefully. -Rob P.S. We can look at the significance of tone for the analyticity of phonemes if you like. It may be relevant to the idea of more rules than examples. But I would like to hear other people's opinions first. In particular I'd like to hear how this relates in standard theory to that other problem of phonemes being modified in context (voicing assimilation in Russian obstruent clusters was the classic example of this, wasn't it?). On Mon, Jul 7, 2008 at 9:05 PM, A. Katz wrote: > Okay. Your point is that the linguistic pie can be sliced many, many > different ways.
I don't disagree, but I have another point that I have > been trying to make: there is no difference between one method of > slicing it or another, when we are studying how a language works. If it > all adds up correctly, all the different ways are equivalent, and there's > not any reason to prefer one method over another, unless we have adopted a > particular constraint, such as economy of rules or mathematical elegance. > > > Now, a particular speaker may adopt one way, and another speaker may adopt > a second. A third speaker may adopt a third. There may be as many > different ways of parsing a language as speakers, although that is > doubtful and perfectly open to scientific investigation. > > It's okay to study the details of how speakers process language. It is > also okay to find ways to describe language apart from speakers. What is > not okay is to confuse what any given speaker does with how the language > works. > > > Best, > > --Aya From amnfn at well.com Tue Jul 8 15:35:04 2008 From: amnfn at well.com (A. Katz) Date: Tue, 8 Jul 2008 08:35:04 -0700 Subject: Rules vs. Lists In-Reply-To: <7616afbc0807071736t2f9d8d18vd9752ea9f7671b94@mail.gmail.com> Message-ID: Okay. So your other point is that the grammar of any language at any given point is somewhat indeterminate, because it just misses resolving itself one way or the other. That's true, of course. Sapir made that point when he spoke of linguistic drift. What I find more interesting, (while acknowledging your point), is that languages don't just drift. They cycle. They keep coming up with the same ways of resolving the indeterminacy, after having seemingly gone in a different direction for a while. The reason I find that interesting is because all the while language appears to be evolving, it's really staying the same more or less. Show me one primitive language! There is none to be found. We have people with primitive material culture in isolated pockets of the world, but NO primitive languages. 
Best, --Aya On Tue, 8 Jul 2008, Rob Freeman wrote: > My point is partially that "the linguistic pie can be sliced many > ways". Thanks for acknowledging that. > > But there is more. Something happens when there can be more ways of > slicing than there are things to slice. > > There is another aspect. The idea of more rules than examples seems > only surprising at first, but it is important. The thing is, if there > can be more rules than examples, you can be never done slicing. That's > because for every list of "slices" you make there can be another, > longer, list to be made. Each list of rules you make either > constitutes, or produces, an even longer list, which implies a longer > list etc. > > Such a system operates on itself to constantly produce complexity. > > It's just a quirk of the system, but very nice, because it predicts > change, drift, etc, and also gives us considerable scope for > complexity, "new ideas", even "free will" if you like. (The system is > less specified than one which can be abstracted with a smaller number > of rules, it is unstable, even random at one level, liable to go off > at tangents and develop in completely different ways, produce > different languages etc.) > > So it is not quite that "there is no difference between one method of > slicing it or another." Because no set of slices is complete. Each > set, list, of "slices" always implies another, larger, set. More than > one larger set actually, should we choose to look for them. > > I think this is right. It seems to be the case. Worth investigating, anyway. > > Whether there is no end to the list of grammars which linguists can > derive may be questioned by some, but it seems sure there is no end to > things that can be said. I'm not sure if the premise of a set without > end is open to scientific investigation. It is hard to falsify. 
> Fortunately it is easy to go to the other end of the problem and > explore whether you can find more and more rules from a given set of > examples. > > It seems quite a small thing, that there might be rules than examples, > but it has consequences that imply a qualitative difference in how > language works which go beyond what any given speaker does. > > I think we should look at the possibility carefully. > > -Rob > > P.S. We can look at the significance of tone for the analycity of > phonemes if you like. It may be relevant to the idea of more rules > than examples. But I would like to hear other people's opinions first. > In particular I'd like to hear how this relates in standard theory to > that other problem of phonemes being modified in context, voicing > assimilation in Russian obstruent clusters was it, the classic example > of this? > > On Mon, Jul 7, 2008 at 9:05 PM, A. Katz wrote: > > Okay. Your point is that the linguistic pie can be sliced many, many > > different ways. I don't disagree, but I have another point that I have > > been trying to make: there is no difference between one method of > > slicing it or another, when we are studying how a language works. If it > > all adds up correctly, all the different ways are equivalent, and there's > > not any reason to prefer one method over another, unless we have adopted a > > particular constraint, such as economy of rules or mathematical elegance. > > > > > > Now, a particular speaker may adopt one way, and another speaker may adopt > > a second. A third speaker may adopt a third. There may be as many > > different ways of parsing a language as speakers, although that is > > doubtful and perfectly open to scientific investigation. > > > > It's okay to study the details of how speakers process language. It is > > also okay to find ways to describe language apart from speakers. What is > > not okay is to confuse what any given speaker does with how the language > > works. 
> > > > > > Best, > > > > --Aya > > From lists at chaoticlanguage.com Wed Jul 9 07:49:34 2008 From: lists at chaoticlanguage.com (Rob Freeman) Date: Wed, 9 Jul 2008 15:49:34 +0800 Subject: Rules vs. Lists In-Reply-To: Message-ID: You keep wanting to change the subject, Aya. It is good that you accept "the grammar of any language at any given point is somewhat indeterminate." I wonder how many people would agree. But note, I'm not only suggesting the existence of indeterminacy in human language, I'm suggesting a model to explain it. This is something which has historically fallen between complete explanation in terms of rules and lists/usage. In fact nearly everything about language has fallen between complete description in terms of rules and lists/usage. I suggest this is because we have not considered the possibility of a list which implies more rules than it has elements. If you want to address that hypothesis, or its consequences, I would welcome your feedback. It seems you want to talk about cognitive or social constants in language. We can start another thread to talk about cognitive or social constants in human language if you like. Though really, I think many people have done quite a thorough job on that aspect of language already. Perhaps you have a new perspective. By all means start a new thread and present it. -Rob On Tue, Jul 8, 2008 at 11:35 PM, A. Katz wrote: > Okay. So your other point is that the grammar of any language at any given > point is somewhat indeterminate, because it just misses resolving itself > one way or the other. That's true, of course. Sapir made that point when > he spoke of linguistic drift. > > What I find more interesting, (while acknowledging your point), is that > languages don't just drift. They cycle. They keep coming up with the same > ways of resolving the indeterminacy, after having seemingly gone in a > different direction for a while. 
> > The reason I find that interesting is because all the while language > appears to be evolving, it's really staying the same more or less. > > Show me one primitive language! There is none to be found. We have people > with primitive material culture in isolated pockets of the world, but NO > primitive languages. > > Best, > > --Aya From amnfn at well.com Wed Jul 9 14:02:38 2008 From: amnfn at well.com (A. Katz) Date: Wed, 9 Jul 2008 07:02:38 -0700 Subject: Rules vs. Lists In-Reply-To: <7616afbc0807090049t6a00ce9dp3fd15e64bf2c476a@mail.gmail.com> Message-ID: Okay, Rob. So you would like to stick with your topic. Do you have a formalism to deal with the more-rules-than-examples scenario? How do we count the examples and the rules? What are the more specific implications to any particular language? Have you already (or are you in the process of) applying this outlook to a single natural language in order to harvest all the examples and all the rules? If you have written any papers on this topic, would you care to share them with us? I am currently in the process of writing a book entitled CYCLES IN LANGUAGE. The topic is language change/evolution, and the main observation is that as much as language changes, it stays remarkably the same. In some of the beginning chapters, my co-author June Sun and I discuss different formalisms for accounting for grammar, and we specifically discuss the concept of functional equivalence. We would be happy to include your outlook on more-rules-than-examples, if there are papers to cite. Best, --Aya On Wed, 9 Jul 2008, Rob Freeman wrote: > You keep wanting to change the subject, Aya. > > It is good that you accept "the grammar of any language at any given > point is somewhat indeterminate." > > I wonder how many people would agree. > > But note, I'm not only suggesting the existence of indeterminacy in > human language, I'm suggesting a model to explain it. 
This is > something which has historically fallen between complete explanation > in terms of rules and lists/usage. In fact nearly everything about > language has fallen between complete description in terms of rules and > lists/usage. I suggest this is because we have not considered the > possibility of a list which implies more rules than it has elements. > > If you want to address that hypothesis, or its consequences, I would > welcome your feedback. > > It seems you want to talk about cognitive or social constants in language. > > We can start another thread to talk about cognitive or social > constants in human language if you like. Though really, I think many > people have done quite a thorough job on that aspect of language > already. Perhaps you have a new perspective. By all means start a new > thread and present it. > > -Rob > > On Tue, Jul 8, 2008 at 11:35 PM, A. Katz wrote: > > Okay. So your other point is that the grammar of any language at any given > > point is somewhat indeterminate, because it just misses resolving itself > > one way or the other. That's true, of course. Sapir made that point when > > he spoke of linguistic drift. > > > > What I find more interesting, (while acknowledging your point), is that > > languages don't just drift. They cycle. They keep coming up with the same > > ways of resolving the indeterminacy, after having seemingly gone in a > > different direction for a while. > > > > The reason I find that interesting is because all the while language > > appears to be evolving, it's really staying the same more or less. > > > > Show me one primitive language! There is none to be found. We have people > > with primitive material culture in isolated pockets of the world, but NO > > primitive languages. > > > > Best, > > > > --Aya > > From lists at chaoticlanguage.com Thu Jul 10 02:34:38 2008 From: lists at chaoticlanguage.com (Rob Freeman) Date: Thu, 10 Jul 2008 10:34:38 +0800 Subject: Rules vs. 
Lists In-Reply-To: Message-ID: Aya, Thanks for asking. The basic complexity ideas need not be limited to any formalism, but I have a formalism. It is closest conceptually to grammatical induction by distributional analysis. The main difference is that while classical distributional analysis seeks to abstract classes to fit an entire corpus, I only attempt to fit one sentence at a time. It turns out the different orders of associating words to fit a new sentence give very different results. A parse structure falls naturally out of the process of selecting the best order. I used this principle to implement a kind of parser. There is a Web-based demo. If you have server space I could set it up for you. Failing that you can see some examples of the kind of output you get at http://www.chaoticlanguage.com/flat_site/index.html. Currently it has only been implemented for English, Chinese, and Danish. Because I think this power to combine in different ways only becomes crucial above the "word" level (defining that level by contrast), I generally "list" only associations of words. Though I have done some experiments for Chinese on recording associations at the character level (at which point the "parser" becomes a word segmentation algorithm.) So generally "examples" in my implementation are words and lists of their associations. It is impossible to count the number of "rules" or different orderings you might project out. In theory the number is very high, as David Tuggy noted. The basic insights are quite general to any language, though for morphologically rich languages an implementation based on traditional word boundaries would become less useful. There is no reason why you could not search for structure in terms of groups of letters, but the advantage of searching for patterns anew each time would decrease as the morphology/phonotactics became less productive. 
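[Editorial note: Freeman says the number of "different orderings you might project out" is impossible to count in practice. For the special case of strictly pairwise (binary) association of words in a fixed-order sentence, the count of possible association orders is given by the Catalan numbers, a standard combinatorial fact; the sketch below illustrates how fast this grows, and is not a description of Freeman's actual parser:]

```python
from math import comb

def catalan(k: int) -> int:
    """k-th Catalan number: number of binary bracketings of a (k+1)-word sequence."""
    return comb(2 * k, k) // (k + 1)

# Distinct orders of pairwise association for sentences of 2 to 7 words
for words in range(2, 8):
    print(words, catalan(words - 1))  # 2->1, 3->2, 4->5, 5->14, 6->42, 7->132
```

[Already at five words there are more candidate association orders than words, which is one way of seeing why "selecting the best order" can yield a non-trivial parse structure.]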
Chinese is a particularly interesting case to study because you can go beneath "word" boundaries and find productive morphological structure while still dealing with a relatively small number of "letters". Listing "all the examples", at any time, in my implementation corresponds to listing a corpus. I would never attempt to "harvest ... all the rules". It would correspond broadly in this model to listing all the sentences you could possibly say in a language. I don't work in academia so there has been little incentive to publish, but I did present a paper at a North American ACL some years ago: Freeman R. J., Example-based Complexity--Syntax and Semantics as the Production of Ad-hoc Arrangements of Examples, Proceedings of the ANLP/NAACL 2000 Workshop on Syntactic and Semantic Complexity in Natural Language Processing Systems, pp. 47-50. (http://acl.ldc.upenn.edu/W/W00/W00-0108.pdf) This paper was deliberately vague on the details of the technical implementation, but it presented the core complexity ideas. Your book on "Cycles in Language" sounds interesting. How many formalisms have you counted? -Rob On Wed, Jul 9, 2008 at 10:02 PM, A. Katz wrote: > Okay, Rob. So you would like to stick with your topic. > > Do you have a formalism to deal with the more-rules-than-examples > scenario? > > How do we count the examples and the rules? What are the more specific > implications to any particular language? Have you already (or are you in > the process of) applying this outlook to a single natural language in > order to harvest all the examples and all the rules? > > If you have written any papers on this topic, would you care to share them > with us? > > I am currently in the process of writing a book entitled CYCLES IN > LANGUAGE. The topic is language change/evolution, and the main observation > is that as much as language changes, it stays remarkably the same. 
> > In some of the beginning chapters, my co-author June Sun and I > discuss different formalisms for accounting for grammar, and we > specifically discuss the concept of functional equivalence. We would be > happy to include your outlook on more-rules-than-examples, if there are > papers to cite. > > > Best, > > --Aya From paul at benjamins.com Thu Jul 10 17:11:47 2008 From: paul at benjamins.com (Paul Peranteau) Date: Thu, 10 Jul 2008 13:11:47 -0400 Subject: New Benjamins book - Adolphs: Corpus and Context Message-ID: Corpus and Context Investigating pragmatic functions in spoken discourse Svenja Adolphs University of Nottingham Studies in Corpus Linguistics 30 2008. xi, 151 pp. Hardbound 978 90 272 2304 3 / EUR 99.00 / USD 149.00 Corpus and Context explores the relationship between corpus linguistics and pragmatics by discussing possible frameworks for analysing utterance function on the basis of spoken corpora. The book articulates the challenges and opportunities associated with a change of focus in corpus research, from lexical to functional units, from concordance lines to extended stretches of discourse, and from the purely textual to multi-modal analysis of spoken corpus data. Drawing on a number of spoken corpora including the five million word Cambridge and Nottingham Corpus of Discourse in English (CANCODE, funded by CUP (c)), a specific speech act function is being explored using different approaches and different levels of analysis. This involves a close analysis of contextual variables in relation to lexico-grammatical and discoursal patterns that emerge from the corpus data, as well as a wider discussion of the role of context in spoken corpus research. -------------------------------------------------------------------------------- Table of contents Acknowledgements ix–x Tables and figures xi Chapter 1. Introduction 1–17 Chapter 2. Spoken discourse and corpus analysis 19–42 Chapter 3. 
Pragmatic functions, conventionalised speech acts expressions and corpus evidence 43–72 Chapter 4. Pragmatic functions in context 73–88 Chapter 5. Exploring pragmatic functions in discourse: The speech act episode 89–116 Chapter 6. Pragmatic functions beyond the text 117–130 Chapter 7. Concluding remarks 131–136 Appendix: Transcription conventions for the CANCODE data used in this book 137–138 References 139–148 Index 149–151 Paul Peranteau (paul at benjamins.com) General Manager John Benjamins Publishing Company 763 N. 24th St. Philadelphia PA 19130 Phone: 215 769-3444 Fax: 215 769-3446 John Benjamins Publishing Co. website: http://www.benjamins.com From paul at benjamins.com Thu Jul 10 17:13:49 2008 From: paul at benjamins.com (Paul Peranteau) Date: Thu, 10 Jul 2008 13:13:49 -0400 Subject: New Benjamins book - Kurzon & Adler: Adpositions Message-ID: Adpositions Pragmatic, semantic and syntactic perspectives Edited by Dennis Kurzon and Silvia Adler University of Haifa Typological Studies in Language 74 2008. viii, 307 pp. Hardbound 978 90 272 2986 1 / EUR 110.00 / USD 165.00 This book is a collection of articles which deal with adpositions in a variety of languages and from a number of perspectives. Not only does the book cover what is traditionally treated in studies from a European and Semitic orientation – prepositions, but it presents studies on postpositions, too. The main languages dealt with in the collection are English, French and Hebrew, but there are articles devoted to other languages including Korean, Turkic languages, Armenian, Russian and Ukrainian. Adpositions are treated by some authors from a semantic perspective, by others as syntactic units, and a third group of authors distinguishes adpositions from the point of view of their pragmatic function. This work is of interest to students and researchers in theoretical and applied linguistics, as well as to those who have a special interest in any of the languages treated. 
-------------------------------------------------------------------------------- Table of contents Introduction Dennis Kurzon and Silvia Adler List of contributors French compound prepositions, prepositional locutions and prepositional phrases in the scope of the absolute use Silvia Adler "Over the hills and far away" or "far away over the hills": English place adverb phrases and place prepositional phrases in tandem? David J. Allerton Structures with omitted prepositions: Semantic and pragmatic motivations Esther Borochovsky Bar-Aba A closer look at the Hebrew Construct and free locative PPs: The analysis of mi-locatives Irena Botwinik-Rotem Pragmatics of prepositions: A study of the French connectives pour le coup and du coup Pierre Cadiot and Franck Lebas Particles and postpositions in Korean Injoo Choi-Jonin French prepositions à and de in infinitival complements: A pragma-semantic analysis Lidia Fraczak Prepositional wars: When ideology defines preposition Julia G. Krivoruchko "Ago" and its grammatical status in English and in other languages Dennis Kurzon Case marking of Turkic adpositional objects Alan Libert The logic of addition: Changes in the meaning of the Hebrew preposition im ("with"). Tamar Sovran A monosemic view of polysemic prepositions Yishai Tobin The development of Classical Armenian prepositions and its implications for universals of language change Christopher Wilhelm Paul Peranteau (paul at benjamins.com) General Manager John Benjamins Publishing Company 763 N. 24th St. Philadelphia PA 19130 Phone: 215 769-3444 Fax: 215 769-3446 John Benjamins Publishing Co. 
website: http://www.benjamins.com From paul at benjamins.com Thu Jul 10 17:16:16 2008 From: paul at benjamins.com (Paul Peranteau) Date: Thu, 10 Jul 2008 13:16:16 -0400 Subject: New Benjamins book - Stolz et al.: Split Possession Message-ID: Split Possession An areal-linguistic study of the alienability correlation and related phenomena in the languages of Europe Thomas Stolz, Sonja Kettler, Cornelia Stroh and Aina Urdze University of Bremen Studies in Language Companion Series 101 2008. x, 546 pp. Hardbound 978 90 272 0568 1 / EUR 130.00 / USD 195.00 This book is a functional-typological study of possession splits in European languages. It shows that genetically and structurally diverse languages such as Icelandic, Welsh, and Maltese display possessive systems which are sensitive to semantically based distinctions reminiscent of the alienability correlation. These distinctions are grammatically relevant in many European languages because they require dedicated constructions. What makes these split possessive systems interesting for the linguist is the interaction of semantic criteria with pragmatics and syntax. Neutralisation of distinctions occurs under focus. The same happens if one of the constituents of a possessive construction is syntactically heavy. These effects can be observed in the majority of the 50 sample languages. Possessive splits are strong in those languages which are outside the Standard Average European group. The bulk of the European languages do not behave much differently from those non-European languages for which possession splits are reported. The book reveals interesting new facts about European languages and possession to typologists, universals researchers, and areal linguists. -------------------------------------------------------------------------------- Table of contents Preface vii–viii List of abbreviations ix–x Part A: What needs to be known beforehand Chapter 1. Introduction 3–9 Chapter 2. Prerequisites 11–28 Chapter 3. 
Split possession 29–40 Part B: Tour d'Europe Chapter 4. Grammatical possession splits 43–315 Chapter 5. Further evidence of possession splits in Europe 317–465 Part C: On European misfits and their commonalities Chapter 6. Results 469–516 Notes 517–519 Sources 521–524 References 525–533 Additional background literature 535–538 Index of languages 539–540 Index of authors 541–544 Index of subjects 545–546 Paul Peranteau (paul at benjamins.com) General Manager John Benjamins Publishing Company 763 N. 24th St. Philadelphia PA 19130 Phone: 215 769-3444 Fax: 215 769-3446 John Benjamins Publishing Co. website: http://www.benjamins.com From egclaw at inet.polyu.edu.hk Fri Jul 11 06:42:41 2008 From: egclaw at inet.polyu.edu.hk (Catherine C Law [ENGL]) Date: Fri, 11 Jul 2008 14:42:41 +0800 Subject: Hong Kong short course on intonation in English (1 - 4 Sep 2008) Message-ID: ======= Workshop ======= The goal of this short course is to introduce you to two computer programs, PRAAT and CORPUS TOOL, and to Intonation in the Grammar of English (which constitutes the center of gravity for the course) by Michael Halliday and William Greaves. Details of the new book Intonation in the Grammar of English can be found under: http://www.equinoxpub.com/books/showbook.asp?bkid=7 PRAAT is an excellent tool which can be used on virtually any computer for very sophisticated phonetic analysis. The instructor will make extensive use of PRAAT to introduce the patterns of English intonation. CORPUS TOOL is a new and more powerful successor to Mick O’Donnell’s SYSTEMIC CODER. The instructor will demonstrate and use only one of the many features in the program: the creation and editing of system networks. It is far easier to do this with CORPUS TOOL than with editing programs such as Word or graphics programs such as Paint. 
======= Instructor ======= Bill Greaves is a Senior Scholar at York University in Toronto, where he is a member of the Glendon College English Department and the Graduate Programme in English. In collaboration with Michael Halliday he has been working on Intonation in English for about a decade. During that time he has taught courses in intonation ranging from a few days to six months in a number of countries: Argentina, Australia, Canada, China, Egypt, England, Finland, India, Israel, and Japan. ======== Programme ======== The course includes four lecture hours per day, plus "hands on" work each day in a language laboratory. Participants in the lecture will be encouraged to form "pods" around those with laptops (this has proved to be very effective). Date: 1 – 4 September 2008 (Mon - Thu) Venue: The Hong Kong Polytechnic University, Hunghom, Kowloon, Hong Kong ======== Registration ======== Participants can register online at the website of the workshop. The maximum number of participants is 60 and places are allocated on a first-come, first-served basis. =========== Registration Fee =========== HK$400 (includes a copy of Intonation in the Grammar of English). Payment can be made online at the website of the workshop. ============= Workshop Website ============= http://www.engl.polyu.edu.hk/events/intonworkshop2008 From phonosemantics at earthlink.net Sat Jul 26 15:31:57 2008 From: phonosemantics at earthlink.net (jess tauber) Date: Sat, 26 Jul 2008 10:31:57 -0500 Subject: social obligation in 'definiteness' in differential case marking systems? Message-ID: I'm hoping folks on the list can give me tips on languages they know of which mark social obligation as part of the semantics of definiteness in case marking. I've found such a system hidden within Yahgan. When a particular suffix -nchi is added to a nominal there is a very strong implication of such connections. 
For instance in -nchikaia, where this suffix is followed by the dative form, other participants, and the action of the verb, conspire to create a benefactive, substitutive sense, as a recognized, trusted agent other who 'owes' such action to the marked entities, acting on their behalf, be they other family members, deities, religious sages, kings etc. The agent gets his/her power from the marked entity, perhaps even the particular marching orders. Forms with dative but without the -nchi- ambivalently imply positive, neutral, or negative outcome for the marked NP, and no such relationship. So, does anyone know of other languages that have something similar? Thanks. Jess Tauber phonosemantics at earthlink.net From comrie at eva.mpg.de Tue Jul 29 08:03:46 2008 From: comrie at eva.mpg.de (Bernard Comrie) Date: Tue, 29 Jul 2008 10:03:46 +0200 Subject: Max Planck Institute for Evolutionary Anthropology: Announcement of Vacancy Message-ID: [From Bernard Comrie ] Max Planck Institute for Evolutionary Anthropology, Leipzig, Germany Announcement of Vacancy The Department of Linguistics at the Max Planck Institute in Leipzig has a vacancy for a Senior Researcher in the area of phonetics. The successful candidate will be expected to develop a research program in phonetics in relation to the department's core areas of language history and prehistory, linguistic typology, and description of little studied and endangered languages, and will also be responsible for the scientific direction of the department's phonetics laboratory. The five-year non-renewable position is available from 01 November 2008; a later starting date may be negotiable. Prerequisites for an application are a PhD and publications in phonetics. The salary is according to the German public service pay scale (TVöD). The Max Planck Society is concerned to employ more disabled people; applications from disabled people are explicitly sought. 
The Max Planck Society wishes to increase the proportion of women in areas in which they are underrepresented; women are therefore explicitly encouraged to apply. Applicants are requested to send their complete dossier (including curriculum vitae, description of research interests, names and contact details of two referees, and a piece of written work on one of the relevant topics) no later than 30 September 2008 to: Max Planck Institute for Evolutionary Anthropology Personnel Department Prof. Dr. Bernard Comrie Code word: Scientist Dept Linguistics Deutscher Platz 6 D-04103 Leipzig, Germany Please address questions to Bernard Comrie . Information on the institute is available at http://www.eva.mpg.de/. From lists at chaoticlanguage.com Tue Jul 1 08:41:28 2008 From: lists at chaoticlanguage.com (Rob Freeman) Date: Tue, 1 Jul 2008 16:41:28 +0800 Subject: Rules vs. lists In-Reply-To: <48699FD9.2050304@sil.org> Message-ID: David, Nice papers. Thanks! When you said "any number" of rules I thought you were going to argue abstracted rules could be completely arbitrary. Instead I think you are just saying the number can be very, very large. And that I agree with. Most importantly, I think the number of rules which can be abstracted can be larger than the number of examples they are abstracted from. If we think about this I believe it is an important result. The storage of each extra example need not be seen as a cost. For each new example we store, we can get an even greater number of rules/generalizations for the system. (Each new example can be related to multiple others, to get multiple new "rules".) For N examples, we can have >> N rules. I'm not sure if your paper is arguing for this. In the light of it lists seem a very powerful way to specify grammar to me. Not to mention explaining certain idiosyncratic and inconsistent aspects of grammar. In practice we have not used lists in this way. Any idea why not? Beyond that I don't entirely understand the importance of Cognitive Grammar for your analysis. Why is it necessary for generalizations to become entrenched before they can be thought of as being "part of the grammar"? Couldn't any meaningful generalization which might be abstracted from a set of examples already be considered to be "part of the grammar", independently of whether it later becomes entrenched? -Rob On Tue, Jul 1, 2008 at 11:09 AM, David Tuggy wrote: > If language users list/learn a generalization, why should I deny it them? > > As to how I personally do it: > http://www.sil.org/~tuggyd/Scarecrow/SCARECRO.htm is over twenty years old, > but I haven't seen any reason to shift from the basic position it sets > forth. 
See especially Fig 3, and Fig 5; note in Fig 5 how the V+O=S rule > (which Fig 3 shows to be itself a generalization over generalizations) is a > subcase of at least 6 higher-level generalizations. The same sort of thing > works with syntax or phonology as well. > > (A more recent version, contrasting the English set of rules/generalizations > with comparable Spanish ones, is available at > http://www.um.es/ijes/vol3n2/03-Tuggy.pdf ; there's a Spanish version and > powerpoint available too at www.sil.org/~tuggyd.) > > Do I know that all English speakers have abstracted all these rules? I > don't, in any absolute sense. But if they (any/most/a fortiori all of them) > have, I want them in my grammar. > > Note that there is a globally-general rule: Optionally add something to > something else to make a word. But the interesting stuff is the not-totally > general rules (like X+Y=Y "structure with the rightmost element as head"), > and the specific learned forms that prompt them. > > --David T From iadimly at usc.es Tue Jul 1 20:09:22 2008 From: iadimly at usc.es (María Ángeles Gómez) Date: Tue, 1 Jul 2008 22:09:22 +0200 Subject: NEW BOOK: LANGUAGES AND CULTURES IN CONTRAST AND COMPARISON, Pragmatics & Beyond New Series 175 Message-ID: New book: Title: Languages and Cultures in Contrast and Comparison Publication Year: 2008 Publisher: John Benjamins, Pragmatics & Beyond New Series, 175 Book URL: http://www.benjamins.com/cgi-bin/t_bookview.cgi?bookid=P%26bns%20175 Editor: María de los Ángeles Gómez González, J. Lachlan Mackenzie & Elsa M. González Álvarez Hardbound: ISBN: 978 90 272 5419 1 Pages: xxii, 364 pp. Price: EUR 105.00 / USD 158.00 Abstract: This volume explores various hitherto under-researched relationships between languages and their discourse-cultural settings. The first two sections analyze the complex interplay between lexico-grammatical organization and communicative contexts. 
Part I focuses on structural options in syntax, deepening the analysis of information-packaging strategies. Part II turns to lexical studies, covering such matters as human perception and emotion, the psychological understanding of 'home' and 'abroad', the development of children's emotional life and the relation between lexical choice and sexual orientation. The final chapters consider how new techniques of contrastive linguistics and pragmatics are contributing to the primary field of application for contrastive analysis, language teaching and learning. The book will be of special interest to scholars and students of linguistics, discourse analysis and cultural studies and to those entrusted with teaching European languages and cultures. The major languages covered are Akan, Dutch, English, Finnish, French, German, Italian, Norwegian, Spanish and Swedish. ******************************************* María de los Ángeles Gómez González Full Professor of English Language and Linguistics Academic Secretary of Department English Department University of Santiago de Compostela Avda. Castelao s/n E-15704 Santiago de Compostela. Spain Tel: +34 981 563100 Ext. 11856 Fax: +34 981 574646 research team website: http://www.usc.es/scimitar/inicio.html. At present we are working on the update of this web page. From hdls at unm.edu Tue Jul 1 22:33:41 2008 From: hdls at unm.edu (High Desert Linguistics Society) Date: Tue, 1 Jul 2008 16:33:41 -0600 Subject: HDLS-8 2nd Call for Papers Message-ID: Hello everyone! Below you will find the second call for papers for the Eighth High Desert Linguistics Society Conference. I have also attached the call for papers in .doc format. Please pass this along to anyone who may be interested. Thank you! _____________________________________________________________________ The Eighth High Desert Linguistics Society Conference (HDLS-8) will be held at the University of New Mexico, Albuquerque, November 6-8, 2008. 
Keynote speakers Sherman Wilcox (University of New Mexico) Marianne Mithun (University of California, Santa Barbara) Gilles Fauconnier (University of California, San Diego) We invite you to submit proposals for 20-minute talks with 5-minute discussion sessions in any area of linguistics, especially those from a cognitive / functional linguistics perspective. This year we will include a poster session. Papers and posters in the following areas are particularly welcome: * Evolution of Language, Grammaticization, Metaphor and Metonymy, Typology, Discourse Analysis, Computational Linguistics, Language Change and Variation * Native American Languages, Spanish and Languages of the American Southwest, Language Revitalization and Maintenance * Sociolinguistics, Bilingualism, Signed Languages, First Language Acquisition, Second Language Acquisition, Sociocultural Theory The deadline for submitting abstracts is Friday, August 22nd, 2008. Abstracts should be sent via email, as an attachment, to hdls at unm.edu. Please include the title ''HDLS-8 abstract'' in the subject line. Include the title ''HDLS-8 Poster Session'' in the subject line for abstracts submitted for the poster session. MS-Word format is preferred; RTF and PDF formats are accepted. You may also send hard copies of abstracts (three copies) to the HDLS address listed at the bottom of the page. The e-mail and attached abstract must include the following information: 1. Author's name(s) 2. Author's affiliation(s) 3. Title of the paper or poster 4. E-mail address of the primary author 5. A list of the equipment you will need 6. Whether you will require an official letter of acceptance The abstract should be no more than one page in no smaller than 11-point font. A second page is permitted for references and data. Only two submissions (for presentations) per author will be accepted and we will only consider submissions that conform to the above guidelines. 
If your abstract has special fonts or characters, please send your abstract as a PDF. Please be advised that shortly after the conference a call for proceedings will be announced. Poster Session - Participants will be given a space approximately 6' by 4' to display their work. Notification of acceptance will be sent out by September 2nd, 2008. If you have any questions or need further information, please contact us at hdls at unm.edu with ''HDLS-8 Conference'' in the subject line. You may also call Grandon Goertz, 505-277-6764 or Evan Ashworth, 505-228-4751. The HDLS mailing address is: HDLS, Department of Linguistics, MSC03 2130, 1 University of New Mexico, Albuquerque, NM 87131-0001 USA From lists at chaoticlanguage.com Wed Jul 2 02:23:03 2008 From: lists at chaoticlanguage.com (Rob Freeman) Date: Wed, 2 Jul 2008 10:23:03 +0800 Subject: Rules vs. Lists In-Reply-To: <486A440E.8060205@sil.org> Message-ID: Hi David, I agree you can see the now extensive "usage-based" literature as a historical use of "lists" to specify grammar. That is where this thread came from after all. But there is surely something missing. As you say, your own papers go back 20 or more years now. Why is there still argument? And the "rules" folks are right. A description of language which ignores generalizations is clearly incomplete. We don't say only what has been said before. And yet if you can abstract all the important information from lists of examples using rules, why keep the lists? So the argument goes round and round. What I think is missing is an understanding of the power lists can give you... to represent rules. That's why I asked how many rules you could abstract from a given list. While we believe abstracting rules will always be an economy, there will be people who argue lists are unnecessary. And quite rightly, if it were possible to abstract everything about a list of examples using a smaller set of rules, there would be no need to keep the examples. 
The point I'm trying to make here is that when placed in contrast with rules, what is important about lists might not be that you do not want to generalize them (as rules), but that a list may be a more compact representation for all the generalizations which can be made over it, than the generalizations (rules) are themselves. You express it perfectly: "...if you have examples A, B, C and D and extract a schema or rule capturing what is common to each pair, you have 6 potential rules, (AB, AC, AD, BC, BD, and CD), so sure, in theory you could have more rules than subcases." It is this power which I am wondering has never been used to specify grammar. You say you wouldn't "expect" to find this true in practice: "I wouldn't expect to find more rules than examples". But has anyone looked into it? It is possible in theory. Has anyone demonstrated it is not the case for real language data? Consider for a minute it might be true. What would that mean for the way we need to model language? I'll leave aside for a moment the other point about the importance of the concept of entrenchment from Cognitive Grammar. I think the raw point about the complexity, power, or number, of rules which can be generalized from a list of examples is the more important for now. I'd like to see arguments against this. -Rob On Tue, Jul 1, 2008 at 10:49 PM, David Tuggy wrote: > Thanks for the kind words, Rob. > > Right -- I am not saying the number is totally unconstrained by anything, > though I agree it will be very, very large. What constrains it is whether > people learn it or not. > > Counting these things is actually pretty iffy, because (1) it implies a > discreteness or clear distinction of one from another that doesn't match the > reality, (2) it depends on the level of delicacy with which one examines the > phenomena, and (3) these things undoubtedly vary from one user to another > and even for the same user from time to time. 
The convention of representing > the generalization in a separate box from the subcase(s) is misleading in > certain ways: any schema (generalization or rule) is immanent to all its > subcases (i.e. all its specifications are also specifications of the > subcases), so a specific case cannot be activated without activating all the > specifications of all the schemas above it. The relationship is as close to > an identity relationship as you can get without full identity. (It is a, if > not the, major meaning of the verb "is" in English: a dog *is* a mammal, > running *is* bipedal locomotion, etc.) > > Langacker (2007: 433) says that "counting the senses of a lexical item is > analogous to counting the peaks in a mountain range: how many there are > depends on how salient they have to be before we count them; they appear > discrete only if we ignore how they grade into one another at lower > altitudes. The uncertainty often experienced in determining which particular > sense an expression instantiates on a given occasion is thus to be expected. > ..." > > If you do a topographical map at altitude intervals of one inch you will have > an awful lot of peaks. Perhaps even more rules than the number of examples > they're abstracted from. But normally, no, I wouldn't expect to find more > rules than examples, rather, fewer. It generally takes at least two examples > to clue us (more importantly as users than as linguists, but in either case) > in to the need for a rule, and the supposition that our interlocutors will > realize the need for that rule as well, and establish it (entrench it) in > their minds. Of course, as you point out, if you have examples A, B, C and D > and extract a schema or rule capturing what is common to each pair, you have > 6 potential rules, (AB, AC, AD, BC, BD, and CD), so sure, in theory you > could have more rules than subcases. Add in levels of schemas (rules > capturing what's common to AB-CD, AB-AC, ...) and you can get plenty of > rules. 
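The counting argument quoted above is easy to verify mechanically. A minimal sketch in Python (illustrative only; it models each pairwise rule as an unordered pair of examples, and simplifies the "levels of schemas" idea by treating any subset of two or more examples as a potential schema):

```python
from itertools import combinations

def pairwise_schemas(examples):
    """One candidate rule per unordered pair of examples."""
    return list(combinations(examples, 2))

def all_schemas(examples):
    """Simplified 'levels of schemas': any subset of 2+ examples."""
    subsets = []
    for size in range(2, len(examples) + 1):
        subsets.extend(combinations(examples, size))
    return subsets

examples = ["A", "B", "C", "D"]

# 4 examples yield C(4,2) = 6 pairwise rules: already more rules than examples.
print(len(examples), len(pairwise_schemas(examples)))  # 4 6

# Allowing schemas over any group of 2+ examples, the count is 2**N - N - 1,
# which grows exponentially while the example list grows linearly.
print(len(all_schemas(examples)))  # 11
```

The pairwise count C(N,2) exceeds N for every N above 3, so "more rules than examples" is guaranteed in this toy model as soon as the list has four members.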
> > You wrote: In the light of [the possibility of more rules than > subcases] lists seem a very powerful way to specify grammar to me. Not to > mention explaining certain idiosyncratic and inconsistent aspects of > grammar. In practice we have not used lists in this way. Any idea why not? > > I'm not sure what you are saying here. If you're saying that listing > specific cases and ignoring/omitting rules is enough, I disagree. If you're > saying that trying to specify grammar while ignoring specific cases won't > work, I agree strongly. Listing specific cases is very important, as you > say, for explaining idiosyncratic and inconsistent aspects of the grammar > (as well as for other things, I would maintain.) I and many others have in > practice used lists in this way. (Read any of the Artes of Nahuatl or other > indigenous languages of Mexico from the XVI-XVII centuries: they have lots > of lists used in this way.) So I'm confused by what you're saying. > > The reason that generalizations must be entrenched is that (the grammar of) > a language consists of what has been entrenched in (learned by) the minds of > its users. If a linguist thinks of a rule, it has some place in his > cognition, but unless it corresponds to something in the minds of the > language's users, that is a relatively irrelevant fact. Cognitive Grammar > was important in that it affirmed this fact and in other ways provided a > framework in which the analysis was natural. > > --David Tuggy From amnfn at well.com Wed Jul 2 13:18:50 2008 From: amnfn at well.com (A. Katz) Date: Wed, 2 Jul 2008 06:18:50 -0700 Subject: Rules vs. Lists In-Reply-To: <7616afbc0807011923u7573b978lc51e66a4d0f4bc48@mail.gmail.com> Message-ID: Rob, Here is where the concept of "functional equivalence" is very helpful. If two ways of describing a phenomenon give the same results, then they are functionally equivalent. That means that in essence, they are the same -- at least as far as results of calculation are concerned. 
(Considerations of processing limitations might show that one works better for a given hardware configuration than another, but that is a somewhat different issue.) Rules and lists are functionally equivalent. Logically speaking, they are the same. When there are more rules than examples of their application, we call it a list-based system. When there are many more examples of the application of a rule than different rules, then we call it a rule-based system. That's just about different methods of arriving at the same result, and is strictly a processing issue. In terms of describing the language, rather than the speakers, however, there is no difference. It's all the same. In order to appreciate this, we have to be able to distinguish the structure of the language from the structure of the speaker. Best, --Aya On Wed, 2 Jul 2008, Rob Freeman wrote: > Hi David, > > I agree you can see the now extensive "usage-based" literature as a > historical use of "lists" to specify grammar. That is where this > thread came from after all. > > But there is surely something missing. As you say, your own papers go > back 20 or more years now. Why is there still argument? > > And the "rules" folks are right. A description of language which > ignores generalizations is clearly incomplete. We don't say only what > has been said before. > > And yet if you can abstract all the important information from lists > of examples using rules, why keep the lists? > > So the argument goes round and round. > > What I think is missing is an understanding of the power lists can > give you... to represent rules. That's why I asked how many rules you > could abstract from a given list. > > While we believe abstracting rules will always be an economy, there > will be people who argue lists are unnecessary. And quite rightly, if > it were possible to abstract everything about a list of examples using > a smaller set of rules, there would be no need to keep the examples. 
> > The point I'm trying to make here is that when placed in contrast with > rules, what is important about lists might not be that you do not want > to generalize them (as rules), but that a list may be a more compact > representation for all the generalizations which can be made over it, > than the generalizations (rules) are themselves. > > You express it perfectly: > > "...if you have examples A, B, C and D and extract a schema or rule > capturing what is common to each pair, you have 6 potential rules, > (AB, AC, AD, BC, BD, and CD), so sure, in theory you could have more > rules than subcases." > > It is this power which I am wondering has never been used to specify grammar. > > You say you wouldn't "expect" to find this true in practice: "I > wouldn't expect to find more rules than examples". But has anyone > looked into it? It is possible in theory. Has anyone demonstrated it > is not the case for real language data? > > Consider for a minute it might be true. What would that mean for the > way we need to model language? > > I'll leave aside for a moment the other point about the importance of > the concept of entrenchment from Cognitive Grammar. I think the raw > point about the complexity, power, or number, of rules which can be > generalized from a list of examples is the more important for now. > > I'd like to see arguments against this. > > -Rob > > On Tue, Jul 1, 2008 at 10:49 PM, David Tuggy wrote: > > Thanks for the kind words, Rob. > > > > Right -- I am not saying the number is totally unconstrained by anything, > > though I agree it will be very, very large. What constrains it is whether > > people learn it or not. 
> > > > Counting these things is actually pretty iffy, because (1) it implies a > > discreteness or clear distinction of one from another that doesn't match the > > reality, (2) it depends on the level of delicacy with which one examines the > > phenomena, and (3) these things undoubtedly vary from one user to another > > and even for the same user from time to time. The convention of representing > > the generalization in a separate box from the subcase(s) is misleading in > > certain ways: any schema (generalization or rule) is immanent to all its > > subcases (i.e. all its specifications are also specifications of the > > subcases), so a specific case cannot be activated without activating all the > > specifications of all the schemas above it. The relationship is as close to > > an identity relationship as you can get without full identity. (It is a, if > > not the, major meaning of the verb "is" in English: a dog *is* a mammal, > > running *is* bipedal locomotion, etc.) > > > > Langacker (2007: 433) says that "counting the senses of a lexical item is > > analogous to counting the peaks in a mountain range: how many there are > > depends on how salient they have to be before we count them; they appear > > discrete only if we ignore how they grade into one another at lower > > altitudes. The uncertainty often experienced in determining which particular > > sense an expression instantiates on a given occasion is thus to be expected. > > ..." > > > > If you do a topographical map at altitude intervals of one inch you will have > > an awful lot of peaks. Perhaps even more rules than the number of examples > > they're abstracted from. But normally, no, I wouldn't expect to find more > > rules than examples, rather, fewer. 
It generally takes at least two examples > > to clue us (more importantly as users than as linguists, but in either case) > > in to the need for a rule, and the supposition that our interlocutors will > > realize the need for that rule as well, and establish it (entrench it) in > > their minds. Of course, as you point out, if you have examples A, B, C and D > > and extract a schema or rule capturing what is common to each pair, you have > > 6 potential rules, (AB, AC, AD, BC, BD, and CD), so sure, in theory you > > could have more rules than subcases. Add in levels of schemas (rules > > capturing what's common to AB-CD, AB-AC, ...) and you can get plenty of > > rules. > > > > You wrote: In the light of [the possibility of more rules than > > subcases] lists seem a very powerful way to specify grammar to me. Not to > > mention explaining certain idiosyncratic and inconsistent aspects of > > grammar. In practice we have not used lists in this way. Any idea why not? > > > > I'm not sure what you are saying here. If you're saying that listing > > specific cases and ignoring/omitting rules is enough, I disagree. If you're > > saying that trying to specify grammar while ignoring specific cases won't > > work, I agree strongly. Listing specific cases is very important, as you > > say, for explaining idiosyncratic and inconsistent aspects of the grammar > > (as well as for other things, I would maintain.) I and many others have in > > practice used lists in this way. (Read any of the Artes of Nahuatl or other > > indigenous languages of Mexico from the XVI-XVII centuries: they have lots > > of lists used in this way.) So I'm confused by what you're saying. > > > > The reason that generalizations must be entrenched is that (the grammar of) > > a language consists of what has been entrenched in (learned by) the minds of > > its users. 
If a linguist thinks of a rule, it has some place in his > > cognition, but unless it corresponds to something in the minds of the > > language's users, that is a relatively irrelevant fact. Cognitive Grammar > > was important in that it affirmed this fact and in other ways provided a > > framework in which the analysis was natural. > > > > --David Tuggy > > From david_tuggy at sil.org Thu Jul 3 00:11:21 2008 From: david_tuggy at sil.org (David Tuggy) Date: Wed, 2 Jul 2008 19:11:21 -0500 Subject: Rules vs. Lists In-Reply-To: <7616afbc0807011923u7573b978lc51e66a4d0f4bc48@mail.gmail.com> Message-ID: I'm afraid I'm not following you, Rob. Why is there still argument? Well, no reason *my* papers should have settled it, particularly, but from my viewpoint practically all the arguments I've seen have come because people still hanker after a simple choice of either rules or lists, but don't want to accept both. Why keep the lists? Because people (language users) do. And keep rules for the same reason. I don't know what you mean by "the power lists can give you ... to represent rules". I don't think that "abstracting rules will always be an economy." I don't think we do it to be economical, but because we like to see what things that we know have in common, and (secondarily in some logical sense) we like to make other things like them. I don't believe we (or at least I) extract all the rules our data might support. I've had, and I expect all of us have had, repeatedly, the experience of having someone point out a generality and immediately sensing either (a) Of course, I already half knew that, and (b) Whoa! Really?? I would *never* have seen that! (But sure enough, it's there in the data.) The (a) response fits for me with the idea that I do in fact have some rules in my head and can recognize at least some of them, even when they are less than fully conscious. 
The (b) experience fits with the idea that not all possible generalizations are of that type: already there and only needing to be wakened into consciousness. If it should turn out that there are, entrenched in the minds of users of a language, more rules than pieces of data by some metric applied at some level, it wouldn't shake me up very badly. (By my lights the "data" themselves are schemas: *everything* that constitutes a language is a pattern, i.e. is schematic, is a generalization over specifics, is a rule.) If you're trying to argue that the rules are generated anew (all of them) whenever needed, I don't see any reason to think that is true, and several reasons for not thinking it. I don't see why "the complexity, power, or number of rules which can be generalized" is the only important point: to me the complexity, power or number of rules that actually are generalized, and entrenched as conventional in users' minds, is at least as important. It is only those rules, not the potential ones, that constitute the languages they speak. But as I say, I'm not sure I'm understanding you. --David Tuggy Rob Freeman wrote: > Hi David, > > I agree you can see the now extensive "usage-based" literature as a > historical use of "lists" to specify grammar. That is where this > thread came from after all. > > But there is surely something missing. As you say, your own papers go > back 20 or more years now. Why is there still argument? > > And the "rules" folks are right. A description of language which > ignores generalizations is clearly incomplete. We don't say only what > has been said before. > > And yet if you can abstract all the important information from lists > of examples using rules, why keep the lists? > > So the argument goes round and round. > > What I think is missing is an understanding of the power lists can > give you... to represent rules. That's why I asked how many rules you > could abstract from a given list. 
> > While we believe abstracting rules will always be an economy, there > will be people who argue lists are unnecessary. And quite rightly, if > it were possible to abstract everything about a list of examples using > a smaller set of rules, there would be no need to keep the examples. > > The point I'm trying to make here is that when placed in contrast with > rules, what is important about lists might not be that you do not want > to generalize them (as rules), but that a list may be a more compact > representation for all the generalizations which can be made over it, > than the generalizations (rules) are themselves. > > You express it perfectly: > > "...if you have examples A, B, C and D and extract a schema or rule > capturing what is common to each pair, you have 6 potential rules, > (AB, AC, AD, BC, BD, and CD), so sure, in theory you could have more > rules than subcases." > > It is this power which I am wondering has never been used to specify grammar. > > You say you wouldn't "expect" to find this true in practice: "I > wouldn't expect to find more rules than examples". But has anyone > looked into it? It is possible in theory. Has anyone demonstrated it > is not the case for real language data? > > Consider for a minute it might be true. What would that mean for the > way we need to model language? > > I'll leave aside for a moment the other point about the importance of > the concept of entrenchment from Cognitive Grammar. I think the raw > point about the complexity, power, or number, of rules which can be > generalized from a list of examples is the more important for now. > > I'd like to see arguments against this. > > -Rob > > From wilcox at unm.edu Thu Jul 3 00:20:27 2008 From: wilcox at unm.edu (Sherman Wilcox) Date: Wed, 2 Jul 2008 18:20:27 -0600 Subject: Rules vs. 
Lists In-Reply-To: <486C1929.60504@sil.org> Message-ID: On Jul 2, 2008, at 6:11 PM, David Tuggy wrote: > (By my lights the "data" themselves are schemas: *everything* that > constitutes a language is a pattern, i.e. is schematic, is a > generalization over specifics, is a rule.) Ah, I love this. -- Sherman Wilcox From lists at chaoticlanguage.com Thu Jul 3 00:29:31 2008 From: lists at chaoticlanguage.com (Rob Freeman) Date: Thu, 3 Jul 2008 08:29:31 +0800 Subject: Rules vs. Lists In-Reply-To: Message-ID: Aya, You seem to be implying there is already a large body of literature addressing this. Do you have any references for what you describe as "list-based" systems ("more rules than examples of their application"), in particular with reference to language? For the system to be non-trivial the rules should be implicit in the examples. I particularly want to think about what such a system would look like from the point of view of the examples (e.g. surely it would mean each example would be subject to interpretation in more than one way, a given interpretation dependent on context, etc.) -Rob On Wed, Jul 2, 2008 at 9:18 PM, A. Katz wrote: > Rob, > > Here is where the concept of "functional equivalence" is very helpful. If > two ways of describing a phenomenon give the same results, then they are > functionally equivalent. That means that in essence, they are the same -- > at least as far as results of calculation are concerned. (Considerations > of processing limitations might show that one works better for a given > hardware configuration than another, but that is a somewhat different > issue.) > > Rules and lists are functionally equivalent. Logically speaking, they are > the same. > > When there are more rules than examples of their application, we call it a > list-based system. When there are many more examples of the application of > a rule than different rules, then we call it a rule-based system. 
> > That's just about different methods of arriving at the same result, and is > strictly a processing issue. In terms of describing the language, rather > than the speakers, however, there is no difference. It's all the same. > > In order to appreciate this, we have to be able to distinguish the > structure of the language from the structure of the speaker. > > Best, > > > --Aya From dharv at mail.optusnet.com.au Thu Jul 3 01:57:34 2008 From: dharv at mail.optusnet.com.au (dharv at mail.optusnet.com.au) Date: Thu, 3 Jul 2008 11:57:34 +1000 Subject: Rules vs. Lists In-Reply-To: <486C1929.60504@sil.org> Message-ID: At 7:11 PM -0500 2/7/08, David Tuggy wrote: >If it should turn out that there are, entrenched in the minds of >users of a language, Do rules exist in the minds of language users or the minds of linguists? -- David Harvey 60 Gipps Street Drummoyne NSW 2047 Australia Tel: 61-2-9719-9170 From lists at chaoticlanguage.com Thu Jul 3 02:08:40 2008 From: lists at chaoticlanguage.com (Rob Freeman) Date: Thu, 3 Jul 2008 10:08:40 +0800 Subject: Rules vs. Lists In-Reply-To: <486C1929.60504@sil.org> Message-ID: Thanks for playing the good part in this David. Very few people will even listen to a new (unentrenched? :-) argument. We are really in very broad agreement. It is just that I think there is something extra. On Thu, Jul 3, 2008 at 8:11 AM, David Tuggy wrote: > I'm afraid I'm not following you, Rob. Language is fallible, on that we can agree! > ...I don't see why "the complexity, power, or > number of rules which can be generalized" is the only important point: to me > the complexity, power or number of rules that actually are generalized, and > entrenched as conventional in users' minds, is at least as important. It is > only those rules, not the potential ones, that constitute the languages they > speak. I don't think "the complexity, power, or number of rules which can be generalized" is the only important point. 
I am only focusing on it because I think it is an important point we have been missing. But since you are not contesting this core complexity point, perhaps I should look at the importance you attach to entrenchment. I don't really want to attack the importance of entrenchment. Undoubtedly it is an important mechanism. I just don't think it is the only one. As you say, "I expect all of us have had, repeatedly, the experience of having someone point out a generality and immediately sensing either (a) Of course, I already half knew that, and (b) Whoa! Really?? I would *never* have seen that! (But sure enough, it's there in the data.)" It is these experiences I am talking about. I agree that once a generality becomes entrenched through repeated observation, especially when it assumes a "negotiated" symbolic value in a community, then people can communicate using it. It is just that I also think people can communicate by pointing out generalities which are not yet entrenched, might never have been observed at all, and which you might never have suspected of the data. But yet as soon as such a generality is pointed out, it is immediately "meaningful" to you ("I already half knew that".) In short, I think entrenchment is an important mechanism, but we need to pay attention to all the unentrenched generalities implicit in a language also. Take a vast power of generalities implicit in the examples of a language (more than there are examples), with those generalities immediately meaningful should we happen to observe them (though of course observing them all is impossible in practice), and we have what I am suggesting is missing from our current models of language. (Especially models which contrast rules vs. lists.) -Rob From david_tuggy at sil.org Thu Jul 3 02:30:23 2008 From: david_tuggy at sil.org (David Tuggy) Date: Wed, 2 Jul 2008 21:30:23 -0500 Subject: Rules vs. Lists In-Reply-To: Message-ID: In the minds of language users, including linguists. 
Hopefully what linguists consciously analyze and posit as rules will be such as to parallel what is in users' minds. Otherwise they are less interesting objects, at least for those who had hopes of their illuminating what language is. Certainly one cannot assume that every rule a linguist has posited has an analogue in anyone else's mind, much less that such an analogue is used in actual language processing. But many, I reckon, do and are. --David Tuggy dharv at mail.optusnet.com.au wrote: > At 7:11 PM -0500 2/7/08, David Tuggy wrote: > >> If it should turn out that there are, entrenched in the minds of >> users of a language, > > Do rules exist in the minds of language users or the minds of linguists? From lists at chaoticlanguage.com Thu Jul 3 06:54:51 2008 From: lists at chaoticlanguage.com (Rob Freeman) Date: Thu, 3 Jul 2008 14:54:51 +0800 Subject: Rules vs. Lists In-Reply-To: <486C3DE0.4080409@sil.org> Message-ID: On Thu, Jul 3, 2008 at 10:48 AM, David Tuggy wrote: > > **Yes. Such "tumbling to" moments are actually, I am sure, quite common > during the time when kids are learning 10 new words a day or however many it > is that they absorb. I certainly can attest to them when learning a > second/third/etc. language. But (a) They were not part of my language until > I had them, and (b) once I'd had them they are on the way to being > entrenched as conventional. When I encounter the same, or similar data again > I will recognize it. I think these "tumbling to" (ah-ha?) moments happen, on some level, every time we say something new. Indeed I think they are a model for how we say new things (to answer your question "Why?") Once something new has been said, it is on its way to being conventionalized. Eventually the original "tumbling to" meaning may become ossified and even replaced. I agree this conventionalization aspect has been well modeled by CG. It is also important, but is already being done well. 
I won't question what CG tells us about the social, conventionalized character of language. I'm only suggesting people consider this "tumbling to" aspect to language. If it occurs, how many such new generalizations might be made given a certain corpus of language examples, etc. What it seeks to model are things which can be said. Whether something which can be said only becomes "part of my language" once I have said it, is surely only a matter of definition. Just to rewind and recap a little. The question at issue here is how many generalizations/rules can be made about a list of examples. In particular whether there can be more, many more than there are examples. And the implications this might have for what can be said in a language. -Rob From amnfn at well.com Thu Jul 3 13:32:03 2008 From: amnfn at well.com (A. Katz) Date: Thu, 3 Jul 2008 06:32:03 -0700 Subject: Rules vs. Lists In-Reply-To: <7616afbc0807021729o3ac722b7n8113af0a0dbfe557@mail.gmail.com> Message-ID: Rob, No, I am not implying there is a vast body of literature on this topic. My assurance comes from logic. A list is a set of rules with a single example of the application of each rule. When we speak of a rule-based system, we mean one where each rule has many examples of its application. When we speak of a list-based system, we speak of a system where there are more rules than instances where they are applied. For instance, the multiplication table can be described either way. We can memorize each entry and describe it as a list. Or we can give a single rule, x times y is x added together y times. They are functionally equivalent. You can get the right answer either way. But when we make a separate rule for each instance, we call that listing. When we allow a single rule to cover many instances, we call that rule-based. It doesn't take any previous literature to determine this is so. It is so by definition. It's a tautology. 
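The multiplication-table example above can be made concrete. A small sketch (illustrative only; it represents the "list" as a stored table and the "rule" as repeated addition, and checks that the two descriptions are functionally equivalent):

```python
# List-based description: one stored entry per fact.
times_table = {(x, y): x * y for x in range(1, 11) for y in range(1, 11)}

# Rule-based description: a single rule, x times y is x added together y times.
def times_rule(x, y):
    total = 0
    for _ in range(y):
        total += x
    return total

# Functional equivalence: every listed entry matches the rule's output.
assert all(times_table[(x, y)] == times_rule(x, y) for (x, y) in times_table)
print(times_table[(7, 8)], times_rule(7, 8))  # 56 56
```

Which description "the system uses" is invisible from the outputs alone, which is the point about distinguishing the structure of the language from the structure of the speaker.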
Best, --Aya On Thu, 3 Jul 2008, Rob Freeman wrote: > Aya, > > You seem to be implying there is already a large body of literature > addressing this. > > Do you have any references for what you describe as "list-based" > systems ("more rules than examples of their application"), in > particular with reference to language? > > For the system to be non-trivial the rules should be implicit in the examples. > > I particularly want to think about what such a system would look like > from the point of view of the examples (e.g. surely it would mean each > example would be subject to interpretation in more than one way, a > given interpretation dependent on context, etc.) > > -Rob > > On Wed, Jul 2, 2008 at 9:18 PM, A. Katz wrote: > > Rob, > > > > Here is where the concept of "functional equivalence" is very helpful. If > > two ways of describing a phenomenon give the same results, then they are > > functionally equivalent. That means that in essence, they are the same -- > > at least as far as results of calculation are concerned. (Considerations > > of processing limitations might show that one works better for a given > > hardware configuration than another, but that is a somewhat different > > issue.) > > > > Rules and lists are functionally equivalent. Logically speaking, they are > > the same. > > > > When there are more rules than examples of their application, we call it a > > list-based system. When there are many more examples of the application of > > a rule than different rules, then we call it a rule-based system. > > > > That's just about different methods of arriving at the same result, and is > > strictly a processing issue. In terms of describing the language, rather > > than the speakers, however, there is no difference. It's all the same. > > > > In order to appreciate this, we have to be able to distinguish the > > structure of the language from the structure of the speaker. 
> > > > Best, > > > > > > --Aya > > From amnfn at well.com Thu Jul 3 13:59:30 2008 From: amnfn at well.com (A. Katz) Date: Thu, 3 Jul 2008 06:59:30 -0700 Subject: Rules vs. Lists In-Reply-To: <7616afbc0807022354t192b0ab5y3d47c4a3a2e1bf3d@mail.gmail.com> Message-ID: Concerning memory and entrenchment, I think that the ability to memorize a list is related to the ability to derive the list in the first place. They are not as separate psychologically as some discussions among linguists seem to assume. It is easier to memorize what you understand, because memory isn't completely passive. People who are talented at subjects such as math, music and languages are often complimented on their good memories, because they are able to come up with the individual items on a list faster. (The list could be a series of numbers as in the multiplication table, or a series of notes, as in a musical composition, or a series of words -- such as the words to a poem). People who are less talented at these tasks attempt passive memory work and fail. Then they attribute great memory ability to those who surpass them at these tasks. But in fact, those who do well are the ones who are able to re-derive any item they may have forgotten in a split second. To know the multiplication table well does involve memory, but is helped by the ability to instantly re-derive any entry one may have forgotten. Great musicians do memorize the notes to a composition, but they are greatly aided by their ability to anticipate what comes next. They can instantly recompose any phrase they may have forgotten, because they understand the regularity behind the composition. When we memorize a poem written by someone else, we often rely on metrical rules and rhyme schemes to recompose any lines we may have forgotten. 
Even in ordinary conversation, when people employ idioms, cliches and set phrases, those who can rederive them, who understand how they are put together, are ultimately more successful at employing them to greater effect. Talking about greater and lesser abilities in language use by native speakers has become taboo among linguists -- but not among people who teach literature and foreign languages. My observations here come from my own experiences with language use and literature and from experiences as a teacher. I suspect that they are echoed by the experiences of others, but it's not likely that you will find articles written about this by linguists. Best, --Aya On Thu, 3 Jul 2008, Rob Freeman wrote: > On Thu, Jul 3, 2008 at 10:48 AM, David Tuggy wrote: > > > > **Yes. Such "tumbling to" moments are actually, I am sure, quite common > > during the time when kids are learning 10 new words a day or however many it > > is that they absorb. I certainly can attest to them when learning a > > second/third/etc. language. But (a) They were not part of my language until > > I had them, and (b) once I'd had them they are on the way to being > > entrenched as conventional. When I encounter the same, or similar data again > > I will recognize it. > > I think these "tumbling to" (ah-ha?) moments happen, on some level, > every time we say something new. > > Indeed I think they are a model for how we say new things (to answer > your question "Why?") > > Once something new has been said, it is on its way to being > conventionalized. Eventually the original "tumbling to" meaning may > become ossified and even replaced. I agree this conventionalization > aspect has been well modeled by CG. It is also important, but is > already being done well. > > I won't question what CG tells us about the social, conventionalized > character of language. I'm only suggesting people consider this > "tumbling to" aspect to language. 
If it occurs, how many such new > generalizations might be made given a certain corpus of language > examples etc. > > What it seeks to model are things which can be said. Whether something > which can be said, only becomes "part of my language" once I have said > it, is surely only a matter of definition. > > Just to rewind and recap a little. The question at issue here is how > many generalizations/rules can be made about a list of examples. In > particular whether there can be more, many more than there are > examples. And the implications this might have for what can be said in > a language. > > -Rob > > From lists at chaoticlanguage.com Thu Jul 3 22:31:23 2008 From: lists at chaoticlanguage.com (Rob Freeman) Date: Fri, 4 Jul 2008 06:31:23 +0800 Subject: Rules vs. Lists In-Reply-To: Message-ID: On Thu, Jul 3, 2008 at 9:32 PM, A. Katz wrote: > ...When > we speak of list-based system, we speak of a system where there are more > rules than instances where they are applied. Can you give me even one example of such a system, Aya? For the system to be non-trivial the rules should be implicit in the examples. -Rob From dlevere at ilstu.edu Thu Jul 3 23:53:46 2008 From: dlevere at ilstu.edu (Daniel Everett) Date: Thu, 3 Jul 2008 18:53:46 -0500 Subject: Rules vs. Lists In-Reply-To: <7616afbc0807031531u5fa022bcic84a1648e888bcc5@mail.gmail.com> Message-ID: I just 'published' a short bit on this as part of a debate on EDGE, responding to work by Chris Anderson. The discussion as a whole, not so much my reply, might interest FUNKNET readers. -- Dan http://www.edge.org/discourse/the_end_of_theory.html -------------------------------------------------------------- This message was sent using Illinois State University Webmail. From amnfn at well.com Fri Jul 4 02:42:25 2008 From: amnfn at well.com (A. Katz) Date: Thu, 3 Jul 2008 19:42:25 -0700 Subject: Rules vs. 
Lists In-Reply-To: <7616afbc0807031531u5fa022bcic84a1648e888bcc5@mail.gmail.com> Message-ID: On Fri, 4 Jul 2008, Rob Freeman wrote: > On Thu, Jul 3, 2008 at 9:32 PM, A. Katz wrote: > > ...When > > we speak of list-based system, we speak of a system where there are more > > rules than instances where they are applied. > > Can you give me even one example of such a system, Aya? I think I already mentioned the multiplication tables. A computer program (or a human mind) that handles the multiplication tables by listing the answers in a table is a list-based system. A computer program that uses a subroutine to solve the problems with variables for x and y (where x*y is being calculated) is a rule-based system -- and the same goes for a human mind that does this. Both systems are functionally equivalent and can give correct results. Each serves different processing constraints -- memory versus speed of calculating. If you want to be shown situations unlike the multiplication table where the data being processed tends to require one or the other type of system, think about the rules for spelling English versus the rules for spelling Spanish. The Spanish spelling system lends itself to rules, as it is highly regular. The English spelling system lends itself to lists, as it is highly irregular. It's not that English spelling has no rules -- it's just that there are so darned many of them, that for the most frequently used words it's almost as if there is a different rule for every word. Not quite, but almost. Best, --Aya > > For the system to be non-trivial the rules should be implicit in the examples. > > -Rob > > From fgk at ling.helsinki.fi Fri Jul 4 07:32:31 2008 From: fgk at ling.helsinki.fi (fgk) Date: Fri, 4 Jul 2008 10:32:31 +0300 Subject: Rules vs. Lists In-Reply-To: Message-ID: As for the purported lack of linguistic research on greater and lesser abilities in language use by native speakers, I recommend consulting the book "Understanding Complex Sentences. 
Native Speaker Variation in Syntactic Competence" by Ngoni Chipere (Palgrave Macmillan, New York 2003). Best, Fred Karlsson From ab.stenstrom at telia.com Fri Jul 4 09:01:25 2008 From: ab.stenstrom at telia.com (=?iso-8859-1?Q?Anna-Brita_Stenstr=F6m?=) Date: Fri, 4 Jul 2008 11:01:25 +0200 Subject: unsubscribe Message-ID: Hello, I've been trying in vain lots of times to unsubscribe. Could you help me please? Best, Anna-Brita Stenström ab.stenstrom at telia.com From lists at chaoticlanguage.com Fri Jul 4 10:30:10 2008 From: lists at chaoticlanguage.com (Rob Freeman) Date: Fri, 4 Jul 2008 18:30:10 +0800 Subject: Rules vs. Lists In-Reply-To: <20080703185346.eueqyffmog80cwog@isuwebmail.ilstu.edu> Message-ID: That's actually a pretty good reference Dan. Thanks. For my taste there's a little too much emphasis on the practical efficacy of the approach, and not enough on why it might be so, but the general idea is along the same lines. Do you know of any discussions why this might be the case: why for some systems we might need to eschew theory and work directly with examples? My own take is that for some systems there may be more rules implicit in the examples than there are examples themselves. So, not so much "The End of Theory" as the birth of the theory that there can be lots more theories buried in a set of data than we've ever imagined we needed to look for before. But people really don't like this kind of meta-theory, so I'm trying to keep it as concrete as possible. That's why I'm focusing on the practical problem of counting the number of rules you can abstract from a given set of examples. If it turns out there are more rules than examples, then that is something concrete we can deal with. 
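[Editorial note: the counting problem Rob describes can be sketched concretely. The sketch below is added for illustration and reuses David Tuggy's A, B, C, D example from earlier in the thread; nothing else in it comes from the original posts.]

```python
# If each pair of stored examples licenses one candidate generalization
# (as in David Tuggy's A, B, C, D illustration), N examples already
# yield N*(N-1)/2 candidate "rules", which exceeds N as soon as N > 3.
from itertools import combinations

examples = ["A", "B", "C", "D"]
first_level = list(combinations(examples, 2))
assert len(first_level) == 6          # AB, AC, AD, BC, BD, CD

# Schemas over schemas ("what's common to AB-CD, AB-AC, ...") grow
# faster still: pairs of the 6 first-level schemas give 15 candidates.
second_level = list(combinations(first_level, 2))
assert len(second_level) == 15

# More generally, pairwise rules outnumber examples once N > 3.
for n in range(4, 20):
    assert n * (n - 1) // 2 > n
```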
-Rob On Fri, Jul 4, 2008 at 7:53 AM, wrote: > > > I just 'published' a short bit on this as part of a debate on EDGE, > responding to work by Chris Anderson. > > The discussion as a whole, not so much my reply, might interest FUNKNET > readers. > > -- Dan > > http://www.edge.org/discourse/the_end_of_theory.html > > > -------------------------------------------------------------- > This message was sent using Illinois State University Webmail. From amnfn at well.com Fri Jul 4 12:49:01 2008 From: amnfn at well.com (A. Katz) Date: Fri, 4 Jul 2008 05:49:01 -0700 Subject: Rules vs. Lists In-Reply-To: <486DD20F.3000802@ling.helsinki.fi> Message-ID: Yes, thank you for pointing that out. I did in fact know that Ngoni Chipere had done some research on that in the late 90's, but I lost touch, and I did not know there was a book out. I will definitely get the book. Best, --Aya Katz On Fri, 4 Jul 2008, fgk wrote: > As for the purported lack of linguistic research on greater and lesser > abilities in > language use by native speakers, I recommend consulting the book > "Understanding Complex Sentences. Native Speaker Variation in > Syntactic Competence" by Ngoni Chipere (Palgrave Macmillan, New York 2003). > Best, > Fred Karlsson > > From lists at chaoticlanguage.com Fri Jul 4 23:52:37 2008 From: lists at chaoticlanguage.com (Rob Freeman) Date: Sat, 5 Jul 2008 07:52:37 +0800 Subject: Rules vs. Lists In-Reply-To: Message-ID: On Fri, Jul 4, 2008 at 10:42 AM, A. Katz wrote: > > On Fri, 4 Jul 2008, Rob Freeman wrote: > >> On Thu, Jul 3, 2008 at 9:32 PM, A. Katz wrote: >> > ...When >> > we speak of list-based system, we speak of a system where there are more >> > rules than instances where they are applied. >> >> Can you give me even one example of such a system, Aya? > > ... English spelling ... it's almost as if there is a different rule for every word. Not > quite, but almost. 
I'm grateful you are thinking about this, Aya, and natural language being of this form is indeed what I am suggesting; but English spelling, as you say, is probably only almost this way, not quite. I'm still not sure you see what I mean by a system which has more rules implicit in the examples than there are examples themselves. Can you show me a system where there are more rules implicit in the examples than there are examples themselves, and explain why it must be so? -Rob From amnfn at well.com Sat Jul 5 13:13:52 2008 From: amnfn at well.com (A. Katz) Date: Sat, 5 Jul 2008 06:13:52 -0700 Subject: Rules vs. Lists In-Reply-To: <7616afbc0807041652x7f27364fv4ea7e15280cbf66d@mail.gmail.com> Message-ID: Rob, MORE rules implicit than examples? That's a stretch, as a list of unitary items has at most as many rules as examples. However, I suppose the English derivational system might provide something like that. If English lexemes are listed one at a time, and most English speakers are unaware that they have subparts (and I've done research on this -- monolingual English speakers are amazingly imperceptive about derivations that are obvious to the rest of us), and if you then add the derivational rules that might account for some of these words, then you have a system where each item is a rule in itself, and also some rules for deriving the items, so there are more rules than examples. But... this is only so if you try to conflate the derivational insensitivity of the average English speaker with the patterns implicit in the words. So, in fact, this is not different from the system where a mathematically innocent child memorizes a multiplication table of whose derivation he is completely unaware. Each item listed in the table is a rule, and as he grows older and wiser, he may discover that there are other rules whereby the table could be derived. 
It's kind of a cheat, because we are listing from the point of view of more than one speaker, so that two systems overlap. But because the knowledge of speakers can evolve over time, such an overlap is not psychologically improbable. Best, --Aya On Sat, 5 Jul 2008, Rob Freeman wrote: > On Fri, Jul 4, 2008 at 10:42 AM, A. Katz wrote: > > > > On Fri, 4 Jul 2008, Rob Freeman wrote: > > > >> On Thu, Jul 3, 2008 at 9:32 PM, A. Katz wrote: > >> > ...When > >> > we speak of list-based system, we speak of a system where there are more > >> > rules than instances where they are applied. > >> > >> Can you give me even one example of such a system, Aya? > > > > ... English spelling ... it's almost as if there is a different rule for every word. Not > > quite, but almost. > > I'm grateful you are thinking about this Aya, and it is indeed what I > am suggesting that natural language is of this form, but English > spelling, as you say, is probably only almost this way, but not quite. > > I'm still not sure you see what I mean by a system which has more > rules implicit in the examples than there are examples themselves. > > Can you show me a system where there are more rules implicit in the > examples than there are examples themselves, and explain why it must > be so? > > -Rob > > From vch468d at tninet.se Sat Jul 5 14:29:30 2008 From: vch468d at tninet.se (Jouni Maho) Date: Sat, 5 Jul 2008 16:29:30 +0200 Subject: Rules vs. Lists Message-ID: May a lurker butt in with a little thought experiment? Re Rob Freeman's: > > Can you show me a system where there are more rules implicit in the > examples than there are examples themselves, and explain why it must > be so? Assume the following (complete) lexicon of a hypothetical language: berama bilama butaba metama tilaba And the rules: C > b r m l t V > e a u i Root > CVCV V2 > a R+ba > agent R+ma > causative That is, 5 list items, 6 rules. This assumes, of course, that the types of rules can be of any "kind", i.e. 
morphological, phonological, etc. Or does the question suppose that there should be restrictions on type of rules (only morphological, only phonological, etc.)? I'm not sure how easy this would be if the lexicon's size was considerably larger, but at least it's possible to devise a less-items-more-rules system as a thought experiment. I have no idea why it should be so, but it's certainly possible. By the way, would a vowel-consonant inventory (list) with its accompanying rules (phonotactics, assimilation, etc.) count as a valid less-items-more-rules system? --- jouni maho From amnfn at well.com Sat Jul 5 15:06:48 2008 From: amnfn at well.com (A. Katz) Date: Sat, 5 Jul 2008 08:06:48 -0700 Subject: Rules vs. Lists In-Reply-To: <486F854A.8050805@tninet.se> Message-ID: I assume that "the system" under consideration would be all inclusive of every item and every level, so this seems fair, although it's Rob that is leading this discussion on more rules than examples. Jouni Maho, you are implying there are roots, so as well as the lexicon, there would be a list of roots, presumably, and these would add to the number of rules. If there are roots, then presumably each root could appear with each suffix (unless there's an additional rule that says that they can't) and there should be more lexemes than you listed. The question that seems more interesting to me is: could there ever be a human language with only five lexemes? If there could, why haven't we found one like that? Language is an information-bearing code. The number of contrasts helps determine the amount of information transmitted. If there are fewer phonemes, then words have to be longer. If there are more phonemes, the same information can be transmitted in shorter words. More grammatical syntax allows for the same information to be coded in shorter sentences, in terms of word count. Less grammatical morphology requires more words per sentence. It all evens out, based on a very simple calculation. 
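[Editorial note: the "very simple calculation" behind this trade-off can be sketched as a back-of-the-envelope information argument. The sketch below is added for illustration; the vocabulary and inventory sizes are round figures chosen by the editor, not measured data from any language.]

```python
# To keep V distinct words, a language with P phonemes needs words at
# least log base P of V segments long, so a smaller phoneme inventory
# forces longer words -- the trade-off "evens out".
import math

def min_word_length(vocabulary_size, num_phonemes):
    """Shortest word length able to distinguish vocabulary_size words."""
    return math.ceil(math.log(vocabulary_size) / math.log(num_phonemes))

vocab = 50_000                            # an illustrative vocabulary size
assert min_word_length(vocab, 11) == 5    # small inventory: longer words
assert min_word_length(vocab, 44) == 3    # large inventory: shorter words
```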
Languages of the world deploy the same basic phonological inventory inherent in our physiology in different ways in order to transmit about the same amount of information per time unit. Every language codes for a certain amount of redundancy in order to deal with noise in the signal. Redundancy could be viewed as adding extra rules that don't directly help with transmission of information. Is that what you are getting at, Rob? Best, --Aya Katz On Sat, 5 Jul 2008, Jouni Maho wrote: > May a lurker butt in with a little thought experiment? > > Re Rob Freeman's: > > > > Can you show me a system where there are more rules implicit in the > > examples than there are examples themselves, and explain why it must > > be so? > > Assume the following (complete) lexicon of a hypothetical language: > > berama > bilama > butaba > metama > tilaba > > And the rules: > > C > b r m l t > V > e a u i > Root > CVCV > V2 > a > R+ba > agent > R+ma > causative > > That is, 5 list items, 6 rules. This assumes, of course, that the types > of rules can be of any "kind", i.e. morphological, phonological, etc. Or > does the question suppose that there should be restrictions on type of > rules (only morphological, only phonological, etc.)? > > I'm not sure how easy this would be if the lexicon's size was > considerably larger, but at least it's possible to devise a > less-items-more-rules system as a thought experiment. I have no idea why > it should be so, but it's certainly pissoble. > > By the way, would a vowel-consonant inventory (list) with it's > accompanying rules (phonotax, assimilation, etc.) count as a valid > less-items-more-rules system? > > --- > jouni maho > > From lists at chaoticlanguage.com Sun Jul 6 06:16:39 2008 From: lists at chaoticlanguage.com (Rob Freeman) Date: Sun, 6 Jul 2008 14:16:39 +0800 Subject: Rules vs. Lists In-Reply-To: Message-ID: On Sat, Jul 5, 2008 at 9:13 PM, A. Katz wrote: > > MORE rules implicit than examples? 
That's a stretch, as a list of unitary > items has at most as many rules as examples. Thanks. I wanted you to see that. It is something different from what we have considered before. Not impossible, just not something which has been considered, to my knowledge. > However, I suppose the English derivational system might provide something > like that. If English lexemes are listed one at a time, and most English > speakers are unaware that they have subparts (and I've done research on > this -- monolingual English speakers are amazingly imperceptive about > derivations that are obvious to the rest of us), and if you then add the > derivational rules that might account for some of these words, then you > have a system where each item is a rule in itself, and also some rules for > deriving the items, so there are more rules than examples. But... this is > only so if you try to conflate the derivational insensitivity of the > average English speaker with the patterns implicit in the words. > > ... > > It's kind of a cheat, because we are listing from the point of view of > more than one speaker, so that two systems overlap. But because the > knowledge of speakers can evolve over time, such an overlap is not > psychologically improbable. Yes, if you regard each example as a rule in itself, and yet have productive rules over them, then almost trivially you will have more rules than examples. I don't think we need conflate speakers to do this. Most of us will accept that there is something unique about almost every utterance, while finding productive regularities over them. But it is not hard to find an argument that even the number of productive rules might be greater than the number of examples. As David pointed out in an earlier message: "...if you have examples A, B, C and D and extract a schema or rule capturing what is common to each pair, you have 6 potential rules, (AB, AC, AD, BC, BD, and CD), so sure, in theory you could have more rules than subcases. 
Add in levels of schemas (rules capturing what's common to AB-CD, AB-AC, ...) and you can get plenty of rules." The question is, do we? -Rob From lists at chaoticlanguage.com Sun Jul 6 06:18:03 2008 From: lists at chaoticlanguage.com (Rob Freeman) Date: Sun, 6 Jul 2008 14:18:03 +0800 Subject: Rules vs. Lists In-Reply-To: <486F854A.8050805@tninet.se> Message-ID: Right Jouni. It is somewhat laborious to factor them out, but I think we can find lots of rules once we start to look for them, even if we restrict ourselves to one kind: morphological, phonological etc. Whether a vowel-consonant inventory would count as this kind of system is another question. Phonemes are not really examples. They are classes. The right question would be to compare the number of phonemes and phonotactic rules with the number of utterances. In this context I refer you to suggestions such as Syd Lamb's that we need to relax the "linearity requirement" for phonemes in combination. An issue which goes right back to the core of the dispute between structural and functional schools in linguistics. As to why it should be so, why we should want to have more rules than examples, it is perhaps not immediately obvious. But actually such a system gives us lots of power. Power we have simply been throwing away because we have assumed fewer rules than examples. If we can constantly draw new rules out of the examples we can use all those extra rules to parametrize our system. All we need to do is look for them. -Rob On Sat, Jul 5, 2008 at 10:29 PM, Jouni Maho wrote: > May a lurker butt in with a little thought experiment? > > Re Rob Freeman's: >> >> Can you show me a system where there are more rules implicit in the >> examples than there are examples themselves, and explain why it must >> be so? 
> > Assume the following (complete) lexicon of a hypothetical language: > > berama > bilama > butaba > metama > tilaba > > And the rules: > > C > b r m l t > V > e a u i > Root > CVCV > V2 > a > R+ba > agent > R+ma > causative > > That is, 5 list items, 6 rules. This assumes, of course, that the types of > rules can be of any "kind", i.e. morphological, phonological, etc. Or does > the question suppose that there should be restrictions on type of rules > (only morphological, only phonological, etc.)? > > I'm not sure how easy this would be if the lexicon's size was considerably > larger, but at least it's possible to devise a less-items-more-rules system > as a thought experiment. I have no idea why it should be so, but it's > certainly pissoble. > > By the way, would a vowel-consonant inventory (list) with it's accompanying > rules (phonotax, assimilation, etc.) count as a valid less-items-more-rules > system? > > --- > jouni maho From lists at chaoticlanguage.com Sun Jul 6 06:20:44 2008 From: lists at chaoticlanguage.com (Rob Freeman) Date: Sun, 6 Jul 2008 14:20:44 +0800 Subject: Rules vs. Lists In-Reply-To: Message-ID: Aya, We have to be careful what we regard as "examples". As I said to Jouni phonemes should be thought of as classes not examples. Similarly "roots", "lexemes", "morphemes" etc. When speculating there are more rules than examples the real question is not how many ways can you combine X no. of lexemes, but how many lexemes can you abstract from Y utterances. And before you do that you have to define what you mean by "lexeme". What we have at root are a number of utterances with a certain amount of variation between them. You need that variation to carry a signal, as you say. But the number of lexemes you allocate will depend on where you slice that variation. Which slice of variation you allocate to lexemes, which to phonemes etc. To an extent it will be arbitrary. 
The distinction between a phoneme and a lexeme is not so clear in, for instance, tone languages. That said, if we decide the slice of variation we allocate to lexemes corresponds broadly to conventionalized meanings, it seems reasonable to me that there will be a fairly consistent number across cultures (perhaps tending a bit higher in highly conservative cultures.) You could certainly get by with only five. Computers use only two. But I doubt there will ever be a culture sufficiently innovative that it will want to think of new things to say quite that often! So the question of how many lexemes is largely one of how we choose to label the regularities we find. What I am suggesting is more basic than that. I'm suggesting that maybe when we break down utterances we have more regularities than we have thought to look for before, however we choose to label them. I don't think it is a question of redundancy, though all that extra information could be used to make the signal more robust. -Rob On Sat, Jul 5, 2008 at 11:06 PM, A. Katz wrote: > I assume that "the system" under consideration would be all inclusive of > every item and every level, so this seems fair, although it's Rob that is > leading this discussion on more rules than examples. > > Jouni Maho, you are implying there are roots, so as well as the lexicon, > there would be a list of roots, presumably, and these would add to the number > of rules. > > If there are roots, then presumably each root could appear with each > suffix, (unless there's an additional rule that says that they can't) and > there should be more lexemes than you listed. > > > The question that seems more interesting to me is: could there ever be a > human language with only five lexemes? If there could, why haven't we > found one like that? > > Language is an information bearing code. The number of contrasts helps > determine the amount of information transmitted. If there are fewer > phonemes, then words have to be longer. 
If there are more phonemes, the > same information can be transmitted in shorter words. More > grammatical syntax allows for the same information to be coded in > shorter sentences, in terms of word count. Less grammatical > morphology requires more words per sentence. It all evens out, based > on a very simple calculation. Languages of the world deploy the same > basic phonological inventory inherent in our physiology in different > ways in order to transmit about the same amount of information per > time unit. Every language codes for a certain amount of redundancy in order to > deal with noise in the signal. > > Redundancy could be viewed as adding extra rules that don't directly help > with transmission of information. Is that what you are getting at, Rob? > > Best, > > --Aya Katz From vch468d at tninet.se Sun Jul 6 13:04:34 2008 From: vch468d at tninet.se (Jouni Maho) Date: Sun, 6 Jul 2008 15:04:34 +0200 Subject: Rules vs. Lists Message-ID: Rob Freeman wrote: > > We have to be careful what we regard as "examples". As I > said to Jouni phonemes should be thought of as classes not > examples. Similarly "roots", "lexemes", "morphemes" etc. Well, you have to convince me why example-class is an important distinction to make here. I'm sorry if I seem to be running off on a tangent, but I understood the more-rules-less-examples thing as being about lists of items and the rules that apply to them, but perhaps you're actually talking about something else. Still, let me try to retract a bit, just to try to clarify to (for?) myself. When a language user extracts rules (generalisations) from a series of utterances, that assumes that the rule-extractor has analysed the utterances into an abstract list, so that each uttered "Hi!" is analysed as belonging to a set. Each generalisation (phones to a phoneme, many uttered "Hi!" to one abstract 'Hi!') is a rule, of course, but the abstract entities /a/ and "Hi!" themselves become units of a list on which other rules can apply. 
Hence also the rules themselves become members of lists. (Perhaps my earlier hypothetical example was not 5 items plus 6 rules, but rather 11 items including 6 rules.) Anyway, is "example" equal to the member of an abstract list ("Hi!" counts as one) or each uttered word ("Hi!" counts as many)? As a language user I make generalisations on various levels of abstraction. I can establish lexemes and phonemes from utterances, but I can also generalise syntactic and morphological rules that apply to only certain classes of words or phonemes (which requires that I have made the example>class analysis first). So, does the distinction example-class really matter here? --- jouni maho From amnfn at well.com Sun Jul 6 16:20:29 2008 From: amnfn at well.com (A. Katz) Date: Sun, 6 Jul 2008 09:20:29 -0700 Subject: Rules vs. Lists In-Reply-To: <7616afbc0807052320s49d5772bsf85d07f0a3624f6d@mail.gmail.com> Message-ID: Rob Freeman wrote: >What we have at root are a number of utterances with a certain amount >of variation between them. You need that variation to carry a signal, >as you say. But the number of lexemes you allocate will depend on >where you slice that variation. Which slice of variation you allocate >to lexemes, which to phonemes etc. To an extent it will be arbitrary. >The distinction between a phoneme and a lexeme is not so clear in, for >instance, tone languages. Why do you think the distinction between a phoneme and a lexeme is not so clear in tone languages? Isn't tone just one attribute out of many that a vowel can have? >That said, if we decide the slice of variation we allocate to lexemes >corresponds broadly to conventionalized meanings, it seems reasonable >to me that there will be a fairly consistent number across cultures >(perhaps tending a bit higher in highly conservative cultures.) You >could certainly get by with only five. Computers use only two. 
But I >doubt there will ever be a culture sufficiently innovative that it >will want to think of new things to say quite that often! The fact that we can productively encode the information available in any utterance of any language using a binary code as in a computer does not mean that there are any human languages that actually employ a binary code of contrasts. The fact that we favor the decimal system over binary in our numerical calculations has something to do with the limitations of our working memory. For the same reason, there are no languages with only two phonemes, (much less just two morphemes or two lexemes or two clauses). Human language doesn't work that way in real time due to processing constraints. Best, --Aya From lists at chaoticlanguage.com Mon Jul 7 04:55:05 2008 From: lists at chaoticlanguage.com (Rob Freeman) Date: Mon, 7 Jul 2008 12:55:05 +0800 Subject: Rules vs. Lists In-Reply-To: Message-ID: Aya, You seem to have taken too seriously my little joke that no society would be sufficiently innovative to want only two conventionalized forms of speech. I'm sure there are all kinds of cognitive restraints which favor shorter sequences of more symbols. I always remember the Japanese colleague who said despairingly of English "The letters are easy, but there are just so many of them all together" :-) It would be a fun conversation to talk about what cognitive restraint fixed our common arithmetic base at exactly the most common number of fingers. Equally I would like to see how you allocate tone to a vowel in Chinese without first knowing the word. But I fear that all such argument about one systematization or another might take us away from the point I am trying to make here. The point I want to focus on is that, whatever your classification of elements, it may be possible to find more rules over combinations than there are combinations in the first place. -Rob On Mon, Jul 7, 2008 at 12:20 AM, A. 
Katz wrote: > Rob Freeman wrote: > >>What we have at root are a number of utterances with a certain amount >>of variation between them. You need that variation to carry a signal, >>as you say. But the number of lexemes you allocate will depend on >>where you slice that variation. Which slice of variation you allocate >>to lexemes, which to phonemes etc. To an extent it will be arbitrary. >>The distinction between a phoneme and a lexeme is not so clear in, for >>instance, tone languages. > > Why do you think the distinction between a phoneme and a lexeme is not so > clear in tone languages? Isn't tone just one attribute out many that a > vowel can have? > > >>That said, if we decide the slice of variation we allocate to lexemes >>corresponds broadly to conventionalized meanings, it seems reasonable >>to me that there will be a fairly consistent number across cultures >>(perhaps tending a bit higher in highly conservative cultures.) You >>could certainly get by with only five. Computers use only two. But I >>doubt there will ever be a culture sufficiently innovative that it >>will want to think of new things to say quite that often! > > The fact that we can productively encode the information available in any > utterance of any language using a binary code as in a computer does not mean > that there are any human languages that actually employ a binary code of > contrasts. > > The fact that we favor the decimal system over binary in our numerical > calculations has something to do with the limitations of our working > memory. For the same reason, there are no languages with only two > phonemes, (much less just two morphemes or two lexemes or two clauses). > > Human language doesn't work that way in real time due to processing > constraints. > > Best, > > --Aya > From lists at chaoticlanguage.com Mon Jul 7 04:58:30 2008 From: lists at chaoticlanguage.com (Rob Freeman) Date: Mon, 7 Jul 2008 12:58:30 +0800 Subject: Rules vs. 
Lists In-Reply-To: <4870C2E2.20803@tninet.se> Message-ID: Maybe you are right Jouni. Maybe the distinction example-class does not matter. Perhaps a better way to make my point is to say we need to focus on ways of breaking things down, rather than ways of putting things together. We can take it to another level and accept that "utterances" too need to be thought of as classes if you like. Accept we need to segment them from some global speech act. Or for convenience we can assume a level of phonemic or lexical abstraction and think only about how rules can be abstracted from sequences of those. What is important for my point is that we think about ways combinations of elements can be abstracted from wholes at any level. I wasn't sure if what you wanted to do was define X phonemes and then argue there can be >> X rules governing their combinations. Trivially that is so. The problem is we can't find any actual set of rules which completely explain all the combinations of phonemes we get. I want to turn that around. The interesting question for me is given Y combinations of phonemes: abcd.., aabc.., acdb.... how many generalizations/rules can we find between those combinations? More than Y? -Rob On Sun, Jul 6, 2008 at 9:04 PM, Jouni Maho wrote: > Rob Freeman wrote: >> >> We have to be careful what we regard as "examples". As I >> said to Jouni phonemes should be thought of as classes not >> examples. Similarly "roots", "lexemes", "morphemes" etc. > > Well, you have to convince me why example-class is an important distinction > to make here. > > I'm sorry if I seem to be running off on a tangent, but I understood the > more-rules-less-examples thing as being about lists of items and the rules > that apply to them, but perhaps you're actually talking about something > else. > > Still, let me try to retract a bit, just to try to clarify to (for?) myself. 
> > When a language user extracts rules (generalisations) from a series of > utterances, that assumes that the rule-extractor has analysed the utterances > into an abstract list, so that each uttered "Hi!" is analysed as belonging > to a set. > > Each generalisation (phones to a phoneme, many uttered "Hi!" to one abtract > 'Hi!') is a rule, of course, but the abstract entities /a/ and "Hi!" > themselves become units of a list on which other rules can apply. Hence also > the rules themselves become members of lists. (Perhaps my earlier > hypothetical example was not 5 items plus 6 rules, but rather 11 items > including 6 rules.) > > Anyway, is "example" equal to the member of an abstract list ("Hi!" counts > as one) or each uttered word ("Hi!" counts as many)? As a language user I > make generalisations on various levels of abstraction. I can establish > lexemes and phonemes from utterances, but I can also generalise syntactic ad > morphological rules that apply to only certain classes of words or phonemes > (which requires that I have made the example>class analysis first). So, does > the distinction example-class really matter here? > > --- > jouni maho From Vyv.Evans at brighton.ac.uk Mon Jul 7 09:42:17 2008 From: Vyv.Evans at brighton.ac.uk (Vyvyan Evans) Date: Mon, 7 Jul 2008 10:42:17 +0100 Subject: 'Language & Cognition': New journal website now live Message-ID: Dear Colleagues. We are delighted to announce that the website for the new journal: 'Language & Cognition' is now live. Please check out the website for full details on the journal: www.languageandcognition.net/journal/ The journal is provided to all members of the UK-Cognitive Linguistics Association. Membership of the Association is free for 2009 and available at a 50% reduction for 2010. Membership application details will be available soon on the journal website. All are welcome to join the Association regardless of nationality or geographical location. 
The table of contents for 2009 and 2010 is detailed below: Volume 1 (2009) Issue 1 How infants build a semantic system. Kim Plunkett (University of Oxford) The cognitive poetics of literary resonance. Peter Stockwell (University of Nottingham) Action in cognition: The case of language. Lawrence J. Taylor and Rolf A. Zwaan (Erasmus University of Rotterdam) Prototype constructions in early language development. Paul Ibbotson (University of Manchester) and Michael Tomasello (MPI for Evolutionary Anthropology, Leipzig) The Enactment of Language: 20 Years of Interactions Between Linguistic and Motor Processes. Michael Spivey (University of California, Merced) and Sarah Anderson (Cornell University) Episodic affordances contribute to language comprehension. Arthur M. Glenberg (Arizona State Universtiy), Raymond Becker (Wilfrid Laurier University), Susann Kl?tzer, Lidia Kolanko, Silvana M?ller (Dresden University of Technology), and Mike Rinck (Radboud University Nijmegen) Reviews: Daniel D. Hutto. 2008. Folk Psychological Narratives: The Sociocultural Basis of Understanding Reasons (MIT Press). Reviewed by Chris Sinha Aniruddh Patel. 2008. Music, Language, and the Brain (Oxford Univeristy Press). Reviewed by Daniel Casasanto Issue 2 Pronunciation reflects syntactic probabilities: Evidence from spontaneous speech. Harry Tily (Stanford University), Susanne Gahl (University of California, Berkeley), Inbal Arnon, Anubha Kothari, Neal Snider and Joan Bresnan (Stanford University) Causal agents in English, Korean and Chinese: The role of internal and external causation. Phillip Wolff, Ga-hyun Jeon, and Yu Li (Emory University) Ontology as correlations: How language and perception interact to create knowledge. Linda Smith (Indiana University) and Eliana Colunga (University of Colorado at Boulder) Toward a theory of word meaning. Gabriella Vigliocco, Lotte Meteyard and Mark Andrews (University College London) Spatial language in the brain. 
Mikkel Wallentin (University of Aarhus) The neural basis of semantic memory: Insights from neuroimaging. Uta Noppeney (MPI for Biological Cybernetics, Tuebingen) Reviews: Ronald Langacker. 2008. Cognitive Grammar: A basic introduction. (Oxford University Press). Reviewed by Vyvyan Evans Giacomo Rizzolatti and Corrado Sinigagalia. Mirrors in the brain: How our minds share actions and emotions. 2008. (Oxford University Press). Reviewed by David Kemmerer. Volume 2 (2010) Issue 1 Adaptive cognition without massive modularity: The context-sensitivity of language use. Raymond W. Gibbs (University of California, Santa Cruz) and Guy Van Orden (University of Cincinnati) Spatial foundations of the conceptual system. Jean Mandler (University California, San Diego and University College London) Metaphor: Old words, new concepts, imagined worlds. Robyn Carston (University College London) Language Development and Linguistic Relativity. John A. Lucy (University of Chicago) Construction Learning. Adele Goldberg (Princeton University) Space and Language: some neural considerations. Anjan Chatterjee (University of Pennsylvania) Issue 2 What can language tell us about psychotic thought? Gina Kuperberg (Tufts University) Abstract motion is no longer abstract. Teenie Matlock (University California, Merced) When gesture does and doesn't promote learning. Susan Goldin-Meadow (University of Chicago) Discourse Space Theory. Paul Chilton (Lancaster University) Relational language supports relational cognition. Dedre Gentner (Northwestern University) Talking about quantities in space. Kenny Coventry (Northumbria University). Sincerely, Vyv Evans President, UK-CLA --------------------------------------------------- Vyv Evans Professor of Cognitive Linguistics www.vyvevans.net From amnfn at well.com Mon Jul 7 13:05:42 2008 From: amnfn at well.com (A. Katz) Date: Mon, 7 Jul 2008 06:05:42 -0700 Subject: Rules vs. 
Lists In-Reply-To: <7616afbc0807062155h26a2aa66lb0317ae2698409e4@mail.gmail.com> Message-ID: Okay. Your point is that the linguistic pie can be sliced many, many different ways. I don't disagree, but I have another point that I have been trying to make: there is no difference between one method of slicing it or another, when we are studying how a language works. If it all adds up correctly, all the different ways are equivalent, and there's not any reason to prefer one method over another, unless we have adopted a particular constraint, such as economy of rules or mathematical elegance. Now, a particular speaker may adopt one way, and another speaker may adopt a second. A third speaker may adopt a third. There may be as many different ways of parsing a language as speakers, although that is doubtful and perfectly open to scientific investigation. It's okay to study the details of how speakers process language. It is also okay to find ways to describe language apart from speakers. What is not okay is to confuse what any given speaker does with how the language works. Best, --Aya On Mon, 7 Jul 2008, Rob Freeman wrote: > Aya, > > You seem to have taken too seriously my little joke that no society > would be sufficiently innovative to want only two conventionalized > forms of speech. I'm sure there are all kinds of cognitive restraints > which favor shorter sequences of more symbols. I always remember the > Japanese colleague who said despairingly of English "The letters are > easy, but there are just so many of them all together" :-) > > It would be a fun conversation to talk about what cognitive restraint > fixed our common arithmetic base at exactly the most common number of > fingers. Equally I would like to see how you allocate tone to a vowel > in Chinese without first knowing the word. But I fear that all such > argument about one systematization or another might take us away from > the point I am trying to make here. 
The point I want to focus on is > that, whatever your classification of elements, it may be possible to > find more rules over combinations than there are combinations in the > first place. > > -Rob > > On Mon, Jul 7, 2008 at 12:20 AM, A. Katz wrote: > > Rob Freeman wrote: > > > >>What we have at root are a number of utterances with a certain amount > >>of variation between them. You need that variation to carry a signal, > >>as you say. But the number of lexemes you allocate will depend on > >>where you slice that variation. Which slice of variation you allocate > >>to lexemes, which to phonemes etc. To an extent it will be arbitrary. > >>The distinction between a phoneme and a lexeme is not so clear in, for > >>instance, tone languages. > > > > Why do you think the distinction between a phoneme and a lexeme is not so > > clear in tone languages? Isn't tone just one attribute out many that a > > vowel can have? > > > > > >>That said, if we decide the slice of variation we allocate to lexemes > >>corresponds broadly to conventionalized meanings, it seems reasonable > >>to me that there will be a fairly consistent number across cultures > >>(perhaps tending a bit higher in highly conservative cultures.) You > >>could certainly get by with only five. Computers use only two. But I > >>doubt there will ever be a culture sufficiently innovative that it > >>will want to think of new things to say quite that often! > > > > The fact that we can productively encode the information available in any > > utterance of any language using a binary code as in a computer does not mean > > that there are any human languages that actually employ a binary code of > > contrasts. > > > > The fact that we favor the decimal system over binary in our numerical > > calculations has something to do with the limitations of our working > > memory. For the same reason, there are no languages with only two > > phonemes, (much less just two morphemes or two lexemes or two clauses). 
> > > > Human language doesn't work that way in real time due to processing > > constraints. > > > > Best, > > > > --Aya > > > > From amnfn at well.com Mon Jul 7 13:19:20 2008 From: amnfn at well.com (A. Katz) Date: Mon, 7 Jul 2008 06:19:20 -0700 Subject: Rules vs. Lists In-Reply-To: <7616afbc0807062155h26a2aa66lb0317ae2698409e4@mail.gmail.com> Message-ID: Rob Freeman wrote: >It would be a fun conversation to talk about what cognitive restraint >fixed our common arithmetic base at exactly the most common number of >fingers. It may be that we could do arithmetic with base nine or base eleven just as well, and we chose the exact number of our fingers to make up the decimal system. But the fact that we didn't choose base two isn't on account of the number of fingers we have. We have two hands, after all, and we could have used them to count in base 2. >Equally I would like to see how you allocate tone to a vowel >in Chinese without first knowing the word. But I fear that all such The fact that the tone of a word in Chinese is part of its lexicon entry does not in any way take away from the phonemic status of tone. You might as well say that you can't allocate consonants to the onset of a syllable in English without knowing which word it is. Of course, you can't. Monomorphemic words are made up of a list of phonemes. (Or, if you like, morphemes are made of phonemes.) The list is different for each monomorphemic word. That doesn't take away the phonemic status of the units in the list. Right? --Aya From JVanness at iie.org Mon Jul 7 17:40:11 2008 From: JVanness at iie.org (Vanness, Justin) Date: Mon, 7 Jul 2008 13:40:11 -0400 Subject: Fulbright Awards in TEFL 2009-10 Message-ID: Good news - the Fulbright Scholar Program is featuring Teaching English as a Foreign Language (TEFL) awards in nearly every world region for the 2009-10 competition that is currently underway. Consider a Fulbright grant for lecturing, researching, or both. 
In Latin America, grants are available in Panama, Venezuela, Mexico, Nicaragua, Guatemala, Honduras, and Chile. In Africa, there are grants in Cote d'Ivorie and Mauritius. And in Asia, there are grants in Indonesia, Kyrgyz Republic, Turkmenistan, Uzbekistan, Taiwan, and Mongolia. While each grant is different, a brief sampling of topics of interest includes methodology, communications techniques, textbook analysis, language learning software, and English for professional purposes. While language and teaching experience preferences vary, English is sufficient in most cases. Applications for 2009-2010 are due by August 1, 2008. US citizenship and a Ph.D. or its equivalent terminal degree are required. Apply online at: http://www.cies.org/us_scholars/us_awards/ Contact Joseph Graff (jgraff at cies.iie.org; 202.686.6239) or Carol Robles (crobles at cies.iie.org; 202.686.6238) regarding Latin American awards; Debra Egan (degan at cies.iie.org; 202.686.6230) regarding African awards; and Michael Zdanovich (mzdanovich at cies.iie.org; 202.686.7873) regarding Asian awards. From lists at chaoticlanguage.com Tue Jul 8 00:36:00 2008 From: lists at chaoticlanguage.com (Rob Freeman) Date: Tue, 8 Jul 2008 08:36:00 +0800 Subject: Rules vs. Lists In-Reply-To: Message-ID: My point is partially that "the linguistic pie can be sliced many ways". Thanks for acknowledging that. But there is more. Something happens when there can be more ways of slicing than there are things to slice. There is another aspect. The idea of more rules than examples seems only surprising at first, but it is important. The thing is, if there can be more rules than examples, you can be never done slicing. That's because for every list of "slices" you make there can be another, longer, list to be made. Each list of rules you make either constitutes, or produces, an even longer list, which implies a longer list etc. Such a system operates on itself to constantly produce complexity. 
It's just a quirk of the system, but very nice, because it predicts change, drift, etc, and also gives us considerable scope for complexity, "new ideas", even "free will" if you like. (The system is less specified than one which can be abstracted with a smaller number of rules, it is unstable, even random at one level, liable to go off at tangents and develop in completely different ways, produce different languages etc.) So it is not quite that "there is no difference between one method of slicing it or another." Because no set of slices is complete. Each set, list, of "slices" always implies another, larger, set. More than one larger set actually, should we choose to look for them. I think this is right. It seems to be the case. Worth investigating, anyway. Whether there is no end to the list of grammars which linguists can derive may be questioned by some, but it seems sure there is no end to things that can be said. I'm not sure if the premise of a set without end is open to scientific investigation. It is hard to falsify. Fortunately it is easy to go to the other end of the problem and explore whether you can find more and more rules from a given set of examples. It seems quite a small thing, that there might be rules than examples, but it has consequences that imply a qualitative difference in how language works which go beyond what any given speaker does. I think we should look at the possibility carefully. -Rob P.S. We can look at the significance of tone for the analycity of phonemes if you like. It may be relevant to the idea of more rules than examples. But I would like to hear other people's opinions first. In particular I'd like to hear how this relates in standard theory to that other problem of phonemes being modified in context, voicing assimilation in Russian obstruent clusters was it, the classic example of this? On Mon, Jul 7, 2008 at 9:05 PM, A. Katz wrote: > Okay. Your point is that the linguistic pie can be sliced many, many > different ways. 
I don't disagree, but I have another point that I have > been trying to make: there is no difference between one method of > slicing it or another, when we are studying how a language works. If it > all adds up correctly, all the different ways are equivalent, and there's > not any reason to prefer one method over another, unless we have adopted a > particular constraint, such as economy of rules or mathematical elegance. > > > Now, a particular speaker may adopt one way, and another speaker may adopt > a second. A third speaker may adopt a third. There may be as many > different ways of parsing a language as speakers, although that is > doubtful and perfectly open to scientific investigation. > > It's okay to study the details of how speakers process language. It is > also okay to find ways to describe language apart from speakers. What is > not okay is to confuse what any given speaker does with how the language > works. > > > Best, > > --Aya From amnfn at well.com Tue Jul 8 15:35:04 2008 From: amnfn at well.com (A. Katz) Date: Tue, 8 Jul 2008 08:35:04 -0700 Subject: Rules vs. Lists In-Reply-To: <7616afbc0807071736t2f9d8d18vd9752ea9f7671b94@mail.gmail.com> Message-ID: Okay. So your other point is that the grammar of any language at any given point is somewhat indeterminate, because it just misses resolving itself one way or the other. That's true, of course. Sapir made that point when he spoke of linguistic drift. What I find more interesting, (while acknowledging your point), is that languages don't just drift. They cycle. They keep coming up with the same ways of resolving the indeterminacy, after having seemingly gone in a different direction for a while. The reason I find that interesting is because all the while language appears to be evolving, it's really staying the same more or less. Show me one primitive language! There is none to be found. We have people with primitive material culture in isolated pockets of the world, but NO primitive languages. 
Best, --Aya On Tue, 8 Jul 2008, Rob Freeman wrote: > My point is partially that "the linguistic pie can be sliced many > ways". Thanks for acknowledging that. > > But there is more. Something happens when there can be more ways of > slicing than there are things to slice. > > There is another aspect. The idea of more rules than examples seems > only surprising at first, but it is important. The thing is, if there > can be more rules than examples, you can be never done slicing. That's > because for every list of "slices" you make there can be another, > longer, list to be made. Each list of rules you make either > constitutes, or produces, an even longer list, which implies a longer > list etc. > > Such a system operates on itself to constantly produce complexity. > > It's just a quirk of the system, but very nice, because it predicts > change, drift, etc, and also gives us considerable scope for > complexity, "new ideas", even "free will" if you like. (The system is > less specified than one which can be abstracted with a smaller number > of rules, it is unstable, even random at one level, liable to go off > at tangents and develop in completely different ways, produce > different languages etc.) > > So it is not quite that "there is no difference between one method of > slicing it or another." Because no set of slices is complete. Each > set, list, of "slices" always implies another, larger, set. More than > one larger set actually, should we choose to look for them. > > I think this is right. It seems to be the case. Worth investigating, anyway. > > Whether there is no end to the list of grammars which linguists can > derive may be questioned by some, but it seems sure there is no end to > things that can be said. I'm not sure if the premise of a set without > end is open to scientific investigation. It is hard to falsify. 
> Fortunately it is easy to go to the other end of the problem and > explore whether you can find more and more rules from a given set of > examples. > > It seems quite a small thing, that there might be rules than examples, > but it has consequences that imply a qualitative difference in how > language works which go beyond what any given speaker does. > > I think we should look at the possibility carefully. > > -Rob > > P.S. We can look at the significance of tone for the analycity of > phonemes if you like. It may be relevant to the idea of more rules > than examples. But I would like to hear other people's opinions first. > In particular I'd like to hear how this relates in standard theory to > that other problem of phonemes being modified in context, voicing > assimilation in Russian obstruent clusters was it, the classic example > of this? > > On Mon, Jul 7, 2008 at 9:05 PM, A. Katz wrote: > > Okay. Your point is that the linguistic pie can be sliced many, many > > different ways. I don't disagree, but I have another point that I have > > been trying to make: there is no difference between one method of > > slicing it or another, when we are studying how a language works. If it > > all adds up correctly, all the different ways are equivalent, and there's > > not any reason to prefer one method over another, unless we have adopted a > > particular constraint, such as economy of rules or mathematical elegance. > > > > > > Now, a particular speaker may adopt one way, and another speaker may adopt > > a second. A third speaker may adopt a third. There may be as many > > different ways of parsing a language as speakers, although that is > > doubtful and perfectly open to scientific investigation. > > > > It's okay to study the details of how speakers process language. It is > > also okay to find ways to describe language apart from speakers. What is > > not okay is to confuse what any given speaker does with how the language > > works. 
> > > > > > Best, > > > > --Aya > > From lists at chaoticlanguage.com Wed Jul 9 07:49:34 2008 From: lists at chaoticlanguage.com (Rob Freeman) Date: Wed, 9 Jul 2008 15:49:34 +0800 Subject: Rules vs. Lists In-Reply-To: Message-ID: You keep wanting to change the subject, Aya. It is good that you accept "the grammar of any language at any given point is somewhat indeterminate." I wonder how many people would agree. But note, I'm not only suggesting the existence of indeterminacy in human language, I'm suggesting a model to explain it. This is something which has historically fallen between complete explanation in terms of rules and lists/usage. In fact nearly everything about language has fallen between complete description in terms of rules and lists/usage. I suggest this is because we have not considered the possibility of a list which implies more rules than it has elements. If you want to address that hypothesis, or its consequences, I would welcome your feedback. It seems you want to talk about cognitive or social constants in language. We can start another thread to talk about cognitive or social constants in human language if you like. Though really, I think many people have done quite a thorough job on that aspect of language already. Perhaps you have a new perspective. By all means start a new thread and present it. -Rob On Tue, Jul 8, 2008 at 11:35 PM, A. Katz wrote: > Okay. So your other point is that the grammar of any language at any given > point is somewhat indeterminate, because it just misses resolving itself > one way or the other. That's true, of course. Sapir made that point when > he spoke of linguistic drift. > > What I find more interesting, (while acknowledging your point), is that > languages don't just drift. They cycle. They keep coming up with the same > ways of resolving the indeterminacy, after having seemingly gone in a > different direction for a while. 
> > The reason I find that interesting is because all the while language > appears to be evolving, it's really staying the same more or less. > > Show me one primitive language! There is none to be found. We have people > with primitive material culture in isolated pockets of the world, but NO > primitive languages. > > Best, > > --Aya From amnfn at well.com Wed Jul 9 14:02:38 2008 From: amnfn at well.com (A. Katz) Date: Wed, 9 Jul 2008 07:02:38 -0700 Subject: Rules vs. Lists In-Reply-To: <7616afbc0807090049t6a00ce9dp3fd15e64bf2c476a@mail.gmail.com> Message-ID: Okay, Rob. So you would like to stick with your topic. Do you have a formalism to deal with the more-rules-than-examples scenario? How do we count the examples and the rules? What are the more specific implications to any particular language? Have you already (or are you in the process of) applying this outlook to a single natural language in order to harvest all the examples and all the rules? If you have written any papers on this topic, would you care to share them with us? I am currently in the process of writing a book entitled CYCLES IN LANGUAGE. The topic is language change/evolution, and the main observation is that as much as language changes, it stays remarkably the same. In some of the beginning chapters, I and my co-author June Sun, discuss different formalisms for accounting for grammar, and we specifically discuss the concept of functional equivalence. We would be happy to include your outlook on more-examples-than-rules, if there are papers to cite. Best, --Aya On Wed, 9 Jul 2008, Rob Freeman wrote: > You keep wanting to change the subject, Aya. > > It is good that you accept "the grammar of any language at any given > point is somewhat indeterminate." > > I wonder how many people would agree. > > But note, I'm not only suggesting the existence of indeterminacy in > human language, I'm suggesting a model to explain it. 
This is > something which has historically fallen between complete explanation > in terms of rules and lists/usage. In fact nearly everything about > language has fallen between complete description in terms of rules and > lists/usage. I suggest this is because we have not considered the > possibility of a list which implies more rules than it has elements. > > If you want to address that hypothesis, or its consequences, I would > welcome your feedback. > > It seems you want to talk about cognitive or social constants in language. > > We can start another thread to talk about cognitive or social > constants in human language if you like. Though really, I think many > people have done quite a thorough job on that aspect of language > already. Perhaps you have a new perspective. By all means start a new > thread and present it. > > -Rob > > On Tue, Jul 8, 2008 at 11:35 PM, A. Katz wrote: > > Okay. So your other point is that the grammar of any language at any given > > point is somewhat indeterminate, because it just misses resolving itself > > one way or the other. That's true, of course. Sapir made that point when > > he spoke of linguistic drift. > > > > What I find more interesting, (while acknowledging your point), is that > > languages don't just drift. They cycle. They keep coming up with the same > > ways of resolving the indeterminacy, after having seemingly gone in a > > different direction for a while. > > > > The reason I find that interesting is because all the while language > > appears to be evolving, it's really staying the same more or less. > > > > Show me one primitive language! There is none to be found. We have people > > with primitive material culture in isolated pockets of the world, but NO > > primitive languages. > > > > Best, > > > > --Aya > > From lists at chaoticlanguage.com Thu Jul 10 02:34:38 2008 From: lists at chaoticlanguage.com (Rob Freeman) Date: Thu, 10 Jul 2008 10:34:38 +0800 Subject: Rules vs. 
Lists In-Reply-To: Message-ID: Aya, Thanks for asking. The basic complexity ideas need not be limited to any formalism, but I have a formalism. It is closest conceptually to grammatical induction by distributional analysis. The main difference is that while classical distributional analysis seeks to abstract classes to fit an entire corpus, I only attempt to fit one sentence at a time. It turns out the different orders of associating words to fit a new sentence give very different results. A parse structure falls naturally out of the process of selecting the best order. I used this principle to implement a kind of parser. There is a Web-based demo. If you have server space I could set it up for you. Failing that you can see some examples of the kind of output you get at http://www.chaoticlanguage.com/flat_site/index.html. Currently it has only been implemented for English, Chinese, and Danish. Because I think this power to combine in different ways only becomes crucial above the "word" level (defining that level by contrast), I generally "list" only associations of words. Though I have done some experiments for Chinese on recording associations at the character level (at which point the "parser" becomes a word segmentation algorithm.) So generally "examples" in my implementation are words and lists of their associations. It is impossible to count the number of "rules" or different orderings you might project out. In theory the number is very high, as David Tuggy noted. The basic insights are quite general to any language, though for morphologically rich languages an implementation based on traditional word boundaries would become less useful. There is no reason why you could not search for structure in terms of groups of letters, but the advantage of searching for patterns anew each time would decrease as the morphology/phonotactics became less productive. 
Chinese is a particularly interesting case to study because you can go beneath "word" boundaries and find productive morphological structure while still dealing with a relatively small number of "letters".

Listing "all the examples", at any time, in my implementation corresponds to listing a corpus. I would never attempt to "harvest ... all the rules". It would correspond broadly in this model to listing all the sentences you could possibly say in a language.

I don't work in academia so there has been little incentive to publish, but I did present a paper at a North American ACL some years ago:

Freeman R. J., Example-based Complexity--Syntax and Semantics as the Production of Ad-hoc Arrangements of Examples, Proceedings of the ANLP/NAACL 2000 Workshop on Syntactic and Semantic Complexity in Natural Language Processing Systems, pp. 47-50. (http://acl.ldc.upenn.edu/W/W00/W00-0108.pdf)

This paper was deliberately vague on the details of the technical implementation, but it presented the core complexity ideas.

Your book on "Cycles in Language" sounds interesting. How many formalisms have you counted?

-Rob

On Wed, Jul 9, 2008 at 10:02 PM, A. Katz wrote:
> Okay, Rob. So you would like to stick with your topic.
>
> Do you have a formalism to deal with the more-rules-than-examples scenario?
>
> How do we count the examples and the rules? What are the more specific implications to any particular language? Have you already (or are you in the process of) applying this outlook to a single natural language in order to harvest all the examples and all the rules?
>
> If you have written any papers on this topic, would you care to share them with us?
>
> I am currently in the process of writing a book entitled CYCLES IN LANGUAGE. The topic is language change/evolution, and the main observation is that as much as language changes, it stays remarkably the same.
> In some of the beginning chapters, I and my co-author June Sun discuss different formalisms for accounting for grammar, and we specifically discuss the concept of functional equivalence. We would be happy to include your outlook on more-examples-than-rules, if there are papers to cite.
>
> Best,
>
> --Aya

From paul at benjamins.com Thu Jul 10 17:11:47 2008
From: paul at benjamins.com (Paul Peranteau)
Date: Thu, 10 Jul 2008 13:11:47 -0400
Subject: New Benjamins book - Adolphs: Corpus and Context
Message-ID:

Corpus and Context
Investigating pragmatic functions in spoken discourse
Svenja Adolphs
University of Nottingham
Studies in Corpus Linguistics 30
2008. xi, 151 pp. Hardbound 978 90 272 2304 3 / EUR 99.00 / USD 149.00

Corpus and Context explores the relationship between corpus linguistics and pragmatics by discussing possible frameworks for analysing utterance function on the basis of spoken corpora. The book articulates the challenges and opportunities associated with a change of focus in corpus research, from lexical to functional units, from concordance lines to extended stretches of discourse, and from the purely textual to multi-modal analysis of spoken corpus data. Drawing on a number of spoken corpora, including the five million word Cambridge and Nottingham Corpus of Discourse in English (CANCODE, funded by CUP (c)), a specific speech act function is explored using different approaches and different levels of analysis. This involves a close analysis of contextual variables in relation to lexico-grammatical and discoursal patterns that emerge from the corpus data, as well as a wider discussion of the role of context in spoken corpus research.

--------------------------------------------------------------------------------

Table of contents
Acknowledgements ix–x
Tables and figures xi
Chapter 1. Introduction 1–17
Chapter 2. Spoken discourse and corpus analysis 19–42
Chapter 3.
Pragmatic functions, conventionalised speech acts expressions and corpus evidence 43–72
Chapter 4. Pragmatic functions in context 73–88
Chapter 5. Exploring pragmatic functions in discourse: The speech act episode 89–116
Chapter 6. Pragmatic functions beyond the text 117–130
Chapter 7. Concluding remarks 131–136
Appendix: Transcription conventions for the CANCODE data used in this book 137–138
References 139–148
Index 149–151

Paul Peranteau (paul at benjamins.com)
General Manager
John Benjamins Publishing Company
763 N. 24th St. Philadelphia PA 19130
Phone: 215 769-3444 Fax: 215 769-3446
John Benjamins Publishing Co. website: http://www.benjamins.com

From paul at benjamins.com Thu Jul 10 17:13:49 2008
From: paul at benjamins.com (Paul Peranteau)
Date: Thu, 10 Jul 2008 13:13:49 -0400
Subject: New Benjamins book - Kurzon & Adler: Adpositions
Message-ID:

Adpositions
Pragmatic, semantic and syntactic perspectives
Edited by Dennis Kurzon and Silvia Adler
University of Haifa
Typological Studies in Language 74
2008. viii, 307 pp. Hardbound 978 90 272 2986 1 / EUR 110.00 / USD 165.00

This book is a collection of articles which deal with adpositions in a variety of languages and from a number of perspectives. Not only does the book cover what is traditionally treated in studies from a European and Semitic orientation – prepositions, but it presents studies on postpositions, too. The main languages dealt with in the collection are English, French and Hebrew, but there are articles devoted to other languages including Korean, Turkic languages, Armenian, Russian and Ukrainian. Adpositions are treated by some authors from a semantic perspective, by others as syntactic units, and a third group of authors distinguishes adpositions from the point of view of their pragmatic function. This work is of interest to students and researchers in theoretical and applied linguistics, as well as to those who have a special interest in any of the languages treated.
--------------------------------------------------------------------------------

Table of contents
Introduction
Dennis Kurzon and Silvia Adler
List of contributors
French compound prepositions, prepositional locutions and prepositional phrases in the scope of the absolute use
Silvia Adler
"Over the hills and far away" or "far away over the hills": English place adverb phrases and place prepositional phrases in tandem?
David J. Allerton
Structures with omitted prepositions: Semantic and pragmatic motivations
Esther Borochovsky Bar-Aba
A closer look at the Hebrew Construct and free locative PPs: The analysis of mi-locatives
Irena Botwinik-Rotem
Pragmatics of prepositions: A study of the French connectives pour le coup and du coup
Pierre Cadiot and Franck Lebas
Particles and postpositions in Korean
Injoo Choi-Jonin
French prepositions à and de in infinitival complements: A pragma-semantic analysis
Lidia Fraczak
Prepositional wars: When ideology defines preposition
Julia G. Krivoruchko
"Ago" and its grammatical status in English and in other languages
Dennis Kurzon
Case marking of Turkic adpositional objects
Alan Libert
The logic of addition: Changes in the meaning of the Hebrew preposition im ("with")
Tamar Sovran
A monosemic view of polysemic prepositions
Yishai Tobin
The development of Classical Armenian prepositions and its implications for universals of language change
Christopher Wilhelm

Paul Peranteau (paul at benjamins.com)
General Manager
John Benjamins Publishing Company
763 N. 24th St. Philadelphia PA 19130
Phone: 215 769-3444 Fax: 215 769-3446
John Benjamins Publishing Co.
website: http://www.benjamins.com

From paul at benjamins.com Thu Jul 10 17:16:16 2008
From: paul at benjamins.com (Paul Peranteau)
Date: Thu, 10 Jul 2008 13:16:16 -0400
Subject: New Benjamins book - Stolz et al.: Split Possession
Message-ID:

Split Possession
An areal-linguistic study of the alienability correlation and related phenomena in the languages of Europe
Thomas Stolz, Sonja Kettler, Cornelia Stroh and Aina Urdze
University of Bremen
Studies in Language Companion Series 101
2008. x, 546 pp. Hardbound 978 90 272 0568 1 / EUR 130.00 / USD 195.00

This book is a functional-typological study of possession splits in European languages. It shows that genetically and structurally diverse languages such as Icelandic, Welsh, and Maltese display possessive systems which are sensitive to semantically based distinctions reminiscent of the alienability correlation. These distinctions are grammatically relevant in many European languages because they require dedicated constructions. What makes these split possessive systems interesting for the linguist is the interaction of semantic criteria with pragmatics and syntax. Neutralisation of distinctions occurs under focus. The same happens if one of the constituents of a possessive construction is syntactically heavy. These effects can be observed in the majority of the 50 sample languages. Possessive splits are strong in those languages which are outside the Standard Average European group. The bulk of the European languages do not behave much differently from those non-European languages for which possession splits are reported. The book reveals interesting new facts about European languages and possession to typologists, universals researchers, and areal linguists.

--------------------------------------------------------------------------------

Table of contents
Preface vii–viii
List of abbreviations ix–x
Part A: What needs to be known beforehand
Chapter 1. Introduction 3–9
Chapter 2. Prerequisites 11–28
Chapter 3.
Split possession 29–40
Part B: Tour d'Europe
Chapter 4. Grammatical possession splits 43–315
Chapter 5. Further evidence of possession splits in Europe 317–465
Part C: On European misfits and their commonalities
Chapter 6. Results 469–516
Notes 517–519
Sources 521–524
References 525–533
Additional background literature 535–538
Index of languages 539–540
Index of authors 541–544
Index of subjects 545–546

Paul Peranteau (paul at benjamins.com)
General Manager
John Benjamins Publishing Company
763 N. 24th St. Philadelphia PA 19130
Phone: 215 769-3444 Fax: 215 769-3446
John Benjamins Publishing Co. website: http://www.benjamins.com

From egclaw at inet.polyu.edu.hk Fri Jul 11 06:42:41 2008
From: egclaw at inet.polyu.edu.hk (Catherine C Law [ENGL])
Date: Fri, 11 Jul 2008 14:42:41 +0800
Subject: Hong Kong short course on intonation in English (1 - 4 Sep 2008)
Message-ID:

======= Workshop =======

The goal of this short course is to introduce you to two computer programs, PRAAT and CORPUS TOOL, and to Intonation in the Grammar of English (which constitutes the center of gravity for the course) by Michael Halliday and William Greaves. Details of the new book Intonation in the Grammar of English can be found under: http://www.equinoxpub.com/books/showbook.asp?bkid=7

PRAAT is an excellent tool which can be used on virtually any computer for very sophisticated phonetic analysis. The instructor will make extensive use of PRAAT to introduce the patterns of English intonation.

CORPUS TOOL is a new and more powerful successor to Mick O'Donnell's SYSTEMIC CODER. The instructor will demonstrate and use only one of the many features in the program: the creation and editing of system networks. It is far easier to do this with CORPUS TOOL than with editing programs such as Word or graphics programs such as Paint.
======= Instructor =======

Bill Greaves is a Senior Scholar at York University in Toronto, where he is a member of the Glendon College English Department and the Graduate Programme in English. In collaboration with Michael Halliday he has been working on Intonation in English for about a decade. During that time he has taught courses in intonation ranging from a few days to six months in a number of countries: Argentina, Australia, Canada, China, Egypt, England, Finland, India, Israel, and Japan.

======== Programme ========

The course includes four lecture hours per day, plus "hands on" work each day in a language laboratory. Participants in the lecture will be encouraged to form "pods" around those with laptops (this has proved to be very effective).

Date: 1–4 September 2008 (Mon–Thu)
Venue: The Hong Kong Polytechnic University, Hunghom, Kowloon, Hong Kong

======== Registration ========

Participants can register online at the website of the workshop. The maximum number of participants is 60 and places are allocated on a first-come-first-served basis.

=========== Registration Fee ===========

HK$400 (includes a copy of Intonation in the Grammar of English). Payment can be made online at the website of the workshop.

============= Workshop Website =============

http://www.engl.polyu.edu.hk/events/intonworkshop2008

From phonosemantics at earthlink.net Sat Jul 26 15:31:57 2008
From: phonosemantics at earthlink.net (jess tauber)
Date: Sat, 26 Jul 2008 10:31:57 -0500
Subject: social obligation in 'definiteness' in differential case marking systems?
Message-ID:

I'm hoping folks on the list can give me tips on languages they know of which mark social obligation as part of the semantics of definiteness in case marking. I've found such a system hidden within Yahgan. When a particular suffix -nchi is added to a nominal there is a very strong implication of such connections.
For instance in -nchikaia, where this suffix is followed by the dative form, other participants and the action of the verb conspire to create a benefactive, substitutive sense, as a recognized, trusted agent other who 'owes' such action to the marked entities, acting on their behalf, be they other family members, deities, religious sages, kings etc. The agent gets his/her power from the marked entity, perhaps even the particular marching orders. Forms with dative but without the -nchi- ambivalently imply positive, neutral, or negative outcome for the marked NP, and no such relationship.

So, does anyone know of other languages that have something similar? Thanks.

Jess Tauber
phonosemantics at earthlink.net

From comrie at eva.mpg.de Tue Jul 29 08:03:46 2008
From: comrie at eva.mpg.de (Bernard Comrie)
Date: Tue, 29 Jul 2008 10:03:46 +0200
Subject: Max Planck Institute for Evolutionary Anthropology: Announcement of Vacancy
Message-ID:

[From Bernard Comrie ]

Max Planck Institute for Evolutionary Anthropology, Leipzig, Germany
Announcement of Vacancy

The Department of Linguistics at the Max Planck Institute in Leipzig has a vacancy for a Senior Researcher in the area of phonetics. The successful candidate will be expected to develop a research program in phonetics in relation to the department's core areas of language history and prehistory, linguistic typology, and description of little studied and endangered languages, and will also be responsible for the scientific direction of the department's phonetics laboratory. The five-year non-renewable position is available from 01 November 2008; a later starting date may be negotiable. Prerequisites for an application are a PhD and publications in phonetics. The salary is according to the German public service pay scale (TVöD).

The Max Planck Society is concerned to employ more disabled people; applications from disabled people are explicitly sought.
The Max Planck Society wishes to increase the proportion of women in areas in which they are underrepresented; women are therefore explicitly encouraged to apply.

Applicants are requested to send their complete dossier (including curriculum vitae, description of research interests, names and contact details of two referees, and a piece of written work on one of the relevant topics) no later than 30 September 2008 to:

Max Planck Institute for Evolutionary Anthropology
Personnel Department
Prof. Dr. Bernard Comrie
Code word: Scientist Dept Linguistics
Deutscher Platz 6
D-04103 Leipzig, Germany

Please address questions to Bernard Comrie . Information on the institute is available at http://www.eva.mpg.de/.