From clayke at DELPHI.COM Sun Aug 1 05:06:35 1999
From: clayke at DELPHI.COM (Clayton Gillespie)
Date: Sat, 31 Jul 1999 22:06:35 -0700
Subject: emergence
Message-ID:

Heya FunkNeteers,

I hope you'll find enough usefulness in this comment from the math folk to forgive the introduction of vocabularies rarely seen on this list:

Introduction
--------------

Signals can be viewed as simple or complex by virtue of how compressible they are. High compressibility corresponds to simple signals. Compression is a process of looking for patterns and replacing them with shorter pointers to constructors for those patterns.

It might be useful to think of linguistic interpretation and generation in these terms: compression into salient features and ultimately into concepts; construction of symbols/representations and ultimately of speech acts. So far a very simple analogy, but formal compression[1] has some very useful properties. Chiefly, the compression algorithm self-modifies, so if you accept the premise of this analogy it might be said to learn and thus crudely model speech acquisition.

"Unexpected" Emergence
------------------------------

In this model, the compression process, the process of acquiring the specific rules that best shorten the given signal, builds from highly repetitious localized phenomena toward more widespread "rules of thumb". As it progresses toward larger and larger frames of data comparison, the algorithm may recast earlier rules. This recasting could be considered a formal equivalent to the "unexpected" kind of emergence.

Emergence is very much associated with the vocabulary of complexity theory, and there is some additional value to be gained by exploring that territory. Besides having the characteristic of self-modification, formal compression is also quantifiable. This quantification is a direct measure of complexity in the formal sense; and if Church's Thesis is correct, then this is a universal measure of complexity as well.
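A minimal sketch of the compressibility-as-simplicity idea, using Python's zlib as an arbitrary stand-in for the kernel algorithm (the choice of compressor, and the toy signals, are assumptions of the sketch; a real compressor only upper-bounds the formal Kolmogorov measure):

```python
import random
import zlib

def compressed_size(data: bytes) -> int:
    """Length of the zlib-compressed form: a crude upper bound
    on the signal's formal (Kolmogorov) complexity."""
    return len(zlib.compress(data, 9))

random.seed(0)
simple = b"the cat sat on the mat. " * 200                        # highly patterned signal
noisy = bytes(random.randrange(256) for _ in range(len(simple)))  # same length, no pattern

# The patterned signal is far more compressible, hence "simpler".
print(compressed_size(simple), "<", compressed_size(noisy))
```

What matters for the analogy is not the absolute numbers but that the comparison comes out the same way for any reasonable kernel.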
(Since the resulting measure of compressibility is uniformly comparable to other such measures, we need not worry about the particulars of the kernel algorithm at this time. Note that measuring compressibility is distinct from determining a compression ratio.)

Since the analogy of compression introduces the whole of complexity theory, we can now alter our analogy. We may view the steps of the generated compression rules as a system of constraints (e.g., the constraints on what constitutes a grammatically acceptable utterance). Going backwards, we may also consider our signal to be a larger constraint network and our process of compression a process of propagating the constraints toward a normalized form.

Constraint networks are convenient because they lend themselves nicely to topological descriptions. We may use textural terms to describe small subsets of constraints: they may be called "compatible" if their a/d-ratio is high; we may say they are "fluffy" if their b-values tend to be high; and we may say they are "dense" (not opposed to "fluffy") if their g-values (a.k.a. secondary b-values) are low. As we move to larger subsets of constraints we may find that they are well-described by nice aggregate topological terms such as "ropey", "looped", "lobed", "cusped", "foamy", etc.[2] Our terms are now scoped by the size of the frame we are viewing.

But having gone backwards to acquire this vocabulary descriptive of the behavior-structure we are interested in, it becomes necessary to try to reestablish the "emergence" component of the analogy. Immediately we find this highly problematic. Every step of constraint propagation renders a new network representation that is isomorphic to the original, the step has no scoping frame size, and no step shows radical differences in formulation (or perhaps we might say all of them do). Consequently, there is no longer a dramatic interpretive shift like we saw before.
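The "propagation toward a normalized form" can be pictured with standard arc-consistency propagation (AC-3), used here purely as an illustrative stand-in for the message's unspecified procedure; the variables, domains, and the chained x < y < z constraints are invented for the example:

```python
from collections import deque

# Toy constraint network: three variables over {1, 2, 3} with x < y and y < z.
domains = {"x": {1, 2, 3}, "y": {1, 2, 3}, "z": {1, 2, 3}}
lt = lambda v, w: v < w
gt = lambda v, w: v > w
# Directed arcs (a, b, rel): every value v of a needs some w in b with rel(v, w).
arcs = [("x", "y", lt), ("y", "x", gt),
        ("y", "z", lt), ("z", "y", gt)]

def revise(a, b, rel):
    """Prune values of a that have no supporting value in b."""
    bad = {v for v in domains[a] if not any(rel(v, w) for w in domains[b])}
    domains[a] -= bad
    return bool(bad)

# Propagate until a fixed point: the network's "normalized form".
queue = deque(arcs)
while queue:
    a, b, rel = queue.popleft()
    if revise(a, b, rel):
        # a's domain shrank, so arcs supported by a must be re-checked.
        queue.extend(arc for arc in arcs if arc[1] == a)

print(domains)  # each domain pruned to its single consistent value
```

Note that, as the message observes, every intermediate state of `domains` describes the same solution set as the original network; only the representation shrinks.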
The closest we may come is to observe that the rate at which the size of the network representation is reduced may increase sharply at certain points in the process. I now emphasize the phrase "at certain points" because I think it begins to reconcile the compression and constraint network analogies that seem to represent the "event" ("unexpected") and "state" notions of emergence.

The two formulations seem to indicate that the perception that something emerges is dependent on three things: the structure of the signal/system itself, the order in which operations are applied to the signal/system, and the threshold of unexpectedness being looked for. So the "unexpected" kind of emergence is a perceptual event, and insofar as perception is always perception of something, it is dependent on the properties of the thing-in-itself and the perceptual/generative process.

We are limited in what we can test about perceptual events. Because we are talking about a learning structure, experimentation amounts to alteration of the structure, either directly or via alteration of the signal on which it operates. (For example, if we interrogate the behavior of an interviewee to determine whether s/he has a linguistic concept of "subject", we may, by virtue of the clues we give during the interrogation, lead him/her to create such a concept in order to respond to our questions.) Passive observations can only effectively explore what is common. So it is very hard to make statements about the limits of possibility (and therefore about the nature of any regularity we find) based on behavioral data alone.

But the fact that it is difficult to make clear statements about the limits of possibility doesn't preclude useful study of perceptual events. I suspect that at some level emergence events are marked as pleasurable: we seem to gain more satisfaction from epiphany than from gradual accumulation of knowledge (which, I think, explains the fascination people have with this subject).
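That sharp change in the reduction rate "at certain points" can be made visible with the same stand-in compressor: track the incremental compressed cost of successive windows of a signal whose tail suddenly becomes patterned. The signal, window size, and zlib are all assumptions of the sketch, not part of the formal argument:

```python
import random
import zlib

def compressed_size(data: bytes) -> int:
    return len(zlib.compress(data, 9))

# Invented toy signal: 512 bytes of noise, then a heavily repeated motif.
random.seed(1)
signal = bytes(random.randrange(256) for _ in range(512)) + b"abcabcabc" * 120

# Incremental compressed cost of each successive 64-byte window.  The cost
# collapses once the compressor's rules start paying off: a candidate
# "certain point" at which the reduction rate increases sharply.
step = 64
costs = [compressed_size(signal[:end]) - compressed_size(signal[:end - step])
         for end in range(step, len(signal) + 1, step)]
print(costs)
```

The early windows (pure noise) each cost roughly their own length; the late windows cost almost nothing, and the drop between the two regimes is the "emergence event" of the analogy.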
So as long as we aren't trying to predict emergence outside of what might be considered well-explored paths, we may still find this kind of emergence a useful concept.

Emergent State
------------------

We may, instead of looking at momentarily high rates of compression (the "emergence event"), want to look at the total compression at a certain point in the process (the supposedly emergent state) and try to predict whether the then-current set of compression rules will largely persist, and why.

Again it is necessary to scope the question before attempting to answer it. A single connection is sometimes enough to change the topological class of a network, so any statements about structure implying function must be proximally bounded. However, this kind of proximity is not necessarily spatial in the usual sense. Constraint networks are literally N-dimensional, so the operative proximity may not correspond to what we consider R-space; and Liz's observations point out that not only is this possible but it actually seems to be the case for some brain functions. However, since neurons don't physically stretch from ear to ear, the order of any rhizomic qualities of our learning/learned networks is strongly bounded by the order of neuronal (R-space) proximity. (Which, perhaps, went without saying; but when introducing new vocabularies sometimes it's a good idea to show that you're not a complete nut.)

But assuming that we have scoped our question, what can we say about the persistence of emergent states? Unfortunately, again not much. Either we are considering the fully normalized network within our scope, at which point the structure is the same as the implication, or else we are making a scoping error. Another way of saying that is that all constraints are potentially dependent on every other constraint, and they cannot be partitioned off until one can prove that the constraints in other putative partitions have no local constraining effect.
This cannot be proved without normalizing the network up to the boundaries of the proposed partition. By examining the interaction of two such partitions we change our scope and so resolve any sense of emergence we previously had. So in speaking this way we are always talking about a structure that will persist unchanging, and the dynamic concept of emergence is completely lost - except as a historical or evolutionary note (kind of like when you carry the 1 in addition).

So that's my little story about emergence. I'd be curious to know if people find it interesting or merely bizarre.

Thanks for your time,

- Clayton Gillespie
E-lectra

[1] By formal compression I mean what is usually addressed in algorithmic complexity theory, about which Kolmogorov, Chaitin, and Solomonoff have written extensively. The primary source I'm using is: Li, Ming & Vitányi, Paul. 1993. An Introduction to Kolmogorov Complexity and Its Applications. New York: Springer-Verlag.

[2] Of course, since constraint networks are N-dimensional we are likely to run out of useful topological vocabulary anyway, but at least this will get us a little further.

From CCarter at BLACKWELLPUBLISHERS.CO.UK Thu Aug 12 10:04:46 1999
From: CCarter at BLACKWELLPUBLISHERS.CO.UK (Carter Clare)
Date: Thu, 12 Aug 1999 11:04:46 +0100
Subject: Recent TOC for Syntax and Linguistics Abstract Online
Message-ID:

"The new journal SYNTAX is a welcome arrival, with outstanding editors and fine prospects. I look forward with much anticipation to the forthcoming issues."
- Noam Chomsky, MIT

SYNTAX aims to unite related but often disjointedly represented areas of syntactic inquiry in one publication. Within a single forum, SYNTAX will accommodate both the explosive growth and the increased specialization in the field of syntax.

Contents of the August 1999 issue:

Multiple Case Checking - Susana Bejar and Diane Massam
Gambling on UG: The Application of Monte Carlo Computer Simulation to the Analysis of L2 Reflexes - Katrien N. Christie and Phillip Christie
Two Types of Impersonal Sentences in Spanish: Locative and Dative Subjects - Olga Fernández-Soriano
Events and Economy of Coordination - Ljiljana Progovac

Edited by Samuel Epstein and Suzanne Flynn
ISSN: 1368-0005, 3 issues a year, Volume 2, 1999

SAMPLE COPIES AVAILABLE
For subscription details or to order a sample copy of SYNTAX please contact Clare Carter (quoting reference 89BB466) at: ccarter at blackwellpublishers.co.uk

ALSO AVAILABLE - LINGUISTICS ABSTRACTS ONLINE
LINGUISTICS ABSTRACTS ONLINE is designed to revolutionize research and teaching by giving immediate access via the World Wide Web to more than 15,000 abstracts from nearly 300 linguistics journals published since 1985.

30-Day Free Library Trial Available Now!
We are currently offering libraries a free 30-day trial link to the LINGUISTICS ABSTRACTS ONLINE website. During this period we will provide full access to your users. To register for the trial please contact: e-help at blackwellpublishers.co.uk

From Salinas17 at AOL.COM Fri Aug 13 00:48:09 1999
From: Salinas17 at AOL.COM (Steve Long)
Date: Thu, 12 Aug 1999 20:48:09 EDT
Subject: Language and emergence "mechanisms"
Message-ID:

To FUNKNET:

I am an amateur at this and I hope you will all forgive my ignorance in this post. I am sending this to hopefully understand what I am misunderstanding.

tgivon at OREGON.UOREGON.EDU wrote: <>

Regarding both questions: Isn't there another way to approach both questions? Borrowing some of the "functionalism" from the science of biological evolution -

1. There must be per se "neuro-cognitive" consequences when "automated skills" (specifically, language skills) emerge. But, if we go by evolution theory, these underlying mechanisms should be to some degree incidental to the function of the "skills" being observed.
The governing principle would be the survival value of language itself, and the mechanisms would be shaped by the demands of effective language functions, including physical, biological, and human cultural demands.

By analogy, flight is a highly skilled behavior that can also be described as "automated" but is not limited to one species. One of the "mechanisms" of flight is often wings. Be they insect wings, feathered wings, bat wing membranes, or rigid aluminum airplane wings, all of them conform to the demands of the physical laws regarding flight - these dictate what mechanisms must "do" and therefore the possible structures they can take.

Functionally effective language, just like functionally effective flight, must conform to the physical, biological, etc., dictates it finds in this world. E.g., if we define language as having anything to do with communication between individuals, it must in some way meet the needs of human sensory organs - it must be audible, visual, tactile, etc.

On the "neural-cognitive" level - the internal physical and biochemical mechanisms - the same rules would apply. What must a brain (and its relationship to the whole functioning of the human body) accomplish - what MUST it be structured like - to do "language" successfully?

One problem here is that although we can describe flight with some particularity (along with its "survival value") and do it with a high degree of confidence, we have trouble with "language." We can say some things with certainty, however, I think. A primary function of language is that it is interpersonal, which means that it must be predictable to some degree over time. I must have some certainty, when you speak a word, that you are referring to the same thing you referred to last time. Otherwise speech is unpredictable, incoherent, and does not function as language.
This means that routine - rote automation - is also to some degree a necessary aspect of language, and an environmental demand on any biological mechanisms of language. Just as wings must conform to certain physical laws, a brain must satisfy certain physical laws to be able to perform the "predictability" function of language. It must be able to repeat whatever it does consistently. Simple enough. Computers do that.

But just as in flight, the specific manner (insect wings, bird wings, bat wings, 747 wings) is somewhat incidental. Divergent structures can serve the same function. And it's also important to realize that these "mechanisms" did not necessarily arise to serve the function being observed (e.g., feathers and forelimbs did not evolve for flight, and neither did aluminum technology). So the mechanism is not necessarily determinative; the outcome is. And language is highly functional. If it (in an alternative universe) were managed by the structures of our feet (lots of them, let's say, and very intricate) instead of our brains, its survival value as a trait would still be a strong reason for it to emerge.

Also, to the extent that we are speaking of "grammar" as the morphology of language - subject, predicate; noun, verb; tense, etc. - aren't these first of all products of the world we live in? Isn't that why language is shaped like it is, because of the reality it reflects? Isn't there a survival value in accurately discriminating between objects and actions, past and present, etc., when speaking to another? Doesn't the world we live in ultimately determine the basic structure of our language, in order to achieve effective, predictable common reference?

How would effective human language be different if one did not posit any inherited language trait? Wouldn't it be pretty much the same, because that is what communication regarding the real world demands?
The reason that airplane wings look like bird wings when soaring is that they both serve the same function and conform to the same environmental demands. But bird wings are an inherited (genetic) trait; airplane wings are not. Shouldn't we expect the same convergence with language - that the demands of communication will shape language, rather than some fortuitous internal structure determining what language is like?

2. With regard to "emergence," as far as I understand it, the same basic questions would be asked first. Any mechanism of genetic or individual emergence can be assumed to have functional reasons for being. The best analogy I can think of at the moment is the biological mechanisms of childbirth. What is the functional advantage of not giving birth to fully adult progeny? What is the advantage of the pupal stage in some organisms? All of these are as connected to the survival needs of the parent as to those of the progeny.

That is why I would look for the demands that are put on any effective emergence mechanism by the needs of the group. Are there individual or interpersonal advantages in postponing language emergence? Is language merely a by-product of early physical dependency in humans (a biological trait that fosters sociality in animals that may not otherwise be pre-wired to be social)? What has more survival value: a prewired, full-blown language-using human individual from the time of birth (like some instantly communicating insects), or one that goes through other, interpersonal processes before language emerges? Is language impossible without those interpersonal processes? And of course, does language actually "emerge," or is it merely awaiting proper physiological development?

The mechanisms of emergence might become more identifiable if these functions are considered. (Even if the experiment - the controlled removal from birth of all interpersonal aspects of language from a human subject - is one we cannot ethically perform.
In other words, if a human could only communicate with himself from birth, without any advantage in communicating with another human, would "language" emerge? And what would it be like?)

Hope this makes some sense, and thanks,

Steve Long

From Sally.Rice at UALBERTA.CA Sun Aug 22 21:40:14 1999
From: Sally.Rice at UALBERTA.CA (Sally Rice)
Date: Sun, 22 Aug 1999 15:40:14 -0600
Subject: Chair position, University of Alberta
Message-ID:

Chair, Department of Linguistics
University of Alberta
Edmonton, Canada

Applications and nominations are invited for the position of Chair of the Department of Linguistics at the University of Alberta. This tenured appointment will be made at the rank of senior Associate or full Professor, effective 1 July 2000. The floor of the salary scale for the rank of Professor for the 1999/2000 academic year is $65,044.

Candidates should have a distinguished record of scholarship and professional achievement in both experimental and theoretical linguistics, and a research specialization compatible with one of the continuing research strengths of the department.

The Faculty of Arts at the University of Alberta is engaged in an extensive process of renewal, and is committed to ensuring that the substantial number of hirings projected over the next several years will secure for the future the lively and productive intellectual environment on which the Faculty prides itself.

The Department of Linguistics has a strong commitment to empirical and experimental approaches to linguistic research. Department members are engaged in ongoing research projects, many grant-funded, in experimental phonetics, language acquisition, discourse processing, and the study of the phonological, morphological, and semantic aspects of the mental lexicon. A Chair is sought who has a philosophical commitment to experimental research in the service of theory, and who will act as a bridge builder with allied fields in the cognitive science domain.
The Department offers both graduate (PhD and MSc) and undergraduate degrees, and values its reputation for excellence in teaching and graduate training. It provides an environment of leading-edge research and innovative teaching/learning at both the undergraduate and graduate levels. The University of Alberta is committed to the principle of equity in employment. As an employer we welcome diversity in the workplace and encourage applications from all qualified women and men, including Aboriginal persons, persons with disabilities and members of visible minorities. Please send nominations or applications (including CVs and the names of three referees) by 1 November 1999 to: Kenneth Norrie Dean of Arts University of Alberta Edmonton, Alberta T6G 2E5 Canada From clayke at DELPHI.COM Sun Aug 1 05:06:35 1999 From: clayke at DELPHI.COM (Clayton Gillespie) Date: Sat, 31 Jul 1999 22:06:35 -0700 Subject: emergence Message-ID: Heya FunkNeteers, I hope you'll find enough usefulness in this comment from the math folk to forgive the introduction of vocabularies rarely seen on this list: Introduction -------------- Signals can be viewed as simple or complex by virtue of how compressible they are. High compressibility corresponds to simple signals. Compression is a process of looking for patterns and replacing them with shorter pointers to constructors for those patterns. It might be useful to think of linguistic interpretation and generation in these terms: compression into salient features and ultimately into concepts, construction of symbols/representations and ultimately of speech acts. So far a very simple analogy, but formal compression[1] has some very useful properties. Chiefly, the compression algorithm self-modifies, so if you accept the premise of this analogy it might be said to learn and thus crudely model speech acquisition. 
"Unexpected" Emergence ------------------------------ In this model, the compression process, the process of acquiring the specific rules that best shorten the given signal, builds from highly repetitious localized phenomena toward more widespread "rules of thumb". As it progresses toward larger and larger frames of data comparison, the algorithm may recast earlier rules. This recasting could be considered a formal equivalent to the "unexpected" kind of emergence. Emergence is very much associated with the vocabulary of complexity theory, and there is some additional value to be gained by exploring that territory. Besides having the characteristic of self-modification, formal compression is also quantifiable. This quantification is a direct measure of complexity in the formal sense; and if Church's Thesis is correct, then this is a universal measure of complexity as well. (Since the resulting measure of compressibility is uniformly comparable to other such measures, we need not worry about the particulars of the kernel algorithm at this time. Note that measuring compressibility is distinct from determining a compression ratio.) Since the analogy of compression introduces the whole of complexity theory we can now alter our analogy. We may view the steps of the generated compression rules as a system of constraints (e.g., the constraints on what constitutes a grammatically acceptable utterance). Going backwards, we may also consider our signal to be a larger constraint network and our process of compression a process of propagating the constraints toward a normalized form. Constraint networks are convenient because they lend themselves nicely to topological descriptions. We may use textural terms to describe small subsets of constraints: they may be called "compatible" if their a/d-ratio is high; we may say they are "fluffy" if their b-values tend to be high; and we may say they are "dense" (not opposed to "fluffy") if their g-values (a.k.a. 
secondary b-values) are low. As we move to larger subsets of constraints we may find that they are well-described by nice aggregate topological terms such as "ropey", "looped", "lobed", "cusped", "foamy", etc.[2] Our terms are now scoped by the size of the frame we are viewing. But having gone backwards to acquire this vocabulary descriptive of the behavior-structure we are interested in, it becomes necessary to try to reestablish the "emergence" component of the analogy. Immediately we find this highly problematic. Every step of constraint propagation renders a new network representation that is isomorphic to the original, the step has no scoping frame size, and no step shows radical differences in formulation (or perhaps we might say all of them do). Consequently, there is no longer a dramatic interpretive shift like we saw before. The closest we may come is that we may observe that the rate at which the size of the network representation is reduced may increase sharply at certain points in the process. I now emphasize the phrase "at certain points" because I think it begins to reconcile the compression and constraint network analogies that seem to represent the "event" ("unexpected") and "state" notions of emergence. The two formulations seem to indicate that the perception that something emerges is dependent on three things: the structure of the signal/system itself, the order in which operations are applied to the signal/system, and the threshold of unexpectedness being looked for. So the "unexpected" kind of emergence is a perceptual event, and insofar as perception is always perception of something it is dependent on the properties of the thing-in-itself and the perceptual/generative process. We are limited in what we can test about perceptual events. Because we are talking about a learning structure, experimentation amounts to alteration of the structure either directly or via alteration of the signal on which it operates. 
(For example, if we interrogate the behavior of an interviewee to determine whether s/he has a linguistic concept of "subject", we may, by virtue of the clues we give during the interrogation, lead him/her to create such a concept in order to respond to our questions.) Passive observations can only effectively explore what is common. So it is very hard to make statements about the limits of possibility (and therefore about the nature of any regularity we find) based on behavioral data alone. But the fact that it is difficult to make clear statements about the limits of possibility doesn't preclude useful study of perceptual events. I suspect that at some level emergence events are marked as pleasurable: we seem to gain more satisfaction from epiphany than from gradual accumulation of knowledge (which, I think, explains the fascination people have with this subject). So as long as we aren't trying to predict emergence outside of what might be considered well-explored paths we may still find this kind of emergence a useful concept. Emergent State ------------------ We may, instead of looking at momentarily high rates of compression (the "emergence event"), want to look at the total compression at a certain point in the process (the supposedly emergent state) and try to predict if the then current set of compression rules will largely persist and why. Again it is necessary to scope the question before attempting to answer. A single connection is sometimes enough to change the topological class of a network, so any statements about structure implying function must be proximally bounded. However, this kind of proximity is not necessarily spatial in the usual sense. Constraint networks are literally N-dimensional, so the operative proximity may not correspond to what we consider R-space; and Liz's observations point out that not only is this possible but it actually seems to be the case for some brain functions. 
However, since neurons don't physically stretch from ear to ear, the order of any rhisomic qualities of our learning/learned networks is strongly bounded by the order of neuronal (R-space) proximity. (Which, perhaps, went without saying; but when introducing new vocabularies sometimes it's a good idea to show that you're not a complete nut.) But assuming that we have scoped our question, what can we say about the persistence of emergent states? Unfortunately, again not much. Either we are considering the fully normalized network within our scope, at which point the structure is the same as the implication, or else we are making a scoping error. Another way of saying that is that all constraints are potentially dependent on every other constraint, and they cannot be partitioned off until one can prove that the constraints in other putative partitions have no local constraining effect. This cannot be proved without normalizing the network up to the boundaries of the proposed partition. By examining the interaction of two such partitions we change our scope and so resolve any sense of emergence we previously had. So in speaking this way we are always talking about a structure that will persist unchanging, and the dynamic concept of emergence is completely lost - except as a historical or evolutionary note (kind of like when you carry the 1 in addition). So that's my little story about emergence. I'd be curious to know if people find it interesting or merely bizarre. Thanks for your time, - Clayton Gillespie E-lectra [1] By formal compression I mean what is usually addressed in algorithmic complexity theory, about which Kolmogorov, Chaitin, and Solomonoff have written extensively. The primary source I'm using is: Li, Ming & Vit?nyi, Paul. 1993. "An Introduction to Kolmogorov Complexity and its Applications" New York: Springer-Verlag. 
[2] Of course, since constraint networks are N-dimensional we are likely to run out of useful topological vocabulary anyway, but at least this will get us a little further. From CCarter at BLACKWELLPUBLISHERS.CO.UK Thu Aug 12 10:04:46 1999 From: CCarter at BLACKWELLPUBLISHERS.CO.UK (Carter Clare) Date: Thu, 12 Aug 1999 11:04:46 +0100 Subject: Recent TOC for Syntax and Linguistics Abstract Online Message-ID: "The new journal SYNTAX is a welcome arrival, with outstanding editors and fine prospects. I look forward with much anticipation to the forthcoming issues." - Noam Chomsky, MIT SYNTAX aims to unite related but often disjointedly represented areas of syntactic inquiry together in one publication. Within a single forum SYNTAX will accommodate both the explosive growth and increased specialization in the field of syntax. Contents of the August 1999 issue: Multiple Case Checking - Susana Bejar and Diane Massam Gambling on UG: The Application of Monte Carlo Computer Simulation to the Analysis of L2 Reflexes - Katrien N. Christie and Phillip Christie Two Types of Impersonal Sentences in Spanish: Locative and Dative Subjects - Olga Fern?ndex-Soriano Events and Economy of Coordination - Ljiljana Progovac Edited by Samuel Epstein and Suzanne Flynn ISSN: 1368-0005, 3 issues a year, Volume 2, 1999 SAMPLE COPIES AVAILABLE For subscription details or to order a sample copy of SYNTAX please contact Clare Carter (quoting reference 89BB466) at: ccarter at blackwellpublishers.co.uk ALSO AVAILABLE - LINGUISTICS ABSTRACT ONLINE LINGUISTICS ABSTRACTS ONLINE is designed to revolutionize research and teaching by giving immediate access via the World Wide Web to more than 15,000 abstracts from nearly 300 linguistics journals published since 1985. 30-Day Free Library Trial Available Now! We are currently offering libraries a free 30-day trial link to the LINGUISTICS ABSTRACTS ONLINE website. During this period we will provide full access to your users. 
To register for the trial please contact: e-help at blackwellpublishers.co.uk From Salinas17 at AOL.COM Fri Aug 13 00:48:09 1999 From: Salinas17 at AOL.COM (Steve Long) Date: Thu, 12 Aug 1999 20:48:09 EDT Subject: Language and emergence "mechanisms" Message-ID: To FUNKNET: I am an amateur at this and I hope you will all forgive my ignorance in this post. I am sending this to hopefully understand what I am misunderstanding. tgivon at OREGON.UOREGON.EDU wrote: <> Regarding both questions: Isn't there another way to approach both questions. Borrowing some of the "functionalism" from the science of biological evolution - 1. There must be per se "neuro-cognitive" consequences when "automated skills" (specifically, language skills) emerge. But, if we go by evolution theory, these underlying mechanisms should be to some degree incidental to the function of the "skills" being observed. The governing principle would be the survival value of language itself, and the mechanisms would be shaped by the demands of effective language functions, including the physical, biological and human cultural demands. By analogy, flight is a highly skilled behavior that can also be described as "automated" but not limited to one species. One of the "mechanisms" of flight is often wings. Be they insect wings, feathered wings, bat wing membranes or rigid aluminum airplane wings, all of them conform to the demands of the physical laws regarding flight - these dictate what mechanisms must "do" and therefore the possible structures they can take. Functionally effective language, just like functionally effective flight, must conform to the physical, biological, etc. dictates it finds in this world. E.g., if we define language as having anything to do with communication between individuals, it must in some way overcome the needs of human sensory organs - it must be audible, visuals, tactile, etc. 
On the "neuro-cognitive" level - the internal physical and biochemical mechanisms - the same rules would apply. What must a brain (and its relationship to the whole functioning of the human body) accomplish - what MUST it be structured like - to do "language" successfully?

One problem here is that although we can describe flight with some particularity (along with its "survival value") and do so with a high degree of confidence, we have trouble with "language." We can say some things with certainty, however, I think.

A primary function of language is that it is interpersonal. Which means that it must be predictable to some degree over time. I must have some certainty, when you speak a word, that you are referring to the same thing you referred to last time. Otherwise speech is unpredictable, incoherent, and does not function as language. This means that routine, rote, automation is also to some degree a necessary aspect of language - and an environmental demand on any biological mechanism of language.

Just as wings must conform to certain physical laws, a brain must satisfy certain physical laws to be able to perform the "predictability" function of language. It must be able to repeat whatever it does consistently. Simple enough; computers do that.

But just as in flight, the specific manner (insect wings, bird wings, bat wings, 747 wings) is somewhat incidental. Divergent structures can serve the same function. And it is also important to realize that these "mechanisms" did not necessarily arise to serve the function being observed (e.g., feathers and forelimbs did not evolve for flight; neither did aluminum technology). So the mechanism is not necessarily determinative; the outcome is.

And language is highly functional. If it were managed (in an alternative universe) by the structures of our feet (lots of them, let's say, and very intricate) instead of our brains, its survival value as a trait would still be a strong reason for it to emerge.
Also, to the extent that we are speaking of "grammar" as the morphology of language - subject, predicate; noun, verb; tense, etc. - aren't these first of all products of the world we live in? Isn't that why language is shaped the way it is - because of the reality it reflects? Isn't there a survival value in accurately discriminating between objects and actions, past and present, etc., when speaking to another? Doesn't the world we live in ultimately determine the basic structure of our language, in order to achieve effective, predictable common reference?

How would effective human language be different if one did not posit any inherited language trait? Wouldn't it be pretty much the same, because that is what communication regarding the real world demands? The reason airplane wings look like bird wings when soaring is that they both serve the same function and conform to the same environmental demands. But bird wings are an inherited (genetic) trait; airplane wings are not. Shouldn't we expect the same convergence with language - that the demands of communication will shape language, rather than some fortuitous internal structure determining what language is like?

2. With regard to "emergence," as well as I understand it, the same basic questions would be asked first. Any mechanism of genetic or individual emergence can be assumed to have functional reasons for being. The best analogy I can think of at the moment is the biological mechanisms of childbirth. What is the functional advantage of not giving birth to fully adult progeny? What is the advantage of the pupal stage in some organisms? All of these are as connected to the survival needs of the parent as to those of the progeny. That is why I would look for the demands that are put on any effective emergence mechanism by the needs of the group. Are there individual or interpersonal advantages in postponing language emergence?
Is language merely a by-product of early physical dependency in humans (a biological trait that fosters sociality in animals that may not otherwise be pre-wired to be social)? What has more survival value: a prewired, full-blown language-using human individual from the time of birth (like some instantly communicating insects), or one that goes through other, interpersonal processes before language emerges? Is language impossible without those interpersonal processes? And of course, does language actually "emerge," or is it merely awaiting proper physiological development?

The mechanisms of emergence might become more identifiable if these functions are considered. (Even if the experiment - the controlled removal, from birth, of all interpersonal aspects of language from a human subject - is one we cannot ethically perform. In other words, if a human could only communicate with himself from birth, without any advantage in communicating with another human, would "language" emerge? And what would it be like?)

Hope this makes some sense, and thanks,
Steve Long

From Sally.Rice at UALBERTA.CA Sun Aug 22 21:40:14 1999
From: Sally.Rice at UALBERTA.CA (Sally Rice)
Date: Sun, 22 Aug 1999 15:40:14 -0600
Subject: Chair position, University of Alberta
Message-ID:

Chair, Department of Linguistics
University of Alberta
Edmonton, Canada

Applications and nominations are invited for the position of Chair of the Department of Linguistics at the University of Alberta. This tenured appointment will be made at the rank of senior Associate or full Professor, effective 1 July 2000. The floor of the salary scale for the rank of Professor for the 1999/2000 academic year is $65,044. Candidates should have a distinguished record of scholarship and professional achievement in both experimental and theoretical linguistics, and a research specialization compatible with one of the continuing research strengths of the department.
The Faculty of Arts at the University of Alberta is engaged in an extensive process of renewal, and is committed to ensuring that the substantial number of hirings projected over the next several years will secure for the future the lively and productive intellectual environment on which the Faculty prides itself.

The Department of Linguistics has a strong commitment to empirical and experimental approaches to linguistic research. Department members are engaged in ongoing research projects, many grant-funded, in experimental phonetics, language acquisition, discourse processing, and the study of the phonological, morphological, and semantic aspects of the mental lexicon. A Chair is sought who has a philosophical commitment to experimental research in the service of theory, and who will act as a bridge-builder with allied fields in the cognitive science domain.

The Department offers both graduate (PhD and MSc) and undergraduate degrees, and values its reputation for excellence in teaching and graduate training. It provides an environment of leading-edge research and innovative teaching/learning at both the undergraduate and graduate levels.

The University of Alberta is committed to the principle of equity in employment. As an employer we welcome diversity in the workplace and encourage applications from all qualified women and men, including Aboriginal persons, persons with disabilities, and members of visible minorities.

Please send nominations or applications (including CVs and the names of three referees) by 1 November 1999 to:

Kenneth Norrie
Dean of Arts
University of Alberta
Edmonton, Alberta T6G 2E5
Canada